Friday, March 29, 2019

Artificial intelligence and magical thinking


Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic.”  Is this true?  That depends on what you mean by “indistinguishable from.”  The phrase could be given either an epistemological reading or a metaphysical one.  On the former reading, what the thesis is saying is that if a technology is sufficiently advanced, you would not be able to know from examining it that it is not magic, even though in fact it is not.  This is no doubt what Clarke himself meant, and it is plausible enough, if only because the word “sufficiently” makes it hard to falsify.  If there were some technology that almost seemed like magic but could be shown not to be on close inspection, we could always say “Ah, but that’s only because it wasn’t sufficiently advanced.”  So the thesis really just amounts to the claim that people can be fooled into thinking that something is magic if we’re clever enough.  Well, OK.  I don’t know how interesting that is, but it seems true enough.

A more interesting claim results if we give the thesis a metaphysical interpretation.  On this reading, Clarke is saying that a technology that is so advanced that it seems like magic really would be magic.  I doubt this is what Clarke meant, and though it is a more interesting claim, it is also clearly false.  No matter how convincing the sleight of hand of a Ricky Jay or a Michael Carbonaro is, we know it is not really magic, and we know that precisely because we know it was accomplished using technology.  That an effect results from a “sufficiently advanced technology” entails that it is not magic, not that it is magic.  If it were magic, it wouldn’t be the technology that is producing the effect.  The metaphysical reading seems plausible only if we make the verificationist assumption that if there is no way empirically to tell the difference between magic and technology, then there just would be no difference.  But verificationism is false.  (Cf. Aristotle’s Revenge, section 3.1)
 
Probably nobody really believes the stronger, metaphysical interpretation of Clarke’s thesis.  It might seem that people like Erich von Däniken believe it, insofar as they claim that the gods of ancient cultures were really extraterrestrials working marvels by way of advanced technology.  (Think of the way that the Norse gods are portrayed in Marvel movies as an alien race.)  But this idea doesn’t really amount to the claim that magic is real and that it can be explained as a kind of advanced technology.  Rather, it amounts to the claim that magic is not real, and that it only seemed to be real because ancient people were mistaking advanced technology for magic.

There are, however, many people who believe a claim that is analogous to, and as silly as, the metaphysical thesis that sufficiently advanced technology really is magic – namely the claim that a machine running a sufficiently advanced computer program really is intelligent.  It is not intelligent, and we know that it is not intelligent (or should know, if we are thinking clearly) precisely because we know that it is merely running a computer program.

Building a computer is precisely analogous to putting together a bit of magical sleight of hand.  It is a clever exercise in simulation, nothing more.  And the convincingness of the simulation is as completely irrelevant in the one case as it is in the other.  Saying “Gee, AI programs can do such amazing things.  Maybe it really is intelligence!” is like saying “Gee, Penn and Teller do such amazing things.  Maybe it really is magic!”  

The way computers work is by exploiting a parallelism between logical relationships on the one hand and causal connections on the other.  The fundamental examples of this are logic gates.  A logic gate is an input-output device constructed in such a way that its inputs and outputs reliably mirror the inputs and outputs of a logical function.  Take, for instance, the function and, which is such that when p is true and q is true, the conjunctive statement p and q will also be true (as logic students know from their study of truth tables).  An and-gate is a physical device constructed in such a way that when a state that can be interpreted as corresponding to p and a state that can be interpreted as corresponding to q are its inputs, it will cause as output a state that can be interpreted as corresponding to p and q.  Other logic gates can be constructed to parallel other logical functions, such as or and not.  
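The point can be made concrete with a minimal sketch (the 5-volt and 0-volt levels and the 2.5-volt threshold below are illustrative conventions, not features of any particular hardware).  The “gate” merely maps numbers to numbers; a separate, conventional step of interpretation is what connects its behavior to the truth table for and:

```python
# A "physical" AND gate modeled as a device whose output voltage is high
# only when both input voltages are high. By itself the device just maps
# numbers to numbers; calling its output "p and q" is our interpretation.

HIGH, LOW = 5.0, 0.0  # illustrative voltage levels (a convention we choose)

def and_gate(v1: float, v2: float) -> float:
    """Output HIGH only if both inputs are above a threshold."""
    threshold = 2.5
    return HIGH if (v1 > threshold and v2 > threshold) else LOW

# The interpretation layer: WE decide that HIGH counts as True, LOW as False.
def interpret(voltage: float) -> bool:
    return voltage > 2.5

# Under that interpretation, the gate's behavior mirrors the truth table for "and":
for p in (True, False):
    for q in (True, False):
        v_out = and_gate(HIGH if p else LOW, HIGH if q else LOW)
        assert interpret(v_out) == (p and q)
```

Notice that nothing in `and_gate` itself mentions truth or conjunction; the correspondence to the logical function lives entirely in the `interpret` convention.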

In an electronic computer, the inputs and outputs of a logic gate will take the form of electric currents, but in other sorts of machine they can take other forms, such as the positions of valves in a hydraulic computer, or the positions of sticks in a computer constructed out of Tinkertoy pieces.  There is nothing essentially electronic about a computer in the modern sense.  It’s just that an electronic computer is going to be vastly speedier and more efficient than a computer constructed out of materials of these other kinds.  In any case, all the complex activity that takes place in a computer of any sort will be an aggregate of the activities of basic elements such as logic gates.
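The aggregation point can be sketched by composing gate functions into a half-adder, a standard textbook construction (the Python functions here simply stand in for physical gates, whether electronic, hydraulic, or Tinkertoy):

```python
# Basic gates as pure functions over bits (0/1) -- the interpretation of
# physical states as bits is assumed to be already in place.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # Built by combining the basic gates, as in real hardware.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers; return (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

# 1 + 1 = binary 10: sum bit 0, carry bit 1
assert half_adder(1, 1) == (0, 1)
assert half_adder(1, 0) == (1, 0)
```

Chaining such half-adders (plus an OR for the carries) yields adders for numbers of any width, which illustrates how all the complex activity of a computer is an aggregate of the activities of these basic elements.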

The flow of current or lack thereof (or, alternatively, the position of a valve, or of a stick, or of whatever the basic parts are out of which some computer is constructed) is conventionally interpreted as a bit (either a 1 or a 0) rather than as a propositional variable or a truth value, and the sequences of 1’s and 0’s correlated with the aggregate of the basic elements (again, such as logic gates) are interpreted as a fundamental level of “information” into which other sorts of information can be coded.  
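That this interpretation is conventional is easy to see in code: one and the same sequence of bits can be read as an integer, as a character, or as nothing in particular, depending on the coding scheme we choose to apply (the sketch uses only Python built-ins):

```python
# One and the same sequence of bits, under different conventions of
# interpretation, "is" an integer, a character, or nothing at all.

bits = "01000001"

# Convention 1: read the bits as an unsigned binary integer.
as_int = int(bits, 2)        # 65

# Convention 2: read the same bits as an ASCII character code.
as_char = chr(int(bits, 2))  # 'A'

print(as_int, as_char)       # prints: 65 A
```

Nothing about the physical states themselves settles which (if either) reading is correct; the bits carry “information” only relative to a coding scheme we have adopted.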

The thing to emphasize is that the computer is not in and of itself carrying out logical operations, processing information, or doing anything else that might be thought a mark of genuine intelligence – any more than a piece of scratch paper on which you’ve written some logical symbols is carrying out logical operations, processing information, or the like.  Considered by themselves and apart from the conventions and intentions of language users, logical symbols on a piece of paper are just a bunch of meaningless ink marks.  Considered by themselves and apart from the intentions of the designers, a Tinkertoy computer is just a bunch of sticks moving around, as stupidly as if they had been tossed down the stairs.  And in exactly the same way, considered by themselves and apart from the intentions of the designers, the electrical currents in an electronic computer are just as devoid of intelligence or meaning as the current flowing through the wires of your toaster or hair dryer.  There is no intelligence there at all.  The intelligence is all in the designers and users of the computer, just as it is all in the person who wrote the logical symbols on the piece of paper rather than in the paper itself.

Indeed, that’s the whole point of a computer in the modern sense.  It’s a way of using utterly unintelligent physical objects and processes to mimic various intelligent activities – just as various utterly non-magical objects and techniques provide an entertainer with a way to mimic magic.  A computer’s mimicry crucially depends on our interpreting what it’s doing in certain ways.  Such-and-such ink marks count as words with meanings only insofar as we have a convention of interpreting them that way; and in exactly the same way, such-and-such electrical circuits count as logic gates, information processors, etc. only insofar as we have a convention of interpreting them that way.  Their status as simulations of various intelligent operations is entirely conventional or observer-relative.  You might say that it is a kind of make-believe, just as the “magic” that an entertainer performs is a kind of make-believe.

Siri and Alexa are not really intelligent, and wouldn’t be no matter how convincing you made them, just as Call of Duty is not really warfare, and wouldn’t be real warfare no matter how realistic you made the CGI.  Computer simulations of intelligent behavior are like computer simulations of war, the weather, the stock market, etc. – simulations, and nothing more.  And we know that for the same reason we know that magic is a mere simulation – namely that we ourselves made the simulation.  

Let’s now consider the various objections that are no doubt brewing in the reader’s mind:

1. “Are you saying that intelligence is a kind of magic?”

No, of course not.  That’s not the point of the analogy.  The point of the analogy is that a simulation of X is not the same as X, and that we should be especially aware of this when we are ourselves the makers of the simulation.  Magic is a particularly good example precisely because no serious person believes in it.  We know there is no such thing as magic and thus are not tempted to mistake the simulation for the real McCoy.  Intelligence, by contrast, is real but also philosophically puzzling, and so in our search for understanding of it we are more prone to commit the fallacy of mistaking simulation for reality where it is concerned.

It is also irrelevant, by the way, whether intelligence is material or immaterial.  The debate between dualism and materialism can be put to one side for present purposes.  Even if human intelligence is entirely explicable in materialist terms – I don’t think it is, but let that pass – the point is that the way it is so explicable cannot be in terms of the idea that the brain is a kind of computer running a program.

2. “But neurons do what logic gates do.  So we know that computers can be intelligent, because they are essentially doing what our brains are doing.”

No, they aren’t.  True, there are causal relations between neurons that are vaguely analogous to the causal relations holding between logic gates and other elements of an electronic computer.  But that is where the similarity ends, and it is a similarity that is far less significant than the differences between the cases.  Logic gates are designed by electrical engineers in a way that will make them suitable for interpretation as implementing logical functions.  No one is doing anything like that with neurons.  In particular, no one is assigning an interpretation as implementing a logical function, or any other interpretation for that matter, to neurons.  (The point is simple and obvious, but commonly overlooked precisely because it is so obvious, like the tip of your nose that you never notice precisely because it is right in front of you.)

That brings us to a second difference, which is that a computer and the logic gates and other elements out of which it is constructed are artifacts, whereas a brain (or, more precisely, the organism of which the brain is an organ) is a substance, in the Aristotelian sense.  A substance has irreducible properties and causal powers, i.e. causal powers that are more than just the sum of the properties and powers of its parts.  Artifacts are not like that.  In an artifact, the properties and causal powers of the whole are reducible to the aggregate of the properties and causal powers of the parts together with the intentions of the designer and users of the artifact.  (Cf. Scholastic Metaphysics, section 3.1.2)

Judging the brain to be a computer on the basis of the analogy between neurons and logic gates is like saying that the “face on Mars” must really be a sculpture even if it came about through natural processes, on the grounds that it looks (sort of) like a sculpture would look.  In fact, anything that came about through natural processes cannot be a sculpture, whatever it looks like, because a sculpture is a kind of artifact, and an artifact is precisely the opposite of something that comes about through natural processes.  And in exactly the same way, precisely because brains and the neurons of which they are made come about by natural processes, they are not artifacts, and thus are not computers, logic gates, and the like (since those things are artifacts).

I can already hear some readers thinking: “But maybe God assigns to neurons the interpretation of implementing a logical function, so that the brain is a kind of computer that God is using, and our thoughts are the result.”  This is completely muddleheaded.  For one thing, this would entail that we are not really thinking at all – any more than a piece of paper or an abacus is thinking when you use it to work out calculations – but that only God is thinking, and somehow using our brains as an aid in doing that, just as we use paper, abacuses, etc. as aids to thinking.  (Why would God need such an aid to thinking?)  For another thing, it supposes that the brain is a kind of artifact, and it simply isn’t that, whether or not God creates it.  (As I have complained many, many times, it is a serious theological and metaphysical error to model divine creation on the making of artifacts.  Cf., for example, my essay “Between Aristotle and William Paley: Aquinas's Fifth Way,” in Neo-Scholastic Essays.)

It is easier to see the fallacy here if you think of a Tinkertoy computer or a hydraulic computer instead of an electronic computer.  It is obvious that the movements of sticks count as the implementation of logical functions, information processing, etc. only insofar as the designer has assigned such interpretations to the movements, and that apart from this interpretation they would be nothing more than meaningless movements.  No one is doing anything like that with the brain.  No one is saying “Let’s count this kind of neural process as an and-gate, that one as an or-gate, etc.” the way they are with the Tinkertoy sticks.  The reason people fall for the fallacy in the case of electronic computers is that they see an analogy between the computer’s electrical activity and the brain’s electrochemical activity and think it lends plausibility to the idea that the brain is a computer.  In reality the similarity is no more relevant than the fact that you can make a computer that weighs about as much as the brain, or one that is the same color as a brain.

3. “But evolution can design computers.  That’s what the brain is – a computer designed by natural selection.”

This is like saying that evolution can make sculptures or that natural selection can write English prose.  Sculptures and English prose are artifacts, which are the products of intelligent creatures.  Natural selection is not intelligent – that’s the whole point of the idea of natural selection – and thus it cannot make artifacts.  Even if it somehow produced something that kinda-sorta looked like a sculpture or an English word, it wouldn’t actually be one, any more than the “face on Mars” is really a sculpture.  And by the same token, it cannot make a computer, since a computer is a kind of artifact.  Tarting up nonsense with the magic word “evolution” doesn’t somehow make it scientific or anything other than nonsense.

Moreover, even if the suggestion that “evolution designs computers” weren’t nonsense, it would, in the current context, be question-begging.  It would assume that it makes sense to describe the product of a natural process as a computer, and thus presupposes that what I’m saying is wrong without showing that it is.

4. “But you’re relying on intuition, and intuitions are a weak basis for metaphysical claims.”

No, I’m not relying on intuition at all.  (In fact, I hate arguments that appeal to intuitions.)  When I point out that ink scribblings have no intrinsic status as words but get that status as a result of human conventions, or that the “face on Mars” cannot be an artifact if it came about through natural processes, I am not appealing to intuition.  I am not saying “Gee, it just seems intuitively like a bunch of ink scribblings have no intrinsic meaning etc.”  Rather, I’m merely calling attention to how words actually come into existence, how artifacts actually come about, etc.  Similarly, when I say that sticks and and-gates and the like have by themselves no status as the implementation of logical functions, etc., I’m not appealing to intuitions but merely calling attention to how Tinkertoy sticks, and-gates, etc. actually get that status – namely, from the conventions and intentions of the designers and users of computers.

5. “Oh, this is just John Searle’s Chinese Room argument.  But that doesn’t work because [insert fallacious response to Searle here].”

No, this is not Searle’s Chinese Room argument.  To be sure, that argument is an excellent argument, and in my view none of the usual responses to it is any good.  But again, I am not giving a variation of the Chinese Room argument.

However, I am saying something that is related to another argument Searle gave about a decade after he first published the Chinese Room argument – an argument presented in his article “Is the Brain a Digital Computer?” and in his book The Rediscovery of the Mind.  That is an argument to the effect that computation is an observer-relative feature of physical processes rather than a feature intrinsic to them, so that the brain cannot be said to be in any interesting sense a digital computer.  It is a computer only in the trivial sense that anything can be said to be a computer, insofar as we could, if we wanted to, assign to anything some interpretation of it as carrying out a computation.

Still, my position differs from Searle’s in some important ways.  I have analyzed Searle’s argument at length in my article “From Aristotle to John Searle and Back Again: Formal Causes, Teleology, and Computation in Nature.”  As I note in the article, Searle has in common with his materialist critics an essentially “mechanistic” or post-Aristotelian and non-teleological conception of the nature of matter.  And given that conception of matter, none of the responses to Searle has any hope of succeeding.  However, if we return to an Aristotelian teleological conception of matter, then we can coherently say that there is something analogous to computation in nature (though that doesn’t entail that the right way to think of it would be on the model of Turing machines, binary code, and all the other apparatus of modern computational theory). 

That still wouldn’t salvage the claim that computers in the modern sense can be intelligent, however, because they are still mere artifacts, and the points made above would still apply.  (It also wouldn’t salvage the claim that human intelligence amounts to computation in the brain, since – as, again, I would argue – human intelligence cannot be reduced to purely material activity even given an Aristotelian conception of matter.  But as I have said, that is not essential to the present point.)

6. “How is positing ectoplasm any better than explaining intelligence as a kind of computation?”

Who said anything about ectoplasm?  Not me, since I don’t believe in such a thing.  Pointing out that words and sculptures are artifacts and thus cannot be the products of natural processes doesn’t commit someone to the existence of ectoplasm (whatever that is).  It doesn’t entail that one must think that words get their meaning, or sculptures get their status as representations, from the infusion of some weird kind of substance.  Indeed, it doesn’t commit one to any sort of metaphysical view at all, weird or otherwise.  Similarly, pointing out that computers are a kind of artifact – so that they don’t in and of themselves count as carrying out computations or as doing information processing, and so that the brain is not a kind of computer – does not commit one to any positive account, weird or otherwise, about how the brain works or how intelligence works.  Again, you don’t need to be a dualist to see the point.  

7. “But maybe computers, even the Tinkertoy computer, really are thinking even if it seems that they are not.”

Right, and maybe there are invisible, intangible, silent, odorless elves dancing in front of you right now even if it seems there are not.  Maybe Penn and Teller, Ricky Jay, and Michael Carbonaro really are doing magic, just like Dr. Strange, and only pretending that it is just sleight of hand.

But don’t bet on it, and also don’t bet on the idea that computers are really thinking.  And even if they were, it would not be because of the way they are engineered, the programs they are running, etc. – just as, if Penn and Teller were doing real magic, it would not be because of their skill at sleight of hand.  For sleight of hand is precisely mere simulation rather than real magic, and to be constructed out of logic gates and the like is precisely merely to simulate intelligence rather than really to have it.  If computers really are thinking, that would be because they’ve somehow got brains hidden somewhere (if you’re a materialist) or immaterial souls hidden somewhere (if you’re a dualist), and not because of anything having to do with their being computers.

8. “You just feel threatened by the idea that computers are intelligent!  You just want to believe that the human mind is somehow special!”

And you’re really grasping at straws at this point.  Even if this accusation were true, it would be entirely irrelevant to the points I’ve been making.  To suppose that what motivates a person to make a claim or give an argument is relevant to the truth of the claim or the cogency of the argument is to commit an ad hominem fallacy of poisoning the well.  You might as well say that those who believe the “face on Mars” is not a real sculpture just feel threatened by the idea that it is, or that those who point out that ink scribbles have only conventional rather than intrinsic meaning just feel threatened by the idea that they have intrinsic meaning.  

In any case, the accusation isn’t true.  As I keep pointing out, even someone who rejects dualism and thinks the mind is just one natural feature of the universe among others could accept the points I’ve been making here.  (Searle would be an example.)

9. “Then why do so many people, including even many scientists and philosophers, say that computers can be intelligent?”

Because they are human beings, and a human being is as susceptible to fallacious thinking as the next guy.  And there are several fallacies one can easily fall into in this context.

One of them I’ve already indicated.  The electrical activity in a modern computer is analogous to the electrochemical activity in the brain.  Hence people can lapse into committing a fallacy of false analogy, concluding that the brain and a computer must be analogous in other respects too.  This is abetted by a fallacy of equivocation.  We often use intentional idioms when speaking about computers – we say that the computer knows such-and-such, or is figuring out the solution to a problem, or has such-and-such in its memory, or what have you.  These are all mere façons de parler, originating from the fact that computers were constructed precisely to mimic such intelligent features.  It is exactly analogous to the way that we casually speak of a statue as having eyes, a nose, a mouth, etc., because the statue was sculpted precisely to have features that look like eyes, a nose, a mouth, etc.  But a statue doesn’t literally have eyes, a nose, or a mouth, and a computer doesn’t literally know, figure out, or remember anything.  When we use the same terms to describe what we do and what the computer does, we can, if we are not careful, fallaciously conclude that it is doing what we do.

There is also a kind of confirmation bias at work here.  People who claim that computers can think are typically materialists, and it can be very tempting to see the undeniably impressive advances made in computer hardware and software as vindication of the claims of materialism.  There is also sometimes a kind of circular reasoning at work.   Materialism is taken to support the computational model of intelligence, and the computational model of intelligence is taken to support materialism.

Then there is the fact that journalists and pop culture have spread the idea of artificial intelligence and made it so familiar that its legitimacy has come widely to be taken for granted.  The average man on the street doesn’t really know much about how the brain works or how computers work, but is impressed by the latter and notes that science fiction and even many scientists suppose both that a thinking computer might one day be constructed, and that the brain is itself a kind of computer.  “Look at Siri and Alexa!  Look at all those pop science books about artificial intelligence on the shelves at Barnes and Noble!  Look at all those thinking machines on TV and in the movies – HAL 9000, Data from Star Trek, the kid from the Spielberg movie AI, Ultron and the Vision from the Avengers movies, etc.  There must be something to it!”

Nor can it be denied that the idea of artificial intelligence is cool and fun.  People want it to be true for that reason too, as well as because of their materialist biases.  (Notice that I am not saying that these motivations show that the idea is false.  That would be to commit a fallacy of poisoning the well.  The idea is false for other reasons, namely the ones given above.  The point is merely that these motivations help explain why people accept an idea that can be fairly easily refuted when one thinks carefully about it.)

None of this is to deny that much of what goes under the name of “artificial intelligence” is technologically very impressive, and promises to become only more impressive.  Nor is it to take a stand one way or another on the current controversy about the potential dangers of AI as it gets more sophisticated.  AI might end up being dangerous for the same sorts of reasons that other technologies can be dangerous.  For example, we might become too dependent on it, or it might become too complex to control, or there might be glitches that lead to horrible accidents, and so forth.

However, it will not become dangerous by virtue of becoming literally more intelligent than us, because it is not literally intelligent at all.  Nor are any of the other odd things sometimes claimed by those who’ve gotten carried away with the idea of thinking machines – such as that we might achieve immortality by virtue of our minds being downloaded onto a computer, or that the universe might really be a computer simulation – any more plausible.  All of this is sheer nonsense.  

You might as well say that our universe is really just a pattern of movements in a vast assemblage of Tinkertoy sticks, or that your mind might persist after your death as a set of movements in a bunch of Tinkertoy sticks.  Movements in Tinkertoy sticks, however complex, are in and of themselves nothing more than that – movements.  That’s all.  They “process information” or carry out “computations” only in the sense that we can decide to interpret certain of the patterns that way, just as we can decide to count certain ink marks as words.  And the idea is no more plausible when we substitute electronic computers for Tinkertoy computers.

Further reading:


Accept no imitations [on the Turing test]

Kripke, Ross, and the Immaterial Aspects of Thought [a 2013 American Catholic Philosophical Quarterly article, reprinted in Neo-Scholastic Essays]

Kurzweil’s phantasms [a 2013 book review from First Things]

192 comments:

  1. The argument here amounts to "we know they don't have formal causes, since they have material causes."

    It's as wrong made by you as it is made by materialists.

    1. ?? You must have been reading some other article, since I didn't say or imply any such thing.

    2. Material things have both formal causes and material causes at the same time. So entirelyuseless's argument appears entirely useless.

    3. When they say magic and technology are indistinguishable from each other, it's like saying aliens and gods are indistinguishable from each other, which they are. It's all just a matter of semantics, they are saying. By that they mean that in the universe, magic is a technology in itself. On Earth, in our perception, we see something disappear and reappear and we call it magic; on a universal technological scale, they have quantum teleportation devices that we can't fathom even creating, so we call it magic, but it's technology. It's just like science and religion arguing that aliens and gods are different and that there's no way they could possibly be the same thing, even though the two descriptions are a hundred percent interchangeable. I don't know of any religious scripture that claimed the gods crawled up out of the dirt or out of the ocean; almost all of the ones I've ever heard of came from the sky or the heavens, which is outer space. Even if some people think that heaven is actually a different dimension here on Earth, or in the sky of Earth but not the universe itself, multi-dimensional beings are considered alien. The Bible clearly describes the spiritual realms as multi-dimensional realms. Science believes in interdimensional light beings, which are invisible to the naked eye but can be detected by certain spectrometers, yet it discredits religion for believing in an invisible God, and vice versa. It blows my mind.

  2. When you say AI are you talking about computers in general or specifically neural networks that can be used to generate a neural network model based on training data?

    1. A.I. as in Alien Intelligence. Which could also be artificial intelligence, because no one has ever denied or proven that an alien couldn't be an artificial being. And there's MDI, multi-dimensional intelligence; there's L.I. for linear intelligence; there's DPI, dual polarity intelligence; I could go on and on. The whole gist of the article, though, is explained very well in the movie Thor: on Earth we consider science and magic separate, but in the universe they're one and the same. If the technology is too advanced for the mind to conceive, it must be magic; but if the magic isn't good enough, then you can see right through the illusion of it to the technology behind it. So I think Michael Carbonaro is a multi-dimensional alien using quantum entanglement in his magic tricks, because some of that s*** is just unexplainable.

  3. Dr. Feser,

    Perhaps the point can be made even without computers. Consider a ball thrown in the air -- it describes a curve which (ignoring special and general relativity, air resistance and so on) instantiates Newtonian laws of motion perfectly. But no one would argue that the ball _understands_ said laws. While its trajectory can be plotted accurately using vector algebra, the ball itself is not calculating vector algebra. At most one can say that the ball simulates the performance of vector algebra by rational creatures, in that we could observe its motion as an instance where the path and velocity of the ball correspond to the purely formal equations of vector algebra.

    As for AI being "cool" and "fun", I'm sure we can say that to all those people in the Middle East massacred by American drones.

    1. An even simpler way of illustrating this would be to refer to counting things on your fingers. Obviously we use our fingers to assist our calculations; the fingers do not actually calculate anything by themselves. An abacus works the same way, but even though it is a device that has existed for centuries, no one in history ever thought that the abacus was actually an intelligent mind, because it was always so obvious that people were using it as an aid to their own calculations.

    2. @Jonathan Lewis

      I sometimes wonder if human intelligence has declined over the centuries.

    3. I'd say that it isn't human intelligence that's declining, but rather that the availability of so much information has led to people thinking less and less. Why dwell on a question when you can Google the answer in seconds?

  4. Real Intelligence is tacit or intrinsically wordless living existence.

    By contrast mind is the first form of artificial intelligence. Mind is an interior projection of a language-program that, in its imaginative elaboration of itself, conceives of purposes and ideas (in the realm of illusion) for which there are no corresponding physical or cognitive data.

    Human beings are all living in a "virtual world" of mind. Human beings are, characteristically, self-identified with a "robot", an artificial intelligence.

  5. So is magic actually metaphysically possible?


    As the article says, it's one thing to produce an effect through the material means of technology, but that by definition wouldn't be magic.

    To distinguish magic from technology then, it needs to be something completely different from technology. And not just technology, but science as well, since science is also not thought of as magical in any way at all. And like technology, to produce a scientific explanation of an effect is to essentially show it's not magic. So magic cannot be anything scientific or technological.

    Or rather, magic cannot be scientific or technological because both explain things via material causation and intermediaries. Magic is not actually a material form of causation, then, since it affects material reality in a way completely different from science and technology, which are intrinsically material. It would have to be something immaterial.

    So if magic and magical powers actually existed, and could affect the world in an immaterial way (no chemical reactions or particles being moved around as in the case of science and technology), how would A-T metaphysics explain it?

    For example, would the form of the magic power (magic words, certain actions, other ways the power is expressed and affects reality) be the explanation for how it can act immaterially on the world?



    And if A-T can show that magic is metaphysically possible, that would be another advantage it has over naturalism - it can accommodate the existence of magic, something which seems intuitively possible to most people because magic and its causality are easily graspable.

    1. It's highly unlikely that magic is possible. There are apparently two "laws" of magic -- that of similarity and that of contagion. The former amounts to the belief that the practitioner can bring about an effect by doing something which simulates the effect -- for example, mutilating a voodoo doll which looks very similar to a real person will result in the latter undergoing the same mutilations. The second law amounts to the belief that if two objects were in contact, then the influence of each one on the other remains even if the contact later ceases. Think of the same voodoo doll, but containing within it a lock of hair belonging to a person one wants to harm.

      It should be obvious that material objects don't work this way, because they cannot transcend themselves. Nor can any amount of visualizing or whatever on the part of human beings bridge the chasm between similar objects or objects which were once adjacent or contiguous.

      Another form of magic involves bridging the aforementioned chasm with the aid of various gods/angels/demons. These entities are supposed to have secret names by which they may be invoked and bound to one's will. The idea is that if you recite the true name of the god in the right way, he (it?) will be bound to you by the power of said true name. But it is again unlikely (if not impossible) that a succession of sounds, which is not even a substance, can have the power to transcend the physical realm (and therefore, itself) so as to bind these spiritual entities. Nor can any such sound correspond to the form of any of these entities, given that they are immaterial whilst the sound itself is entirely material.

      A third form would involve propitiating said entities through prayers and offerings, and by their aid, the practitioner might effect changes in the physical world. But this is not magic, rather, it is (for lack of a better term) religion. Specifically, idolatry. Or to be even more specific, it is demonolatry.

    2. @Sri,


      Quote:"The former amounts to the belief that the practitioner can bring about an effect by doing something which simulates the effect -- for example, mutilating a voodoo doll which looks very similar to a real person will result in the latter undergoing the same mutilations."


      That wasn't what I was thinking of exactly. The magic in question is more like the magic in fantasy novels (LoTR, Harry Potter etc.) and TV shows.

      For example, saying a certain combination of words with a special object in your hand results in the object releasing lightning, or water, or some other effect, and it not being a chemical reaction or movement of particles that accounts for this.

      Or, say, a magic crystal that can generate light but (again) without a material reaction accounting for it. Or objects that can fly in the air magically, which means without
      generating some physical force or other that accounts for the flight.



      Quote:"The second law amounts to the belief that if two objects were in contact, then the influence of each one on the other remains even if the contact later ceases. "


      Isn't this basically spooky action at a distance, or quantum entanglement? We already know from QM that two entangled particles can influence each other instantly no matter the distance. (Some would even say that instantaneous interaction can't explain the correlations, and thus that causality has been disproven at the quantum level; here's an article: http://advances.sciencemag.org/content/2/8/e1600162) So either this is proof of magic-like causality in the world - meaning the voodoo doll example isn't so unrealistic - or it is something else physical, and thus not exactly magic either.


      Quote:"It should be obvious that material objects don't work this way, because they cannot transcend themselves."


      Well, it's not the material objects' matter that would actualise these effects, but their form. And the form would in fact be an immaterial aspect of matter that could potentially affect reality.

      And anyways, the fact that we can easily understand and conceive of immaterial causality following from material things or tokens in itself at least speaks in favor of the metaphysical possibility of magic. It's intelligible enough to be grasped by the intellect and accepted.


      Quote:"A third form would involve propitiating said entities through prayers and offerings, and by their aid, the practitioner might effect changes in the physical world."


      The name for this would be divination, actually.

    3. In material objects, it is neither the form nor the matter which acts, but the object itself. Action is the outflow of the object's act-of-existence, which is neither the matter nor the form. One can indeed say that the form acts, but only insofar as it does so by informing the matter. Obviously material objects cannot act in non-local ways. The paper you cited only makes this point for me, since it rejects direct non-local causality from A to B and vice versa.

      I would deny that it is possible to coherently conceive material objects having immaterial effects, since this would mean that said objects would have an immaterial aspect as well, which means they are no longer purely material objects, but have some sort of "soul" which survives the corruption of the material parts of the object. But then, this would mean that said object, by virtue of having teleology immaterially, would thus be intelligent. Surely this is absurd, since there is nothing intelligent about a wand or a crystal or a ring. These things are not even substances, therefore they cannot even have a substantial form.

    4. @Sri,



      Quote:"In material objects, it is neither the form nor the matter which acts, but the object itself. Action is the outflow of the object's act-of-existence, which is neither the matter nor the form."


      In that case, the form of the thing would be what supplies it with its natural powers and potencies. And in the case of magic, the object would simply have the power to affect reality in an immaterial way, without chemistry or physics, a power given to it by its form.




      Quote:"One can indeed say that the form acts, but only insofar as it does so by informing the matter."


      Well, the form isn't an efficient cause, but a formal cause, so it can't bring about change in the same way an efficient cause can. That may seem like an objection against the possibility of magic conceived of as formal causality, but in the case of immaterial intellects, which are pure forms, they can in fact bring about change in the world efficiently, so that may be a possible model for how material things may bring about effects in an immaterial way.




      Quote:" The paper you cited only makes this point for me, since it rejects direct non-local causality from A to B and vice versa. "


      Understood. But the paper also claims that this means ordinary notions of causality as such cannot explain these interactions, if even instantaneous interactions have been ruled out, and so we should completely change the way we view causality.


      Quote:"I would deny that it is possible to coherently conceive material objects having immaterial effects, since this would mean that said objects would have an immaterial aspect as well, which means they are no longer purely material objects, but have some sort of "soul" which survives the corruption of the material parts of the object."


      Not necessarily. When we understand magic as presented in fantasy & pop-culture, we can easily see that the idea of certain combinations of things having immaterial effects is intelligible.

      In that case, the material objects already have an immaterial aspect (the form), and all that magic / immaterial causality requires is that they have the power to cause things in the world in an immaterial way as well. The form could perhaps supply them with the ability to do this.

      And it doesn't even solely apply to individual material substances. It could apply to combinations of actions and environments; say that when one picks up a stick, enters a specific house, claps one's hands a certain number of times, and says certain specific things, an effect in the world occurs immaterially.

      The power to cause that effect immaterially belongs to the combination of actions itself, but not to the particular objects and actions that form the combination. The form of the combination (all the acts and things needed to make it) as a whole has the power to cause the change, and the various objects & actions in other environments don't cause the effect.



      Quote:"These things are not even substances, therefore they cannot even have a substantial form."

      We could conceive of crystal-shaped or wand-shaped substances that, like water, amass in such a shape and have accidental qualities. But even if the crystal and wand weren't true substances by themselves, they would still be made out of true substances - the crystal out of underlying minerals, the wand out of dead wood etc. And the substantial forms of those underlying substances could be the source of the magical power.

    5. I don't really want to get sidetracked on quantum mechanics here, but I'll just say this much:

      The paper defines causality not as metaphysicians define it, but in terms of statistical relationships. What the paper concludes is that the correlations between interventions on A and the observations of B are not strongly correlated enough to say that A influences B. Further, A doesn't need to act on B in order to make it the case that B's state becomes fixed in a particular way (c.f. Gil Sanders' paper on QM and Aristotelian metaphysics regarding this). In sum: no, the paper hasn't refuted causality in the quantum realm because its definition of causality is too narrow.

      Angels are _subsistent_ forms, they are forms which exist without matter and it is because of this that they can act without matter. But since the forms of material objects cannot exist without matter, they can also not act without matter. Hence the possibility of a material object acting immaterially is precluded.

      As for wands and so on, even if they are substances, they cannot act immaterially for the reason I gave. But if you want to think it is possible for dead wood or gemstones or powdered mercury to have magical powers, you are entitled to that, there is no arguing with absurdity of this sort.

    6. @JoeD,

      Forgive me if the last sentence came across as too combative, that wasn't my intention. But I simply don't see how dead wood or rocks or animal parts could have immaterial properties.

    7. @Sri,



      Quote:"In sum: no, the paper hasn't refuted causality in the quantum realm because its definition of causality is too narrow."


      Understood. Thanks for the analysis!



      Quote:"But since the forms of material objects cannot exist without matter, they can also not act without matter."


      This is true in a trivial sense, insofar as the form of the material thing would be dependent on the existence of matter in order to exist and act. But this doesn't mean that it cannot act immaterially, only that it depends on the matter to be able to exist and act immaterially.



      Quote:"As for wands and so on, even if they are substances, they cannot act immaterially for the reason I gave."


      It's relatively easy to conceive of a material thing acting in an immaterial way. After all, a large amount of fiction and entertainment wouldn't exist if that weren't the case.

      Now, if it were metaphysically impossible for a material thing to have immaterial powers, then why would it be so easy to conceive and understand such a concept?



      Quote:"Forgive me if the last sentence came across as too combative, that wasn't my intention."


      Eh, don't worry about that! Clarification taken, so it's understandable.

    8. To any Christian or Jew or Muslim it is obvious that there IS in fact a sort of action that is like what we mean by "magic", namely that of miracles. In a miracle, God causes an act which is not possible by reason of natural causes alone.

      Now even if we restrict the term "magic" to those things that humans can bring about at will, one might mistake some miracles as being magic, because some miracles occur with men seemingly "at the helm", causing them at command. Like with Elijah calling forth fire on the sacrifice against the priests of Baal. But this is somewhat misleading: in all cases, man in the event is not the primary cause that sets the event in motion, but God, and when men are involved as a cause at all, it is only as a secondary cause, I think. That is, it is God who causes the man to want the miracle and (particularly) to believe that calling on God for this miracle will result in bringing something forth. Nobody of his own production imagines that "I expect God will produce this sign precisely because I ask it of him" and then gets such results. It is, rather, that God inspires him to believe God will do it that he calls for it.

      (I exclude from this the case of a Catholic priest in the Mass causing the bread and wine to become the Body and Blood of Jesus Christ, both because clearly he is able to do so only by reason of participating in Christ's own priesthood (so it is not his own action per se), and also because in a technical sense this is not a miracle because the change is not evident to the senses and can only be apprehended with faith. [I would exclude the miracle of Lanciano from this exclusion, on the grounds that it IS evident to the senses, but point out that, obviously, the priest did NOT EXPECT OR INTEND the result that came about.])

      Calling on angels for supernatural aid is no better, because (a) angels don't produce results unless God wills it, and (b) since angels are above men in the order of created beings, they are not bound to obey men and do not produce results merely because a man has commanded it.

      Demons, being like angels in their natures, are unlike in their motivations but like in their not being bound to obey men. Nothing in the nature of demons makes them subservient to men, and if a man were to sign a contract to get such aid, he would soon find that he was a fool to try it, because demons are "liars and murderers". They might SEEM to obey him and thus allow him to produce magical results, but it would be for their purposes rather than his, and he would find that they do not "obey" any of his commands that do not serve their ends.

  6.
    1. Defining magic is difficult.

      Most of what is depicted as "magic" in fantasy stories is depicted as some kind of technique or technology for interacting with the world (but one which doesn't actually work in our world).

      The basic gist is:
      - you must have certain material components (eye of newt, hair of the person to be cursed, or whatever);
      - you must perform rituals (usually both Verbal and Somatic; i.e., words and actions);
      ...and it then just works. (Although, in some formulations, it only works if you have either a certain inborn talent; or else some kind of special authority granted by a being which has that inborn talent.)

      Since the verbal and somatic rituals are the kind of thing which, if they really worked, would spread far and wide; and since the material components aren't usually too difficult to obtain; it follows that magic would be practiced by millions of people everywhere if they could do it. But it visibly isn't, which makes magic an implausible premise in any fiction story set in our world.

      I suspect that the talent/authority requirement serves to restore plausibility to the idea of magic by explaining why just everyone can't frustratedly blast nearby cars with sorcerous fireballs when stuck in traffic.

      Anyhow, it's important to note that magic, so described, is really a sort of science/technology (albeit one that doesn't actually work in our world).

      This makes the Arthur C. Clarke quote ("any sufficiently advanced technology is indistinguishable from magic") somewhat recursive: "Any sufficiently advanced technology is indistinguishable from a kind of technology that, given what you know about physics, isn't supposed to work, but somehow does."

      Boil this down even more, and you get: "What makes a technology seem advanced is that it (a.) reliably works, but (b.) how it works is so non-obvious that it seems like it shouldn't work at all."

      Or, even shorter: "Magic is a word for technology that surprises you by just working, though you don't know how."

    2. @R.C.,


      Quote:"Anyhow, it's important to note that magic, so described, is really a sort of science/technology (albeit one that doesn't actually work in our world)."


      It's a type of science only in a broad sense, because the way magic is supposed to interact with the world is not meant to reduce it to a merely scientific and physical phenomenon - otherwise the Fantasy genre would actually be Science Fiction.


      The way it causes changes in the world is in an immaterial way. For example, a magic stick along with a verbal command causes change in the world (i.e. lightning comes out of the stick by command) without moving around particles in the physical world or causing a chemical reaction from the stick or sound-waves.


      In this way it is unlike science and technology because the underlying causality is immaterial. It is a type of technology only insofar as the effects are instrumental and intelligible, and follow from the formal cause of the things possessing the power to cause effects in an immaterial way.

    3. JoeD:

      Thanks for your kind reply. I think the distinction you make ("power to cause effects in an immaterial way") is an important insight.

      But I also think it's a bit leaky/fuzzy around the edges.

      First, quantum weirdness like "spooky action at a distance," virtual particles, and the Heisenberg uncertainty principle all seem to make things that are material appear to either be immaterial, be of uncertain materiality, or cause effects in an immaterial way. Is communication via entangled particles a kind of "magic?"

      Secondly, how can one distinguish between immaterial causes and material causes one can't see? This is an epistemological rather than an ontological distinction, to be sure: You'd end up calling something "magic" even if it had a material cause, provided the material cause was sufficiently obscure.

      I suppose we could finesse this by distinguishing between "verified magic" and "putative magic": The verified form would be that where we somehow knew the chain of causation to be immaterial; whereas the term "putative magic" covers effects where the chain of causation is obscure and seems plausibly immaterial.

      Finally, wherever I see authors/moviemakers trying to flesh out the details about "how magic works" in their fantasy-world, it seems to fall in two categories:

      Category One requires magic to always depend on intervention from summoned spiritual beings.

      Category Two might require "talent" plus rituals and spell-components, but doesn't require intervention from spirits. This kind of fictional magic is explained as: "That's just how objects/energies/minds behave in my [fantasy] world."

      The former -- reliance on demons or whatnot -- is arguably a kind of magic, though I think "occult spirituality" is the better term. But set that aside for a moment and look at the other kind. The second category of fictional magic surely qualifies as some kind of physics, doesn't it? It's not about what demons are willing to do, but "how my [fantasy] world works." It is natural to that world.

      Is that kind of magic "material?" Hard to say. After all, is gravity more "material" than telekinesis? How much more material is a lightning bolt from a Van de Graaff generator than an equally powerful one from the fingertips of a sorcerer? Neither is solid. A magnetic field from a neodymium magnet isn't solid, although the magnet is. Could the magnet be qualified as the "material component" in a magnetism spell?

      Category One magic, "occult spirituality," is not like physics. But, there again, the distinction seems a bit fuzzy: What is it about a summoned demon that (in works of fiction) allows it to be confined within a pentagram? Could it just as easily (a la Ghostbusters) be confined in a laser containment system? Are these beings really "spiritual" or just transdimensional bodily beings, like Mr. Sphere intersecting Flatland to talk to Mr. Square?

      Now, I'm a Christian and a Catholic. So I hold that occult spirituality is bad for humans and forbidden by God; that no, confining ghosts in a laser containment system can't possibly work; and if it were ever true that a demon was confined by shapes drawn on the floor, it would only be on account of some exercise of divine authority which the demon had to obey, not because spirits are blocked by lines of chalk.

      Anyhow, if there were real magic of both types, it seems easier to distinguish the Category One type from physics, than the Category Two type.

      And, I think that authors and filmmakers get confused about the distinction, too. That groaner George Lucas wrote about "midichlorians" in Star Wars Episode I: The Phantom Menace shows that it's easy to fall into redefining spirituality as physics. (Especially, as Obi Wan might say, for "the weak-minded.")

    4. @R.C.,


      1) Who knows? If the interactions can be proven not to have a material intermediate component or medium, then they may be truly magical, because they aren't physical.

      Such interactions would follow from the forms of the particles, or from the formal nature of the interaction itself.

      I even brought this up earlier in the comment thread with Sri as a possible candidate for immaterial causation in physics.


      2) The second category of magic, where it works in a world by its very nature, is a type of physics only in the sense that it is intelligible via the natures of things.


      Physics, after all, is partially about the substantial forms of things and what their natures can do. So if there were material things that could cause things immaterially by nature, that would be a form of physics in the broad Aristotelian sense (think formal causality).

      But it wouldn't be physics in the popular sense where everything requires a material medium to work. That would be Science Fiction, not Fantasy.


      3) As for Ghostbusters and the confining of spiritual beings to certain things and places, I agree.

      Since immaterial beings are pure forms, it is incoherent to suppose that they can be affected by material things per se. Hence the metaphysical impossibility of the Ghostbusters if we think of the visible ghosts as merely manifestations of purely immaterial beings.


      The same goes for tropes in fiction where the soul is said to be "sucked" from a character. The soul in that case is necessarily conceived of along the lines of ectoplasm or a weird ethereal substance that inheres in the body, not as the immaterial principle that makes the thing what it is - and to say that the form was sucked from a person, or that half a soul was sucked from him, is as coherent as saying that the triangularity was taken from a triangle, or that half its triangularity was taken from it.


      4) I also agree about the point that material being isn't necessarily solid.

      What is necessarily material is extension and particularity. And that obviously includes water, air, plasma and field-like forces.

      Light is a bit of a vague case, since two particles of light can literally occupy the same space, which seems to go against the A-T view that two material things cannot occupy the same location.

      But Tony has pointed out that light (and other bosons) do not occupy space in the same way other material things do, that light also has wave-like properties, and how this may explain how it can "occupy" the same location with two particles without violating metaphysics.

  7. Stephen Hawking, among others, has warned of the threat of AI. But none of these commentators actually work in AI. If they did, they would understand how difficult it is to get computers to "understand" what even the average 4-year-old does. What we're being asked to believe is that there comes a point (a sort of "critical mass" in terms of algorithm complexity or processing power) after which a machine somehow begins to develop a sense of its own identity and purpose. This is nonsense. A program is a list of instructions; that's it.

    1. Call me a Luddite, but other than for things like replacing human miners and firefighters with automata, I don't really see much use in them. The use of automata in service-oriented fields like say, waiting tables and so on, where one seeks a human face, will lessen human-human interaction and thus accelerate the collapse of our individualist societies because they produce more anomie.

    2. That seems to be the Hollywood version of AI you're thinking of. In fact there are a huge number of applications. See

      https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence

  8. "The thing to emphasize is that the computer is not in and of itself carrying out logical operations..."

    This is false. Logical operations are simply combinations of objects followed by selection. Computation is repetition of combination and selection.

    Logical operations are mechanical. For example, if you have two distinct objects, there are four different ways to combine them ([1, 1], [1, 2], [2, 1], [2, 2]) and sixteen different ways to select one of the two objects. It's easy to show this by enumeration. Furthermore, it's easy to show that two of the sixteen mechanical ways to select from one of two objects can, when strung together, produce all of the sixteen ways to select one of two objects. The point being that complex computations are complex networks of simple combination/selection operations. A computer made of NAND or NOR gates is one way to instantiate computation. As Feynman said, "Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made."
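
    (To make the enumeration concrete: a small Python sketch, my own illustration of the claim above rather than anything from the comment. It computes the closure of the two input wires under NAND alone and counts the distinct two-input Boolean functions that result; names like `tables` are invented for the example.)

```python
from itertools import product

def nand(a, b):
    # The single "mechanical" operation: output 0 only when both inputs are 1.
    return 0 if (a and b) else 1

# The four ways to combine two inputs: (0,0), (0,1), (1,0), (1,1).
INPUTS = list(product((0, 1), repeat=2))

# Represent each Boolean function by its truth table over those inputs.
# Start from the two "wires" a circuit builder has: input a and input b.
tables = {tuple(a for a, b in INPUTS),   # projection onto a
          tuple(b for a, b in INPUTS)}   # projection onto b

# Repeatedly NAND together every pair of functions already built,
# until no new truth table appears.
changed = True
while changed:
    changed = False
    for t1 in list(tables):
        for t2 in list(tables):
            t3 = tuple(nand(x, y) for x, y in zip(t1, t2))
            if t3 not in tables:
                tables.add(t3)
                changed = True

print(len(tables))  # 16: every two-input Boolean function is reached
```

This is the sense in which a single gate type, strung together, suffices for all sixteen selections: NOT a is nand(a, a), constant 1 is nand(a, NOT a), and the rest follow by composition.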

    When you look at computer theory, the way you get meaning out of operations (i.e. combination and selection) is to arbitrarily select two symbols and designate one as "it is the case that" and the other as "it is not the case that". (See, e.g. Lambda Calculus, especially section 3).
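
    (The arbitrariness of that assignment is easy to exhibit. Below is a minimal Python rendering of Church-style booleans, my own sketch rather than anything from the cited text: "true" is just the selector that picks its first argument, and swapping the two assignments would work equally well.)

```python
# Church-style booleans: a truth value is just a two-way selector.
# Which selector we label "it is the case that" is an arbitrary choice.
TRUE  = lambda x: lambda y: x   # selects the first argument
FALSE = lambda x: lambda y: y   # selects the second argument

# Logical operations fall out of composition alone:
AND = lambda p: lambda q: p(q)(FALSE)
NOT = lambda p: p(FALSE)(TRUE)

def to_bool(p):
    """Decode a Church boolean by letting it choose a Python value."""
    return p(True)(False)

print(to_bool(AND(TRUE)(TRUE)))   # True
print(to_bool(NOT(TRUE)))         # False
```

Nothing in the physical device fixes which lambda "means" truth; only the decoding convention does, which is the point about meaning being assigned from outside.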

    Because this assignment is arbitrary, you cannot a priori tell by looking at the physical device, which symbols are being used for what purpose at what time.

    All you can do is look at what the network means by how it interacts with the external world.

    It is because of this that the statement "The point ... is that a simulation of X is not the same as X" is wrong. For some things, the simulation of the thing is not the thing. For others, it is. To compare computation to magic is to commit a category error.

    It is because meaning is "hidden" in logic networks that tests for intelligence have to look at the behavior of the thing under test. It's why intelligence is viewed as a range of behaviors. My dog is partly intelligent: he can solve simple problems, he can communicate simple things. My three year old grandson is slightly more intelligent, and so on.

    1. "It's why intelligence is viewed as a range of behaviors."

      Then if that is intelligence, then that is miles away from us.

      " My dog is partly intelligent: he can solve simple problems, he can communicate simple things. My three year old grandson is slightly more intelligent, and so on."

      Yea but none of that is remotely close to what we possess. Your dog can't grasp or communicate abstract concepts at all. It's not like your dog is doing this but just in a more limited way. Your dog isn't doing this at all. Your dog doesn't solve problems in the same sense that we solve problems. This isn't about range but about levels. We aren't just doing what dogs do but with a wider range. We are doing something very different.

      Tell me, are rocks intelligent, just less so than dogs?

    2. Cashing out "combination" and "selection" in any meaningful way without reference to external parties (like humans) is going to be part of your problem. That is, unless you think a falling rock "selects" where it lands merely in virtue of landing in some place and "combines" with another object merely in virtue of landing in some close proximity to it.

    3. Billy:
      Then if that is intelligence, then that is miles away from us.
      How do you know?

      Yea but none of that is remotely close to what we possess.
      How do you know?

      You only know this by observing their behavior. You can then support your observation by comparing brain structure. First, a dog's brain is not as complex as a human's. Second, dogs don't use external storage. Humans do, and storage is one element of computational ability.

      We are doing something very different.
      Outline a proof of why you think this is so. In particular, given what we know about computability theory, explain why you are making a claim of a difference in kind and not just degree.

      Tell me, are rocks intelligent
      No. Rocks don't have mechanisms that can compute.

    4. ccmnxc: Cashing out "combination" and "selection" in any meaningful way without reference to external parties (like humans) is going to be part of your problem.
      It's not a problem, it's the way things work. It's a feature of the system.

      That is, unless you think a falling rock "selects" ...
      The rock is part of a larger system. So you have to consider the rock and the land. Furthermore, in your example, you only have one symbol (the rock), and there aren't many things you can compute with one symbol.

      Finally, why do you think we see "intelligent behavior" behind the universe? It's because the universe is in motion with combination and selection, and that's what our brains look at to try to extract meaning.

    5. wrf3,

      It's not a problem, it's the way things work. It's a feature of the system.

      Whether or not combination and selection are purely features of the system is precisely what is at issue here.

      The rock is part of a larger system. So you have to consider the rock and the land. Furthermore, in your example, you only have one symbol (the rock), and there aren't many things you can compute with one symbol.

      Okay, a few things:

      1. What gets to count as a system here? Are we talking about the entire physical universe? Just the local rock formation + forces of physics, etc? Because keep in mind, we are talking about whether such systems carry out logical operations (i.e. think). So, depending upon our systems here, we could end up with some sort of pantheistic or world soul view if we allow for merely one system, all the way up to indefinitely many thinking systems in the natural world, which starts to look a heck of a lot like animism or some such thing.

      2. Say we consider the rock and the land together. In that case, is there selection and combination in the way I inquired of above?

      3. I attach no importance to the rock being the only object. Multiply objects as you see fit.

      Finally, why do you think we see "intelligent behavior" behind the universe? It's because the universe is in motion with combination and selection, and that's what our brains look at to try to extract meaning.

      The word "teleology" comes to mind, especially when you speak of us "extract[ing] meaning" from the universe as if it were built in.

    6. ccmnxc: Whether or not combination and selection are purely features of the system is precisely what is at issue here.

      Why is there an issue? It's how things that compute work. Let me repeat Feynman's quote: "Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made." But you can see how logic is the mechanical application of combination and selection rules, e.g. here. In the first table, it doesn't matter if you use 0 and 1, or A and B, or oranges and bumblebees.
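
      [The symbol-independence the commenter appeals to can be sketched in a few lines of Python. The helper name `make_nand` is mine, purely for illustration: the same selection rule works whatever two tokens stand in for "true" and "false".]

```python
# A NAND rule over arbitrary symbols: the mechanics of combination and
# selection don't care whether the tokens are 0/1, A/B, or oranges/bumblebees.
def make_nand(true_sym, false_sym):
    """Return a NAND function over the two chosen symbols."""
    def nand(a, b):
        # Output the "false" token only when both inputs are the "true" token.
        return false_sym if (a, b) == (true_sym, true_sym) else true_sym
    return nand

nand01 = make_nand(1, 0)
nand_fruit = make_nand("orange", "bumblebee")

assert nand01(1, 1) == 0
assert nand_fruit("orange", "orange") == "bumblebee"
assert nand_fruit("orange", "bumblebee") == "orange"
```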

      What gets to count as a system here?
      At a minimum, the thing that has a combination/selection network and whatever is needed to support it.

      carry out logical operations (i.e. think).
      You are confusing terms. Logical operations are just mechanical selections. Thinking is the sequence of logical operations.

      So, depending upon our systems here, we could end up with some sort of pantheistic or world soul view...
      You can't argue against something at the outset simply because you don't like where it might end up.

      Say we consider the rock and the land together.
      Sure. River beds are the combination and selection of dirt and water over time. The Heider-Simmel animation is the combination and selection of light and dark elements over time.

      Now, for it to be a computational engine, it has to have the combination/selection that encodes the "this is that" relationship. But that's where the problem lies. We can see it where it exists, we can "see" it where it doesn't exist, and we can fail to see it where it does exist.

    7. wrf3,

      In order to keep the posts from growing grotesquely long, I am going to focus on only a couple things:

      1. You say it doesn't matter what we use for these computations, whether it be 1 & 0, A & B, oranges & bumblebees, etc. Yet this is my issue. What exactly is happening if there were to be a combination of, say, oranges and bumblebees? What would be the mechanics of combination in some physical system in the world? Or, for that matter, selection? Say there is an orange at a certain coordinate location on the earth and a bumblebee at another coordinate location; what makes them be combined or not combined?
      The point I am trying to make here is if you try to cash these operations out in terms of a purely physical account, where mind is either excluded or is nothing more than the physical, then notions like combination or selection are going to end up coming out very unstable.

      2. When speaking of the "this is that" relationship, what grounds the identity between "this" and "that?" Does their identity precede the application of the rule (as in, does the rule recognize their already extant identity), or is it a result of the rule (as in, saying "X is Y" just is what makes X be Y)? If the former, at least some meaning cannot be explained in terms of logical operation and computation. If the latter, then you end up in the dilemma of having to account for the rule's efficacy, where you end up looking at either Platonism (not friendly to merely computationalist theories of mind) or some brute-fact convergence between the rule and X and Y's identity.

    8. ccmnxc: What would be the mechanics of combination in some physical system in the world? Or, for that matter, selection?
      You can build a semiconductor that takes two inputs (plus a reference voltage) and produces one output. You can do it with streams of water and plastic molds. There are lots of ways to do it.

      then notions like combination or selection are going to end up coming out very unstable.
      The fact that we can build these systems argues against that.

      When speaking of the "this is that" relationship, what grounds the identity between "this" and "that?"
      In some cases, nothing, other than internal consistency (e.g. Euclidean and non-Euclidean geometries.) In other cases, correspondence to nature. If your bridge collapses, there's a wrong identity somewhere.

    9. How do I know that dogs don't have intellects like we do? Dogs communicate (in the broad sense of the term), therefore if they had intellects and communicated (in the strict sense) like we do, a properly mature dog would communicate its grasp of abstractions as we do. But they don't.

      "Outline a proof of why you think this is so."

      I kinda just did above. Tell me, do you think plants and bacteria have sensation?

      "Rocks don't have mechanisms that can compute."

      Okay, how about bacteria and plants? Are they intelligent?

    10. To add to that last question: Are plants and bacteria intelligent, but just less so than dogs?

    11. Billy: To add to that last question: Are plants and bacteria intelligent, but just less so than dogs
      Based on what I've written, you have the tools to answer the question yourself. Remember, computation is based on combination, selection, and repetition. So for something to be intelligent, it has to have a mechanism that does these mechanical operations. There are a lot of different ways to do this. Neurons are one way. The complexity of behavior is related to the complexity of the computational network. So if plants and bacteria have mechanisms that can do computation, then they would be somewhere on the intelligence scale. But you'd have to ask a specialist if such mechanisms exist in these things.

    12. "Remember, computation is based on combination, selection, and repetition. So for something to be intelligent, it has to have a mechanism that does these mechanical operations."

      You seem to just be begging the question here. You are defining intelligence as something computers have, since your mechanism is precisely what a computer does, and then saying "see, computers have intelligence".

      You are assuming what you should be proving. You are working from what's true about computers, saying that this is intelligence, and then applying that to us, which is backwards. We know we have intelligence; it is the intelligence of the computer that we are trying to confirm. You can't appeal to the computer to confirm what intelligence is.

      I definitely wouldn't say that this is intelligence.

      "There are a lot of different ways to do this. Neurons are one way."

      But why think being able to do this means having intelligence? As I said, you are working from computers back to us, which is backwards. You can call having this mechanism you mention "having intelligence," but then I would simply repeat: none of that is remotely close to what we possess. The grasping of abstract concepts, or even the reasoning about them, is not identical to the mechanical operations of the brain, even if they are intimately connected. Considering that philosophers have pointed out the various problems, and modern materialists' utter failure to figure out how they can be identical, I am skeptical as to whether you can do it. Materialist philosophers struggle just to account for sensation.

      Here is Feser explaining the problem and defending a different position:
      https://www.youtube.com/watch?v=fNi0j19ZSpo&

    13. Billy wrote: you seem to just be begging the question here... You can't appeal to the computer to confirm what intelligence is.

      That's not what I'm doing. We start by observing that intelligence is measured (even if loosely) by sets of behaviors. We also observe that the human brain is central to human behavior. We observe that animal brains are involved in animal behavior. We see that there appears to be a relationship between brain construction and behavior.

      Next, we observe that a brain is a network of neurons. We know how neurons work. They calculate a weighted sum of their inputs and fire a signal if the sum is above a certain threshold.
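
      [That weighted-sum-and-threshold description can be written down directly. The sketch below is a Python illustration; the name `threshold_neuron` and the particular weights are mine. It shows a McCulloch-Pitts-style unit and one choice of weights under which it computes NAND, which connects to the claim that networks of neurons can implement a universal computational model.]

```python
# A McCulloch-Pitts-style threshold unit: compute a weighted sum of the
# inputs and fire (output 1) iff the sum meets the threshold.
def threshold_neuron(weights, threshold):
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

# With weights (-1, -1) and threshold -1, the unit computes NAND,
# which is a universal gate for Boolean logic.
nand = threshold_neuron((-1, -1), -1)
assert [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
```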

      We then look at computability theory. The lambda calculus (which is one of several equivalent ways to describe computation) shows us how computation is done with meaningless symbols. We can show that networks of neurons are one implementation of this model, and networks of NAND gates are another (see the quote from Feynman elsewhere in the comments).

      It is because of what we know about computability theory, and because of what we know about neurobiology, and because of what we know about computer engineering, that we can say that the difference between computer behavior and human behavior is a difference of degree -- not kind.

      Now, there is an assumption in here, namely what is known as the Church-Turing Thesis, which basically holds that if anything is computable, then it falls under the model developed by Church (the lambda calculus) and Turing (Turing machines).

      So one "out" for someone who rejects the correspondence between humans and machines is to claim that what humans do is somehow fundamentally different from what machines do. But it isn't enough to construct a fine-sounding story. One has to show how that story is grounded. But to do that, one has to posit all sorts of mechanisms in human brains that just aren't there. Penrose's "quantum microtubules" is one such proposal. But this ignores the fact that classical computers can do everything quantum computers can (albeit not as fast). It might be an engineering issue, not a theory issue.

      I'll look at Feser's video later. I have to get ready for church.

    14. @wrf3,

      I've been reading your exchanges and, whilst I won't make a full point-by-point response since I simply haven't got the time, it is nonetheless worth mentioning that the root of all your confusion is your acceptance of nominalism, which merely begs the question against Thomists.

      In fact, classical essentialists such as Thomists will insist that the intellectual disease (to use Étienne Gilson's expression) of nominalism is the core issue which, directly or indirectly, in one way or another, gave rise to pretty much every modern evil.

      Now, given your nominalism, it's really not surprising you're reaching those conclusions. The problem is that you also seem unaware of how Thomists (you can start with Dr Feser's books) have long been pointing out nominalism's utter incoherence.

      I also noticed you're a Protestant. Well, indeed it was nominalism that created Protestantism. And it was the nominalist framework of Protestantism that in turn created secularist modernity. Oh, yes... Ideas Have Consequences (by the way, that's the title of a famous book by Richard Weaver, a Protestant). (Highly recommended take on the subject: Brad Gregory's The Unintended Reformation.)

      Moral of the story: you really should reexamine your most foundational assumptions, lest you end up reaching the conclusions you want the least (but which in any case are false, thankfully)...

    15. Anonymous: I've been reading your exchanges and, whilst I won't make a full point-by-point response since I simply haven't got the time,...

      Just do one. I don't think you have the ability. Here's why:

      it is nonetheless worth mentioning that the root of all your confusion is your acceptance of nominalism...
      I'm not a nominalist. I just don't like bad arguments. And I have read Feser's books. His "Philosophy of Mind" is a Gordion knot of mistakes. If you're interested, I'll point you to four partial critiques on my blog.

    16. Oh, yes, you are. You just don't know it. Typical Ockhamite.

      That's why you keep begging the question against the Thomist. Here's a tip: it's not all about logic; other metaphysical presuppositions are just as fundamental and can't be ignored like you keep doing.

      And it is interesting how the only book of Feser's you mention is one where he doesn't focus on defending Thomism in a thorough fashion.


      A different Anonymous

      P.S.: I second first Anon's recommendation of Gregory's book, and to a lesser extent Weaver's as well. Please read them. Else you risk having your children come home one day and proclaim they find the faith you taught them to be incoherent through and through. You don't want that, do you?

    17. @Wrf3

      I think the sort of objection you're trying to raise is already addressed in point 2 of the post above. I'd suggest taking a closer look at it.

      And I would ask these two other anon characters here to please not bring up religious controversies in this discussion.

    18. Anonymous (4/1 @ 8:57pm): ...taking a closer look at [point 2]

      That's what I've been doing. Let me give a refutation of point 2 in one place. Feser claims that logic gates and neurons are inequivalent because of how they are made, not by what they do. This is wrong. Again, repeating the quote by Feynman, "Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made." (bottom of pg. 467). Feser also says, "In particular, no one is assigning an interpretation as implementing a logical function, or any other interpretation for that matter, to neurons". But this is also wrong. The equivalence between neurons and logic gates is demonstrated, e.g. here. So his first point fails.
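
      [The Feynman universality point the commenter leans on can be shown in miniature: once you have one universal gate, every other Boolean gate follows from it, regardless of how that gate is physically realized. A small Python sketch, with gate names mine:]

```python
# Feynman's point in miniature: NAND is universal, so how it is
# "manufactured" is irrelevant. Here NAND is a plain function; every
# other Boolean gate below is built from NAND alone.
def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
```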

      His second point brings up an alleged difference between artifacts and substances, i.e. "things that are made by natural processes" and "things that are made by intelligent agency". But this is just a re-hash of the first argument. Intelligence is not determined by how it's made but by what it does.

      His third point is a theological objection. But you can't, or at least you shouldn't, make theological objections to engineering theory. Still, I don't understand why Feser thinks that's an objection. If God is an immaterial "substance", and an undivided and indivisible immaterial substance at that, then there can only be one such immaterial substance. So if our thoughts are "connected" to that immaterial substance, then God is, somehow, "thinking" in us.

      In his final part to point 2, Feser writes that "apart from [the] interpretation [of the designer] they would be nothing more than meaningless movements." But this is backwards. We "see" a "face on Mars" in the same way that we "see" a face in a sculpture, in that we find meaning in movement (past, present, or future). It is because we have to look at what something does in order to extract meaning (a lesson of computability theory) that pareidolia occurs. It's why we see meaning in Heider-Simmel animations.

      Feser doesn't grapple with computability theory, which hasn't been around all that long (less than 100 years). You can't explain how minds work without it, any more than you can explain the orbit of Mercury without relativity or the behavior of light without quantum mechanics.


    19. wrf3, why is it that every time you come over to this blog you just make a fool of yourself?

      You're repeatedly begging the question against Aquinas and Prof. Feser's metaphysics, as was pointed out to you already. Yes, there exists a crucial distinction between a substance and an artifact. If you disagree, then maybe you could start by, you know, arguing for why you hold such a distinction to be illusory? Otherwise it only goes to show your complete lack of qualifications to pontificate on the matter at hand.

      No wonder you love to quibble on the meanings of words. Newsflash: everybody knows that if one changes the definition of a term then the argument's soundness will also change. Guess what, that's exactly the trouble with most modern philosophy (which is pervaded by nominalistic influences like the anons above mentioned) and it's precisely why one must first learn at least the basics of Thomistic principles and the respective jargon before being ready to even attempt to raise a prima facie non-ridiculous objection.

    20. Feser claims that logic gates and neurons are inequivalent because of how they are made, not by what they do. ... Intelligence is not determined by how it's made but by what it does.

      Most of the problem that I see with your analysis comes down to these types of claims you make. The way Feser explains it here, what they are really doing is largely bound up with how they are made.

      Feser says, "In particular, no one is assigning an interpretation as implementing a logical function, or any other interpretation for that matter, to neurons". But this is also wrong. The equivalence between neurons and logic gates is demonstrated, e.g. here. So his first point fails.

      But the article doesn't even remotely seem to interact with the issue being raised in the passage you quote. As Feser notes "True, there are causal relations between neurons that are vaguely analogous to the causal relations holding between logic gates and other elements of an electronic computer." and as he rightly notes next "But that is where the similarity ends, and it is a similarity that is far less significant than the differences between the cases." And this doesn't show otherwise.

      His second point brings up an alleged difference between artifacts and substances, i.e. "things that are made by natural processes" and "things that are made by intelligent agency". But this is just a re-hash of the first argument. Intelligence is not determined by how it's made but by what it does.

      But this assessment of the claims made is somewhat confused; otherwise such an objection doesn't follow. Let me just quote the passage in the post to make it clear.

      "That brings us to a second difference, which is that a computer and the logic gates and other elements out of which it is constructed are artifacts, whereas a brain (or, more precisely, the organism of which the brain is an organ) is a substance, in the Aristotelian sense. A substance has irreducible properties and causal powers, i.e. causal powers that are more than just the sum of the properties and powers of its parts. Artifacts are not like that. In an artifact, the properties and causal powers of the whole are reducible to the aggregate of the properties and causal powers of the parts together with the intentions of the designer and users of the artifact. (Cf. Scholastic Metaphysics, section 3.1.2)"

      So this isn't an issue of how such-and-such is made; it's about what they essentially are. The point is that an artifact doesn't have any non-redundant causal powers. Strictly speaking, apart from the language of its users, it doesn't even exist.

      In his final part to point 2, Feser writes and that apart from [the] interpretation [of the designer] they would be nothing more than meaningless movements. But this is backwards. We "see" a "face on Mars" in the same way that we "see" a face in a sculpture in that we find meaning in movement (past, present, or future). It is because of computability theory that we have to look at what something does in order to extract meaning that pareidolia occurs. It's why we see meaning in Heider-Simmel animations.

      Of course we "see" a "face on Mars" in the same way that we "see" a face in a sculpture. But it wouldn't follow from this, as Feser says, that the face on Mars is a sculpture. In the phenomena you've mentioned, if we didn't have the relevant background knowledge, we wouldn't even think of these things as having familiar meaning.

      Earlier you said:
      "The point ... is that a simulation of X is not the same as X" is wrong. For some things, the simulation of the thing is not the thing. For others, it is.

      Do you realize this couldn't be true? A simulation is by definition just that. If it is a simulation, then it simulates that particular thing. It can't be identical to that thing.

      You can build a semiconductor that takes two inputs (plus a reference voltage) and produces one output. You can do it with streams of water and plastic molds. There are lots of ways to do it.

      My question perhaps wasn't clear, but you speak of logical operations as effectively being a natural phenomenon. Thus, I am not looking for an example of something which simulates logical operations that humans can construct; rather, my question is, if we are to separate logical operation from already existing human minds and make it natural, what would it take for a natural process, entirely unaided or uninfluenced by anything with a mind, to combine or select things? And how would it be distinguished from non-combination and non-selection?

      The fact that we can build these systems argues against that.

      But if we build them, then they lie outside the domain of what I am talking about. My concern is that if we make logical operation an in-principle natural phenomenon, apart from human artifice, that is when we end up with unclarity.

      In some cases, nothing, other than internal consistency (e.g. Euclidean and non-Euclidean geometries.) In other cases, correspondence to nature. If your bridge collapses, there's a wrong identity somewhere.

      Let's focus on the "other cases." If the "this-is-that" rule is dependent upon conformity with nature, then it is entirely descriptive (rather than prescriptive) of already extant this/that relationships. However, in logic, when you, for example, say "x is y" or whatever, that is a prescription (a stipulation). Thus, the nature of the relationship is different, such that whatever "logic" might be expressed in nature is not of the same kind as whatever we are doing.

    22. A simulation is not the thing simulated, else it wouldn't be a simulation.
      Behaviors are signs that indicate intelligence; behavior is not actual intelligence itself.
      At most you can say the human brain in some respects is like a computer, but not that human intelligence as a whole is like (or actually is) a computer. If I use a calculator, it is not the calculator that is intelligent.
      I'm just some guy who is not formally trained, but much of this seems like common sense. Hilaire Belloc's critique of science comes to mind.

    23. wrf3:

      You say, "Let me give a refutation of point 2 in one place. Feser claims that logic gates and neurons are inequivalent because of how they are made, not by what they do."

      No, you've misunderstood what Feser said. He's not interested in how neurons are made. And what you're reading as a reference to how logic gates are made is better-understood as Feser pointing out what they are made for; i.e., as tokens to be interpreted by a mind. But that leaves the actual state of being a mind utterly unexplained: It can't happen in logic gates, since they require a mind to give them meaning. Consequently, if the operations of neurons were entirely coterminous with those of logic gates, then mind couldn't be explained in terms of the activities of neurons, either. Whatever states you find in neurons which are most-analogous to the states you can find in logic gates are precisely the states that aren't relevant to explaining mental phenomena like comprehension of universals.

      Similarly, when you reject Feser's claim, "The thing to emphasize is that the computer is not in and of itself carrying out logical operations," saying, "Logical operations are mechanical...," you're equivocating on the term "logical operations," replacing Feser's meaning (activity of a mind interpreting/understanding certain tokens to represent truth or falsehood) with your own new definition (the purely-material state of the tokens, with no mind present to provide interpretation or meaning). Having thus redefined what Feser says, you then say, "this is wrong." Well, yes, your reinterpretation of Feser is wrong; but Feser's original statement is blindingly obviously correct if you don't redefine it.

      When Feser says, "In particular, no one is assigning an interpretation as implementing a logical function, or any other interpretation for that matter, to neurons," you answer, "But this is also wrong. The equivalence between neurons and logic gates is demonstrated, e.g. HERE." The paper to which you link, with the word "here," is utterly irrelevant to the point Feser's making. And the reason is what I mentioned earlier: To the extent that neurons only do what logic gates do, they thus denude themselves of any explanatory power for producing mind.

    24. R.C.: No, you've misunderstood what Feser said. He's not interested in how neurons are made. And what you're reading as a reference to how logic gates are made is better-understood as Feser pointing out what they are made for.

      Feser is conflating what something is made for with how something is made. His example comparing the "face on Mars" with a sculpture demonstrates this. He's asserting the presence of teleology in one case and its absence in another. And the problem is that he's doing so in a way that asserts his conclusion.

      Whatever states you find in neurons which are most-analogous to the states you can find in logic gates are precisely the states that aren't relevant to explaining mental phenomena like comprehension of universals.
      And you know this, how?

      you're equivocating on the term "logical operations,"
      Hardly. I've precisely defined what I mean by "logical operations" (e.g. here). Now, maybe I didn't clearly show that these ways of selecting one item from two are purely mechanical operations, but it should be clear from observation. Furthermore, the activity of a mind interpreting/understanding certain tokens to represent truth or falsehood is exactly what the Lambda calculus does.

      To the extent that neurons only do what logic gates do, they thus denude themselves of any explanatory power for producing mind.
      That's what you don't see. The network is capable of explaining itself. That's why I frequently refer to Escher's Drawing Hands as one way to picture what is going on.

      That's the beauty of the system.

    25. You said you read Feser's books. Well, your comments show you didn't.

    26. I think the bigger blind spot for wrf3 and so many now is not so much nominalism as Cartesianism. Of course the latter developed out of the former, but even intelligent people can’t break free of being grounded in these assumed ways of thinking. First-person experience has become almost irrelevant; the abstraction of the thing is the thing. A sensor that measures temperature is the same as feeling pain; it’s just nerves rather than thermostats.

      It’s absolutely correct that this is a two-way reinforcement. The mind is an effect of the brain, and the brain is just a machine that evolved for survival; therefore a machine can be built that has a mind.

      The fact of course is that no one has even dreamed up a feasible theory for how we could get from one to the other in either case.

      No one can suggest any way in which matter in the brain could possibly produce first-person experience. It always ends up appealing to some kind of magic we know nothing about. Equally, no one has proposed any kind of mechanism for how experience could possibly arise from logic operations. Yes, you can build libraries of meanings, but again it requires magic for these meanings to become understanding. Without understanding it’s just processing libraries of facts and relationships. The suggestion is that if the algorithms reach a certain level of sophistication, then experience will magically appear. There is no reason at all to believe this. In fact we have good reason to believe that simple creatures have some kind of experience, but no reason to believe the most sophisticated computer we can imagine will ever experience anything.

      I suspect that Data from “The Next Generation” has helped drive some of these assumptions with many. Sometimes I think good fiction bleeds into our minds unconsciously, just as people can’t let go of the idea of ‘transporters’ even though they are clearly nonsense rather than ‘clever stuff we will be able to do one day’…

  9. What about when AI advocates say that we are computers just like Alexa and Google Home because, just like those devices, we have been programmed to give answers to specific questions? Our teachers taught us how to respond to a question like "what's three times four?" We are trained to give automatic responses just like the devices are, it will be said. Of course this is a weak argument, because obviously our parents and teachers do not tell us how to respond to every single kind of question. This is why Google Home will often answer me by saying "Sorry I can't answer that." There are also characters in video games who just keep repeating the same spoken lines again and again, because they have no other programmed responses.

    ReplyDelete
  10. "All you can do is look at what the network means by how it interacts with the external world."

    But the network doesn't "mean" anything. That's Feser's point.

    ReplyDelete
    Replies
    1. Feser is wrong. The meaning is encoded in what the system does. See the previously cited reference to the Lambda Calculus. It’s how you get meaning out of meaningless symbols. Now, you can argue about whether it’s creating meaning or extracting meaning from the surrounding system, but that’s how it works.

      Delete
    2. "The meaning is encoded in what the system does."

      What does this mean? (Pun not intended.)

      Delete
    3. Meaning is defined as a "this is that" relationship. The lambda calculus encodes the statement "it is the case that 'this is that'" as one symbol (in the aforementioned paper, they choose the symbol "t"), and the statement "it is not the case that 'this is that'" as another symbol (in this case, "f"). Because the choice of symbols is arbitrary, the choice isn't fixed in time or space. You simply can't tell by looking at a logic network what the symbols might mean until the network interacts with an external environment.

      When that happens, the logic network in your head tries to map the action to what it knows, to see if it matches a "this is that" relationship. If it does, you think you know the meaning of the action. When my dog stands by the back door and whines, I know he wants to go out. When he stands by me and whines, I know he's hungry. We watch the Heider-Simmel animation and extract meaning from that.

      This is why Feser is wrong that the simulation of intelligence isn't intelligence. It is only by observing behavior that we can understand whether or not the behavior is intelligent.
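      The t/f encoding described above can be written out directly as a Church-style encoding, where a boolean is just a function that selects one of two arguments. This is a minimal Python sketch of that idea; the names `t`, `f`, `not_`, `and_`, and `decode` are mine, not taken from the cited paper:

```python
# Church booleans: "t" selects its first argument, "f" its second.
# Which function plays "true" is an arbitrary convention, just as
# the choice of the symbols t and f is arbitrary.
t = lambda x: lambda y: x    # λx.λy.x
f = lambda x: lambda y: y    # λx.λy.y

# Logical operations arise purely from combining and applying the
# symbols; no interpretation is consulted anywhere in here.
not_ = lambda p: p(f)(t)
and_ = lambda p: lambda q: p(q)(f)

def decode(p):
    # The interpretation step: only here is a symbol mapped onto an
    # external meaning (Python's built-in True/False).
    return p(True)(False)
```

      Until `decode` connects the network to something external, `t` and `f` are just interchangeable functions, which is the point about symbols having no fixed meaning in isolation.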

      Delete
    4. “The lambda calculus encodes the statement "it is the case that 'this is that'" as one symbol (in the aforementioned paper, they choose the symbol "t"), and the statement "it is not the case that 'this is that'") by another symbol (in this case, "f").”

      It cannot be said that the lambda calculus encodes anything, because it is just a method. It is a person who does the encoding, not anything else. And that is what Feser is saying -- that the meanings of this or that motion of a computer are not inherent but are imposed by humans and are thus a construct.

      Delete
    5. TheOFloinn: What does H mean?

      H is just a symbol. Symbols only have meaning in relationship to other symbols (without a "this is that" relationship, there is no meaning).

      Now, you can make an arbitrary relationship between H and another symbol. In fact, you can make any number of such arbitrary relationships.

      So you'll have to tell me what it means, using existing relationships between symbols where we already have agreement.

      Delete
    6. Sri Nahar wrote: It cannot be said that the lambda calculus encodes anything, because it is just a method.

      That's its genius. It's both.

      that the meanings of this or that motion of a computer is not inherent but is imposed by humans and is thus a construct.

      The universe is a construct. What's your point? It's an irrelevancy. Either physical things create meaning ex nihilo, or meaning is inherent and physical things extract meaning. Whether or not we'll ever be able to settle this one way or another is an open problem.

      Delete
    7. Well, the point is that physical things cannot create meaning. As for lambda calculus, it is a mathematical operation performed by humans, and thus whatever meaning is encoded using it is encoded by humans. The motions of the computer itself have no meaning.

      Delete
    8. Sri Nahar wrote: Well, the point is that physical things cannot create meaning.

      Ok, prove it. Without handwaving (e.g. confusions over words) and without assuming your conclusion. Hofstadter, in Gödel, Escher, Bach, took 700 pages to try to show that physical things do create meaning. In fact, he wrote, "The self, such as it is, arises solely because of a swirly, tangled pattern among the meaningless symbols." (p. P-3).

      I happen to think he's wrong, but I'm not sure I can prove it, any more than anyone can really prove that the world exists independently from themselves.

      As for lambda calculus, it is a mathematical operation performed by humans...
      And? Does mathematics exist independently of humans, or are humans necessary for mathematics?

      Delete
    9. "Ok, prove it."

      It is simple -- meaning itself is a purely formal relation between two mental entities, and therefore no physical process can be a formal process, since physical processes are directed towards particular results and not towards abstract ones. There is nothing about, say, a rock, which gives it any particular meaning. Same for an event in a computer. The meaning an electronic event may have is imposed on it by humans, not inherent in it, unlike with human minds, whose thoughts are inherently meaningful.

      "Does mathematics exist independently of humans, or are humans necessary for mathematics?"

      In either case, mathematics consists of abstract operations and therefore cannot be performed by physical processes. I happen to think numbers and the relations between them exist independently of human minds, but even under the latter case, it would be plain that machines cannot do mathematics, since it would be the case that humans are necessary for mathematics.

      Delete
    10. Sri: It is simple -- meaning itself is a purely formal relation between two mental entities,

      First, you’re assuming your conclusion, namely that there is a difference between physical and mental entities.
      Secondly, the lambda calculus shows the “formal relation” between physical entities. Meaning is the arbitrary assignment of the function λx.λy.x to the symbol “true” and the function λx.λy.y to the symbol “false”.

      and therefore no physical process can be a formal process, since physical processes are directed towards particular results and not towards abstract ones.
      You’re assuming that there can be an actual separation between the physical and the abstract. For all you know, the relationship between the physical and the abstract is like Escher’s Drawing Hands, where removal of one part destroys the whole.

      There is nothing about, say, a rock, which gives it any particular meaning.

      Sure. By itself, the rock can be considered a symbol.

      Same for an event in a computer.
      No. An event in a computer involves the mechanical combination then selection of symbols. You’re making a category error.

      The meaning an electronic event may have is imposed on it by humans, not inherent in it, unlike with human minds, whose thoughts are inherently meaningful.

      And just how do you know this? There are only two differences between computers and human brains: one is primarily made of silicon while the other is made of carbon (but that’s irrelevant), and one has more wiring complexity than the other. That’s it. I have computability theory and neurobiology on my side. You have how you imagine or want things to be, absent actual engineering principles.

      Delete
    11. I have not assumed any difference between physical and mental entities -- I have said that the physical is not the abstract, and that is self-evident. Abstract objects have no extension or spatial location. You say that meaning is the "arbitrary assignment of functions...", but that is the point, that the meanings of computer operations are arbitrarily assigned by humans.

      You further say that a computer involves the "combination and selection of symbols", but it is not the computer which treats a signal as a symbol, but the person who operates the system. The computer only combines and selects signals.

      Finally, whether computers and brains are similar is highly controversial. Simply asserting that neurobiology is on your side won't count for much, I'm afraid.

      Delete
    12. However, if you want to question if human thoughts are inherently meaningful, be my guest. That would be a fine reductio ad absurdum of your position.

      Delete
    13. Sri wrote: However, if you want to question if human thoughts are inherently meaningful...

      If meaning is created ex nihilo by the action of human brains, then one could argue that they are inherently meaningful. If meaning is extracted from the universe by the action of human brains, then one could argue that meaning isn't inherent.

      Delete
    14. Sri: You further say that a computer involves the "combination and selection of symbols", but it is not the computer which treats a signal as a symbol, but the person who operates the system. The computer only combines and selects signals.

      This isn't true. For the computer, the signal is the symbol. For a binary computer, low voltage is one symbol, high voltage is another. The lambda calculus deals with objects. It doesn't matter what the objects are. Furthermore, it doesn't matter to an external observer what the symbols are. We observe behavior and match it to our own behaviors.
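      The claim that low and high voltage are the computer's symbols, mechanically combined and selected, can be illustrated with a toy model. This is only a sketch of the selection logic, not real circuitry; the gate names are the standard ones, built here from NAND alone:

```python
# Treat low/high voltage as the symbols 0 and 1. A NAND gate is
# pure mechanical selection: the output is 0 only when both
# inputs are 1.
def nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

# Every other gate is just a combination of NANDs; nothing in
# these definitions consults what 0 and 1 are taken to stand for.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
```

      Whether the two levels "mean" true/false, on/off, or anything else is fixed only by the observer; the combination and selection proceed identically either way.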

      I have said that the physical is not the abstract, and that is self-evident
      There are a lot of things that are claimed to be self-evident that are simply wrong. Supposedly, one of the scientists who was a champion of the phlogiston theory of combustion held to it on his deathbed, even when contrary evidence was presented to him.

      In any case, what you call "abstract" is a series of physical computational steps.

      Finally, whether computers and brains are similar is highly controversial. Simply asserting that neurobiology is on your side won't count for much, I'm afraid.
      Well, of course it's controversial. Cognitive dissonance can be very hard to overcome. Bad arguments can be hard to overcome. For goodness sake, anyone who thinks Searle's Chinese Room argument holds any water is seriously mistaken. Searle argues that "if we can build a device that can translate Chinese but doesn't understand Chinese" therefore "we can't build a machine that can translate Chinese and understands Chinese". His conclusion simply doesn't follow. I'm honestly shocked by the number of supposedly smart people who can't (or won't) see the flaw in his argument. And if you don't believe me, try John McCarthy, one of the fathers of artificial intelligence (here).

      And I didn't just assert neurobiology. I asserted neurobiology and computational theory.

      Delete
  11. It seems to me that the question of whether magic is possible boils down to the question of whether there could be things which are responsive to words (or incantations) - and when I say "words," I'm not just referring to their sound (for that would make them mere remote-controlled artifacts), but to their intended meaning. In order for that to be possible, inanimate objects would have to possess built-in linguistic properties: they would have to be ontologically tied to the minds of the intelligent beings who manipulate them by using incantations. Needless to say, things possessing these mind-relative properties would have to be designed by some Intelligence.

    Would it be possible for God to design things with properties like that, for the benefit of His intelligent creatures? Putting it another way: could we suppose that in the hereafter, things will bend to the will of the blessed in Heaven, without them needing to do anything more in order to control their movements?

    It might be argued that objects possessing such mind-relative properties, in addition to their law-governed, physical properties, could not possibly be said to be natural objects, as there would be nothing to tie all of these properties together: they would form a loose and disjoint set, without any central underlying concept to unify them. Still, I'm not sure that the foregoing objection is a decisive one, so I'll throw it out for readers to discuss.

    My two cents. Good article, Ed.

    ReplyDelete
    Replies
    1. @Vincent,



      Quote:"and when I say "words," I'm not just referring to their sound (for that would make them mere remote-controlled artifacts), but to their intended meaning."


      It is possible for there to be objects that produce certain effects immaterially on the basis of the sound of words alone, not their intended meaning.

      And in both cases (responding only to the sound or to the meaning as well) the key to magic-like causation is the simple fact that the interaction, by its very nature, produces a result by affecting reality. This is simply formal causality at work.

      No ontological connection to rational minds is necessary, only the fact that, if one says such words with such an object, then such and such effects come about immaterially (otherwise the magic would be a simple chemical reaction or physical interaction).


      Also, magic isn't limited to incantations. It's easy to conceive of a certain effect that follows when, say, one taps one's foot in a certain way, in a certain environment, holding a certain object. No linguistic command is necessary in that case, yet the effect is still magical and still happens.




      Quote:"could we suppose that in the hereafter, things will bend to the will of the blessed in Heaven, without them needing to do anything more in order to control their movements?"


      That might not be necessary. Some speculate that in the world to come humans will be able to control things using their will and imagination alone. Just as we can currently imagine changing reality by turning things into other things, or moving them around, or changing their accidental qualities, we will actually be able to do so in the world to come.

      Some say that we are already destined to have such power, and have it immanently within us, and God constrains us in the use of it because in our current state we could easily screw everything up. Others think such powers aren't immanently present in us but will be given to us by God as a gift in the world to come.



      Quote:"It might be argued that objects possessing such mind-relative properties, in addition to their law-governed, physical properties, could not possibly be said to be natural objects, as there would be nothing to tie all of these properties together: they would form a loose and disjoint set, without any central underlying concept to unify them. "


      It depends on what you mean by "natural object". If we view magic and magical power / causality as a type of formal causality, or flowing from the forms of things in a unique way, then this means that the magical powers of things would be rooted in their nature. So they would in fact be properly called natural objects.

      Delete
    2. If God had designed properties as you describe it would still not be magic, it would be physics.

      Delete
    3. @Kyle,


      It wouldn't be physics, because physics implies that there is a physical mechanism underlying it, which magic doesn't have.

      The law-like behaviour is the result of the magic being intelligible - likely due to it being a type of formal causality. In this sense then, it would be like a law of physics, since it has a definite nature and description.

      Delete
    4. I think the two of you are using different conceptions of physics, since the modern and Aristotelian usages differ. At least, that's what I've gathered from my reading so far.

      Delete
  12. Judea Pearl, recipient of the Turing Award for his contributions to AI, has characterized the current state of AI as an exercise in curve-fitting:

    "Pearl contends that until algorithms and the machines controlled by them can reason about cause and effect, or at least conceptualize the difference, their utility and versatility will never approach that of humans.

    He says that it will be impossible to have meaningful dialog with robots until they can simulate human intuition, which requires the ability to understand cause and effect along with alternative actions and outcomes that it might have taken. In short, we’re back to Aristotle."

    https://diginomica.com/ai-curve-fitting-not-intelligence/

    I agree, but I don't hold out much hope for the success of this more advanced form of AI proposed by Pearl, for reasons that Feser and others have given.
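    Pearl's "curve fitting" point can be illustrated with a toy computation. A least-squares fit measures association, which is symmetric in a way causation is not: the data fit "y from x" exactly as well as "x from y", so the fit alone cannot say which variable drives the other. This is a hand-rolled illustration, not Pearl's actual causal calculus:

```python
# Pearson correlation, computed by hand: it quantifies how well a
# straight line fits the data, and it is perfectly symmetric in its
# two arguments -- so curve fitting alone cannot orient cause and
# effect.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x, by construction

r_xy = correlation(xs, ys)
r_yx = correlation(ys, xs)       # swapping the roles changes nothing
```

    Both directions fit equally well; distinguishing "x causes y" from "y causes x" requires something beyond the curve, which is Pearl's point about needing causal reasoning.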

    ReplyDelete
  13. So the thesis really just amounts to the claim that people can be fooled into thinking that something is magic if we’re clever enough. Well, OK. I don’t know how interesting that is, but it seems true enough.

    Dr. Feser single-handedly takes every stupid monty-python/douglas-adams nerd quote that I have ever come across on usenet and makes it look like (read: reveals it to be) something only a complete fool could conceive of.

    ReplyDelete
  14. Could you please make an article about Transhumanism sometime in the future (and a refutation / analysis of it)? I have only heard of a few refutations of Transhumanism by Christian philosophers.

    ReplyDelete
    Replies
    1. When you say "refute" transhumanism, what do you mean? I can understand an argument that it isn't desirable, but I think that at least some of the things on the transhumanist agenda are possible. Do you mean "not possible" or "not desirable"?

      Delete
    2. I think the argument to be refuted here is the claim that someone could still be a human if he had most of his parts replaced by artificial components. We believe that man is still human if he has artificial arms and legs, and would remain human even if he had an artificial heart and lungs. But when it comes to the mind, it's hard to believe that he could have some kind of artificial brain and still be a human.

      Delete
    3. "Refute" probably wasn't the best word to use. It seems that Transhumanism has some common beliefs with Gnosticism. The belief that the physical world is limiting who we could be (and things of that sort).

      Delete
    4. Does transhumanism mean that a person could transcend his physical form and become some kind of virtual person? I have heard people say that in the future we will be able to store a backup copy of our mind in a computer, that this would be a kind of immortality. Is that possible? Probably not.

      Delete
    5. Yeah, that's basically what it is.

      Here's a definition of Transhumanism by Max More, a philosopher who holds to this belief.

      "Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values."

      -- Max More

      Delete
  15. That brings us to a second difference, which is that a computer and the logic gates and other elements out of which it is constructed are artifacts, whereas a brain (or, more precisely, the organism of which the brain is an organ) is a substance, in the Aristotelian sense. A substance has irreducible properties and causal powers, i.e. causal powers that are more than just the sum of the properties and powers of its parts. Artifacts are not like that. In an artifact, the properties and causal powers of the whole are reducible to the aggregate of the properties and causal powers of the parts together with the intentions of the designer and users of the artifact.

    I have been thinking: is what this post shows, or what A-T philosophy generally holds, that anything other than living things and simples is not a substance?

    Would the point of this post be falsified if computers are composite objects?

    ReplyDelete
    Replies
    1. @Red,


      Well, A-T doesn't require that only living beings be substances. There are, after all, non-living substances out there as well.


      In general, anything that has irreducible causal powers is a substance, while things that don't aren't truly substances.

      It's no more arbitrary to say that things without irreducible causal powers aren't true substances than it is to grasp a definition and exclude the things that don't satisfy it.

      Delete
    2. Well, A-T doesn't require that only living beings be substances. There are, after all, non-living substances out there as well.

      But would it accept as a substance anything which is neither a living thing nor a mereologically simple particle?

      Delete
    3. @Red,


      If it has irreducible causal powers, then yes. If it doesn't, then no.

      Asking if there is anything between these two is akin to asking if there is something in between substance and accidents, or between white and non-white.

      Delete
    4. I am not asking whether there is anything between them.

      I do get that if a thing is a substance then it has some irreducible causal power, but my question is: is there an example of such a thing that isn't either a thinking, living thing or a simple?

      Showing that something has irreducible causal powers is hard, because from a third-person perspective it appears that everything which isn't simple is just an arrangement of parts. The reason we don't apply this to living things, or more narrowly to thinking, living things or persons, is that doing so would be absurd.

      Delete
    5. @Red,


      Quote:"is there an example of such a thing that isn't either a thinking-living thing or a simple.?"


      Are you asking if there is an example of a substance that is neither a living being nor a mineral (in the broad definition of mineral) substance?

      That is basically asking if there are substances that are neither living nor non-living.

      The only candidates I can think of are angels. And maybe space as a non-intelligent immaterial reality. Any other category of being is either impossible or completely unknown to us such that only God Himself knows what it would look like.


      Quote:"Showing something has irreducible causal powers is hard because it appears by observing from third-person perspective that everything which isn't simple is just arrangement of parts."


      An arrangement of parts is, by definition, an artifact, an accidental form, composed of things that are more fundamental than it, meaning that it has no irreducible causal powers and thus cannot be a substance. Any other definition of arrangement of parts (water would thus be an arrangement of hydrogen and oxygen) ends up being a substance with irreducible causal powers.

      And us having a hard time discerning what is irreducible and what is not is merely an epistemological problem, not an ontological one.

      And it also depends on what you define as "simple". Is a simple something with irreducible causal powers? Then it's a synonym for substance. Is it something that appears simple, or not made up of parts? That would mean your main question is about composite-looking things that are nevertheless substances.

      I guess such a thing is metaphysically possible - it would simply be a substance that has multiple parts which are less fundamental than the whole which has irreducible causal powers.

      Delete
    6. @JoeD

      What I mean by 'simples' is atoms in the old philosophical sense, the smallest building blocks of matter. They could be the particles that modern physics talks about.

      When thinking of which things have irreducible causal powers and are thus substances only these two candidates (simples and living things) seem obvious to some philosophers.

      Delete
    7. @Red,



      Quote:"When thinking of which things have irreducible causal powers and are thus substances only these two candidates (simples and living things) seem obvious to some philosophers."



      Well, A-T philosophers certainly aren't among those.

      Because the vast majority of them accept that things such as water, rocks (if they aren't reducible to the minerals that make them up), oxygen, and all the chemical elements of the periodic table are true substances.

      In the middle ages fire was also thought to be a substance, and there is a question over whether or not stars are substances or mere energy reactions reducible to more fundamental substances like hydrogen and helium.

      Either way, the universe of substances is much larger for A-T theorists than merely the smallest fundamental particles and living things.

      Delete
    8. @JoeD,
      Right. Is there a particular reason why the things you've mentioned are considered to have irreducible causal powers?
      Overall, it seems that regarding only living things and fundamental particles as substances would be sufficient for all the principles of A-T.
      That would also cohere best with the point made in the post above. If someone has a different view here and regards all ordinary objects as composite substances, then he might also think computers and any AI are substances.

      Delete
    9. Red,
      Chemistry has NOT been reduced to quantum mechanics.
      Nor the things of everyday life reduced to chemical formulae.

      Delete
  16. Hi Ed,
    I see how you’ve undermined arguments for the view that computers can think, but not how you’ve established that they can’t think. You say that we know that a machine running a sufficiently advanced program isn’t thinking because we know that such a machine is merely running a computer program. But how do we know that it’s *merely* running a computer program? Why couldn’t it be both (a) running a program that the designers or observers interpret as representing certain thoughts and, in addition, (b) really thinking? Your argument is supposed to be compatible with a materialist theory of thinking. So suppose our thinking consists in certain material processes. Isn’t it possible for a computer to be built that engages in those processes? In this case, we would seem to have both (a) and (b). Of course, computers we actually build (as far as I know) aren’t modeled after the brain, and so this line of thought may give little reason to think that computers, as we currently build them, think – a fortiori, that they think the very thoughts we “read into” them.

    You maintain that computers are artifacts instead of substances. If computers can’t be substances, and if thoughts must inhere in substance, it follows that no computer will be able to think, even ones modeled on the brain. But supposing our materialist believes that thoughts inhere in substance, I don’t see why he would accept that computers can’t be substances.

    ReplyDelete
    Replies
    1. "I don’t see why he would accept that computers can’t be substances."

      If you want to say that, then you would have to say that everything is possibly a substance. Your table could be a substance, your door handle, the macaroni sculpture some young kid made at school, etc. The question becomes: how can we possibly know the difference between a substance and an artifact?

      Feser posted about the difference here:
      http://edwardfeser.blogspot.com/2011/04/nature-versus-art.html

      Delete
    2. I don't want to say computers are substances, but I'm not a materialist about thought. If I were, I'd think that my thoughts consist in states of my brain. Supposing that’s right, some brain states constitute thoughts of a substance (me). Since it’s in principle possible (I assume) to design a computer resembling my brain, the question arises as to why it wouldn’t be possible for such a computer’s states to constitute thoughts of a substance, just as some of my brain states do. I don’t see why affirming this possibility would commit one to saying that *any* arrangement of matter (e.g., macaroni art) would constitute a substance.

      Delete
    3. "I don't want to say computers are substances, but I'm not a materialist about thought."

      Sorry, I mean "you" as if you were a materialist.

      "I don’t see why affirming this possibility would commit one to saying that *any* arrangement of matter (e.g., macaroni art) would constitute a substance."

      That isn't what I said. I said that all arrangements could possibly be substances. If you think a computer, at least as computers currently are, can be a substance, that shows a failure to differentiate substances from artifacts. The materialist who wants to say computers are substances has no concrete way to determine what is or isn't a substance, so he would accept computers as substances based on nothing but blind faith and perhaps intuition.

      Delete
    4. Hi Billy,
      I haven't been addressing computers as they are at the moment. I've asked us to suppose a computer, call it X, is designed that resembles the brain. If states of my brain constitute thoughts of a substance (as a materialist about thought may say), why couldn't states of X do the same? To rationally maintain that they do, the materialist doesn't have to have a general theory of substance that determines what is or isn't a substance. After all, we believe in certain examples of substances prior to having any such theory, and we try to develop theories that accommodate our examples. So the materialist just needs good reason to believe that X provides an example of a substance. And the resemblance between X and the brain seems to provide (for the materialist about thought) such a reason. I’m not saying it follows from (a.) the (supposed) fact that my brain states constitute thoughts of a substance that (b.) X’s states constitute thoughts of a substance. But I think the burden is on the one who accepts (a.) but not (b.) to motivate a relevant difference between the cases.

      Delete
    5. "But I think the burden is on the one who accepts (a.) but not (b.) to motivate a relevant difference between the cases."

      The relevance is the important part, although not in the difference but in the resemblance. It's on the materalist to justify whether they have the relevant resemblance, not for me to justify the difference. They are the one claiming it's relevant.

      Take flight for instance. Birds can fly, and so can planes. Now, why were we able to produce flight with planes? Many of the early failed attempts tried to imitate the flapping motion of the wings of birds. Clearly, there is resemblance, but the flapping is not relevant. We had to focus on the airfoil shape. That was the relevant part, among others.

      We were able to confirm which aspects were the relevant ones to resemble when the plane flew. However, how is any materialist going to confirm which aspects of the brain are the relevant ones to resemble, if he will never see the thought? It would be my burden to prove the plane didn't actually fly. It would not be my burden to prove there was no thought. It's their burden to prove they have the relevant aspects.

      Delete
    6. Billy: However, how is any materialist going to confirm the relevant aspects of the brain to resemble if they will never see the thought?

      Your example of birds is perfect. Compare a blackbird with a Blackbird. A Blackbird can fly at Mach 3.2 at an altitude of 80,000 feet. A blackbird cannot. One is natural, one is man-made. But they both fly due to theory (lift) and construction (shape of wings and means of thrust), which is verified by observation.

      It's the same thing with computers and brains. We have the theory, and we know how neurons and transistors implement the theory. What we don't yet have is the observational aspect of human-level machine intelligence. blackbirds can't fly as high or as fast as Blackbirds, but this is a difference of degree, not kind. Our machines are still at the blackbird stage.

      The theory says that you won't see thought -- all you'll see are symbols swirling in a logic network. What you'll see is the behavior related to those swirling symbols.

      Delete
    7. wrf3,

      This example applies only if one were to assume materialism, which is where the discussion with Daniel has started from.

      There are serious problems with a materialist view of thought, which is why I reject it. I was merely pointing out that if one ignores those problems and goes with a materialist view, there are still problems.

      "We have the theory, and we know how neurons and transistors implement the theory."

      And, for the materialist, that is the problem: You know how neurons work, you know how they behave, you know they have some role in reasoning and thought, but how can you possibly know at all whether the behaviour confirms thought? The absolute closest you could get is that they are regularly connected in some way.

      You say above:
      "We start by observing that intelligence is measured (even if loosely) by sets of behaviors."

      This is your starting point, and this error is what sets you down your wrong path. Behaviour of X only measures intelligence if we assume that X is intelligent. If we don't, then the whole thing falls apart because at no point have you proved that intelligence is necessarily connected to behaviour. Just because we regularly see X connected to Y, it does not follow that X is necessarily connected to Y. In fact, you would have to agree with us in order to say that, as you would have to embrace final causes, which brings you to substances, formal causes, etc. You would have to agree with us, but then you would be agreeing with Feser about computers and intelligence.

      Delete
    8. Billy: This example applies only if one were to assume materialism...

      There are two problems with your response. First, I don't assume materialism (in fact, I am not a materialist). Second, the examples work independently of materialist/immaterialist assumptions. The theories of lift and thrust work regardless of whether you are a theist or an atheist. blackbirds and Blackbirds fly regardless of whether you are an atheist or theist. Computational theory works regardless of belief; computer construction works regardless of belief.

      So the issue then is whether or not behavior is indicative of intelligence. One paper says, "Intelligence is defined as that which produces successful behavior."

      Pray tell, if behavior is not an indicator of intelligence, how do you go about it? What criteria do you use?

      Delete
    9. To think the blackbird/Blackbird comparison can be applied to human/computer comparison is the problem. Flight and thought are just way too different, but if you are a materialist, you have to bring them to the same level, which is what is wrong with materialism. This is why the example only applies with the materialist assumption.

      That paper is wrong. If we want to say that dogs are intelligent to a certain degree, then go ahead, but then you will have to come up with a different term to describe us because we are on a different level entirely. We grasp abstract concepts. That is something dogs can't do at all, even in principle they couldn't do it. Again, this is not a matter of degrees.

      "Pray tell, if behavior is not an indicator of intelligence, how do you go about it? What criteria do you use?"

      Well this is where substantial form comes in. Once you confirm the form, then the behaviour is indicative of that form. The behaviour of communication of abstract concepts is indicative of a rational form. But you reject all that, so you are dead in the water. You can't distinguish real intelligence from the appearance of intelligence otherwise.

      Searle's Chinese Room thought experiment explains the issue. Say that you are in a room with a book of English instructions on how to respond to the symbols passed to you through the door, and you send out a response that could fool a Chinese speaker into thinking they are speaking with someone who knows and understands Chinese. Clearly you don't actually know Chinese; you were just following a set of English instructions, but you successfully sent a message in Chinese without any understanding of it at all. There is basically no difference between this and what a computer does. It also just runs through instructions without any understanding of what it is producing. Successful behaviour cannot, in itself, distinguish whether there is actual intelligence or just the appearance of it.
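      The rule-following point can be put as a toy program (a sketch only; the rule book and phrases below are invented for illustration, not taken from Searle or any real system):

```python
# Toy illustration of the Chinese Room: the "operator" matches the
# shapes of incoming symbols against a rule book and copies out the
# listed reply, with no grasp of what any of the symbols mean.
RULE_BOOK = {
    "你好": "你好！",            # a greeting gets a greeting back
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def operator(symbols):
    # Pure symbol manipulation: look up the shape, emit the reply.
    # Unrecognized input gets a stock "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(operator("你好"))  # fluent-looking output, zero understanding
```

      The output is indistinguishable (for these inputs) from that of a speaker who understands, which is exactly the point at issue: the behaviour alone does not settle whether understanding is present.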

      Delete
    10. Billy: Flight and thought are just way too different...
      The difference is explained by a) the difference between the theory of computation and the theories of lift and thrust and b) the difference between the engineering.

      ... but if you are a materialist...
      I am not a materialist.

      That paper is wrong.
      Well, let's see by using your response.

      [Grasping abstract concepts] is something dogs can't do at all, even in principle they couldn't do it.
      How do you know? My wife was making fun of our dog this morning. We keep our dog's food on the floor in the pantry, and his treats higher up on a shelf in the pantry. When he wants a treat he will look up and to the right. If he really wants a treat, he will shake his head up and to the right. My wife was mirroring his behavior this morning after he had already been given his treat. He wanted more. He really wanted more. He associates "up and to the right" with treats because that's where they're kept.

      So, when you say dogs can't do it "in principle", is it because their brains are smaller, and therefore have fewer states they can get into, and therefore have a smaller range of possible behaviors? Is it because they don't have our speech organs, so they can't communicate the way we can? "Up and to the right" is pretty clear to me even though he can't vocalize it. Or is it for some other reason?

      Once you confirm the form...
      How do you confirm the form?

      Searle's Chinese Room thought experiment...
      Please see my response here, earlier in the thread. Searle's "argument" holds less water than a sieve. It amazes me that anyone who claims to be an experienced thinker gives it any credence at all.

      Successful behaviour cannot, in itself, distinguish whether there is actual intelligence or just the appearance of it.
      Ok, let's take the immaterialist view of it for a minute. You can't sense immaterial things. You can't see immateriality, you can't taste it, you can't touch it, you can't feel it, you can't smell it. So how can you tell it exists, except by the effect it has on material things (i.e. their behavior?)

      Delete
    11. "So, when you say dogs can't do it "in principle", is it because their brains are smaller, and therefore have fewer states they can get into, and therefore have a smaller range of possible behaviors? Is it because they don't have our speech organs, so they can't communicate the way we can? "Up and to the right" is pretty clear to me even though he can't vocalize it. Or is it for some other reason?"

      Having developed vocal organs is irrelevant. Even someone who can't speak can still communicate abstractions, with sign language for instance. If a dog could grasp abstract concepts, then a mature, fully developed dog should be able to communicate them in some way. But they don't. The "up and to the right" motion is not the dog communicating an abstract thought. The dog has developed repetitive behaviour. I think we can agree that the way their brain develops plays a part, but it is precisely because they are dogs, which have a different form than us, that their brains develop differently from ours, being the size they are and having the structure they have.

      "Searle argues that "if we can build a device that can translate Chinese but doesn't understand Chinese" therefore "we can't build a machine that can translate Chinese and understands Chinese". His conclusion simply doesn't follow."

      I never made the extended claim here. I simply said that you could never tell the difference between when understanding occurs and when it doesn't based purely on behaviour. That is the problem with any attempt to say that computers are or ever could be intelligent. You have to assume they are intelligent, then retroactively apply the mechanism of computers to us, which is backwards. We are the ones who are definitely intelligent. It is the intelligence of computers that is what must be confirmed, and no amount of behaviour is going to determine it. That is my point.

      "How do you confirm the form?"

      Ultimately, by determining if there are inherent tendencies. To have inherent tendencies is to have a substantial form. If a thing does not have them, then it is artificial, and computers have absolutely no inherent tendency toward computation. They are artificial. Feser discusses it further here: http://edwardfeser.blogspot.com/2011/04/nature-versus-art.html

      "So how can you tell it exists, except by the effect it has on material things (i.e. their behavior?)"

      The concept of a triangle is immaterial. You can see triangles, and you can imagine triangles, but you never see or imagine the CONCEPT of a triangle. The concept is something you can have in your intellect, not your imagination or senses. Even if thinking about the concept of a triangle commonly comes with an image of it in your imagination, the image could be of an isosceles triangle, or a right triangle, or any different triangle, but the concept that determines what makes a triangle a triangle is the same no matter what image you happen to have. A clearer way to show it is to imagine a circle, then imagine a 10,000-sided regular polygon. Your imagination wouldn't be able to distinguish these two shapes. A 10,001-sided regular polygon would also be indistinguishable. However, you clearly can distinguish these as different concepts whether or not you can distinguish them in your imagination. You could imagine a circle and still be thinking entirely of a 10,000-sided shape.

      In order for you to grasp the concept of a triangle, the concept must exist in some sense, but clearly it's not some triangle you have seen, nor is it an image in your imagination, and we didn't have to mention anything about, say, the behaviour of the brain when you think about it either. All of this can be ignored entirely and you can realise you are grasping something immaterial.

      Delete
    12. It is the intelligence of computers that is what must be confirmed, and no amount of behaviour is going to determine it. That is my point.

      Wouldn't this line of thought also motivate skepticism about intelligence of other humans as well?

      Delete
    13. Anonymous wrote: Wouldn't this line of thought also motivate skepticism about intelligence of other humans as well?

      In their world, it wouldn't. Because they have the form of a human they have the function of a human.

      It has been almost 50 years since I took a design course, but I remember the maxim:

          function → form

      i.e. "form follows function".

      The Thomist wants to take what is nothing more than a heuristic and turn it around:

          form → function

      i.e. "function follows form" (i.e. the inverse relationship holds. And, if the inverse relationship holds, then we might as well say that "form ↔︎ function", which is clearly false).

      Now, form → function can be useful in identifying predators. Something with sharp teeth and claws might eat you, so perhaps the Thomists are simply seeing the world through their deep-seated survival instincts.

      But, form → function isn't true in Nature. Not all bees can sting. It isn't true in the digital age (e.g. here). It can also be considered a source of an evil kind of racism/bigotry. After all, because the Samaritans were "half-breeds", they were polluted forms.

      Delete
    14. You've given plenty of evidence that you didn't even bother trying to understand Thomistic ideas, let alone study the various detailed arguments Thomists give in their defense.

      Delete
  17. Interesting post, Ed,
    A question that I assume is not off-topic:

    As I understand it, in Thomism, things like color or smell aren't just in our minds but actually exist in the universe. Does that mean that we could build a robot that experiences something like qualia, or are qualia completely immaterial? Animals have experiences (I assume) and they are material beings, so I had this doubt.

    ReplyDelete
  18. Fascinating post, Ed. I hope you won't mind my sharing this comic that seems relevant.

    https://xkcd.com/505/

    ReplyDelete
  19. If people want to create intelligence they should just have sex.

    ReplyDelete
  20. I don't really have a problem with affirming that machines probably aren't intelligent in the way that is meant by some philosophers, but I think this post may be missing the point as to why many people talk as if machines are/can be intelligent: machines now (or soon will) pose risks/provide opportunities that historically have only come from other intelligent beings. They can (or will be able to) beat us at games, prove new mathematical theorems (at least, at a syntactic level), discover new drugs, manipulate us into buying things we wouldn't under reflection, or otherwise re-arrange the world to maximize some decision function that may or may not coincide with what will further human flourishing (I happen to think it won't so coincide, at least by default anyway).

    This may be what Dijkstra was getting at when he quipped that:
    'The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.'

    ReplyDelete
  21. If computers really are thinking, that would be because they’ve somehow got brains hidden somewhere (if you’re a materialist)

    Voltaire- Brains

    ReplyDelete
    I am persuaded by the post that AI can never become intelligent in the sense in which scholastics understand what it means to be intelligent/rational, i.e. capable of abstracting essences from material conditions.

    But there must be something to the principle of computational equivalence (a la Stephen Wolfram) for this model of nature does lead to important new scientific and even logical discoveries (detailed for example in Wolfram’s book, ‘A New Kind of Science’). How can this be if the principle is false?

    A separate point, but an Aristotelian model of nature, in contrast to the computational equivalence / complexity model, does not seem to lead to *new* scientific discoveries.

    ReplyDelete
    Replies
    1. Daniel Hegedus: How can this be if the principle is false?

      Classical mechanics is known to be false, but it led physicists to important new scientific discoveries for centuries before its defects were discovered. The fact that a given model or principle is useful is no guarantee that it is true. It may merely have operational validity within a particular domain.

      Delete
  23. True A.I. is very likely impossible, like travelling faster than light. Which is a blow to Hard Science Fiction fans.

    At best we might create some really advanced V.I. but they would just be what we have now only vastly more sophisticated. There will never be a thinking machine.

    ReplyDelete
    Replies
    1. Why do you think so? I agree with you, but I want to hear your reasons.

      Delete
    2. I start with Searle's Chinese Room, and I just go downhill from there.

      Delete
  24. Quantum computers and wetware complicate the issue of whether AI will remain just an artifact used by intelligent beings or whether it might in fact be possible to create an intelligent substance. No doubt we all would agree we could create a completely new sentient being using biology(DNA editing)?

    ReplyDelete
    Replies
    1. Would we? I am not convinced of that. Actually, I believe the opposite to be true: that we wouldn't want to create a "completely new sentient being" using DNA editing.

      Delete
    2. My point was not a moral one. Nor did I make an argument for creating a new sentient being, just that it is possible to create a new sentient substance biologically.

      Delete
  25. I wish you guys would pay closer attention to the difference between algorithmic programs and neural networks, as suggested by Mike M in one of the first comments.

    I personally agree with Prof. Feser that an algorithmic program can't actually be thinking. But a neural network doesn't implement logic gates, and it isn't specified by a programmer. It's conceivable that a neural network could really be intelligent in the same sense we are. After all, our brains are neural networks, not algorithmic computers.

    Thanks.

    ReplyDelete
    Replies
    1. Your brain isn't intelligent.

      You are.

      Delete
    2. https://www.analyticsindiamag.com/neural-networks-not-work-like-human-brains-lets-debunk-myth/

      Delete
    3. Agreeing with Billy and grateful for Tom's link, I would add:

      Brains, whether they are neural networks or not, do not think: human beings think, and that thinking takes place in what we call the mind, not in what we call the brain. The human mind is certainly not a neural network.

      Thinking is an activity that can only take place in a substance because the mind is a unity and therefore it demands a substantial unity in which to inhere. No artefact (e.g. an artificial neural network) is a substantial unity: at best it is an accidental unity. There is therefore a fundamental difference between a human mind and an AI neural network. The former thinks: the latter only appears to think.

      Delete
    4. I looked at Tom's link, and the gist is that human brains are just a whole lot more complex than our current neural networks. Various technical details are different, but again that's just because our neural networks are still too simple.

      The key points are that neural networks are not algorithmic. In other words, they are not programmed with instructions. Neural networks aren't digital. They aren't representations of things, but they are the thing itself.

      When you say the brain is not the mind, you're just assuming what you're trying to demonstrate. Also, it's curious how Jonathan says the mind is a unity, because that is manifestly untrue. Consciousness may provide us with a sense of unity, but that's clearly an illusion. There is much more going on in the brain that is unconscious.

      Delete
    5. John B. Moore wrote: The key points are that neural networks are not algorithmic.

      Even if it's true (it isn't in many cases), it's a non-issue. An algorithm is just a computation that terminates. So the important aspect is computation, not algorithm.

      Neural networks aren't digital.
      It doesn't matter. Computation theory doesn't care how the computations are carried out.

      Delete
    6. After all, our brains are neural networks, not algorithmic computers.

      John, I could be behind the times on new developments, but I think you may be employing an equivocal term here.

      Our brains have neurons. That's what the cells have been named. Because they are neurons, the complexes of them that constitute our brain structures are "networks of neurons", i.e. "neural networks".

      The neural networks that we get in computing are not composed of neurons. The networking that they do in operation was NAMED "neural networks" not because they are networks of neurons, but due to SOME similarities with our brains' networks of neurons.

      Similarity is not identity. What is still unclear is whether what happens in our neurons and what happens in the computer network is the SAME in all ways that matter. Part of the reason for this unclarity is precisely that we don't know all of what matters, and we certainly don't know how the firing of our neurons works in connection with distinct thoughts. So it is (as far as I can tell, at least) no better than a hypothesis among computer geeks that the "neural networks" of computing are doing the same sort of thing that our brains' networks of neurons are doing.

      Nor is that the only problem: if "reasoning" is, in addition to the production of certain neuron sequencing, the act of an immaterial power of soul called the "mind", then having neural networks do exactly what the brain does is not enough to produce an artificial mind.

      Delete
    7. Let’s try this again. Wrf3: ‘An algorithm is just a computation that terminates.’

      Wrong. An algorithm is not required to terminate. Many algorithms are purposely written not to terminate. Indeed, no algorithm (or, what amounts to the same thing, no set of enumerable rules) can tell in every instance whether a given algorithm will halt or not. Alan Turing proved this before algorithmic computers were even invented.
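      The point about purposely non-terminating procedures can be sketched as follows (the event-loop example and its names are invented for illustration; whether such a procedure counts as an "algorithm" is exactly the terminological dispute here):

```python
import itertools

# Many useful procedures are deliberately non-terminating: an event
# loop services an unbounded stream of requests and is never meant
# to halt. (Turing's result is the separate, stronger point: no
# general procedure can decide, for every program, whether it halts.)
def event_loop(handle, requests):
    for req in requests:
        handle(req)

# event_loop(print, itertools.count())  # would run forever

# The same logic, exercised on a finite stream for demonstration:
seen = []
event_loop(seen.append, range(3))
print(seen)  # prints [0, 1, 2]
```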

      Delete
    8. John B. Moore: Neural networks… aren't representations of things, but they are the thing itself.

      Not remotely true. Consider, for instance, a neural network being used to direct a self-driving car. The network is not a car; it is not a road; when it generates a heuristic that means, ‘Stop at a red light,’ it is not a red light. The distinction between signs and things signified is just as essential for neural networks as for any other information system.

      Now, if you really mean that the neural network ‘just is’ a brain, and operates in exactly the same way as a human brain, this claim is easily shown to be false. If you mean that a sufficiently complex neural network inevitably would be functionally equivalent to a human brain, your claim is based upon facts not in evidence. The specific form of a neural network is every bit as important as the fact that it is a neural network. Indeed, it may well be that an electronic ‘neural’ network is fundamentally different in its mode of operation from a network of actual neurons. Certainly we have no external empirical knowledge sufficient to rule out that possibility; which leaves us to consult philosophers of mind, such as Dr. Feser.

      Delete
    9. >The key points are that neural networks are not algorithmic.
      @John B. Moore I'm not sure where you get your information, as this statement is known to be false to anyone with even a cursory understanding of neural networks. Its falseness is also discoverable with a simple Google search.

      "Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns." - https://skymind.ai/wiki/neural-network

      Delete
    10. Tom Simon: Wrong. An algorithm is not required to terminate.

      Let me refer you to Fundamental Algorithms, Second Printing, pp. 4-5:

      Besides merely being a finite set of rules which gives a sequence of operations for solving a specific type of problem, an algorithm has five important features:

      1) Finiteness. An algorithm must terminate after a finite number of steps. ... (A procedure which has all of the characteristics of an algorithm except that it possibly lacks finiteness may be called a "computational method." ...)


      Now, if you want to argue with Donald Knuth, you might as well also argue with St. Paul.

      Delete
    11. It's funny when a Protestant complains about arguing with St Paul.

      Delete
    12. OK, we're starting to have a good discussion about neural networks here, so that's great. Just some very basic definitions: I think of an algorithm as a set of commands. A neural network, on the other hand, is a set of switches, like transistors, in an electrical circuit.

      With a neural network, there are no commands. It's just electricity flowing through wires and switches. That's why I said a neural network is "the thing itself." It's just electricity, and it doesn't necessarily mean anything.

      Nobody "designs" a neural network to do anything in particular. Instead, people might try to train the neural network. This is a big difference. And the training might be of two types: There is artificial selection or natural selection.

      Lots of people point out that electronic transistors are very simple whereas our neurons are extremely complex. That's true, but the key thing is that they both act as switches regulating the flow of electricity (or neuro-electric currents).
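      The "both act as switches" idea can be sketched with a single hand-weighted threshold unit (a toy illustration only; the weights here are chosen by hand, whereas in real networks they are learned):

```python
# A minimal artificial neuron: a threshold unit that "fires"
# (outputs 1) when its weighted input exceeds a bias, much as a
# transistor switches on past a threshold voltage.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > bias else 0

# With these hand-picked weights the unit behaves like an AND gate...
and_gate = lambda a, b: neuron([a, b], [1.0, 1.0], 1.5)
# ...and with a lower bias, like an OR gate.
or_gate  = lambda a, b: neuron([a, b], [1.0, 1.0], 0.5)

print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([or_gate(a, b)  for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
```

      Whether such switching, however elaborately composed, amounts to thought is of course the very question the thread is debating.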

      Theists might cling to the hope that biological neurons do something special, above and beyond the mere electronic switching. Meanwhile, AI developers are growing artificial neural networks that do more and more of the same things our brains do. As time goes on, it will be harder and harder for theists to postulate that special something.

      Delete
  26. No serious person believes in magic? Well, certainly from a theological perspective demons are capable of præternatural things, but of course stage performers like Penn and Teller are just performing, though it would be an interesting movie to imagine that magicians are really magical but, to avoid persecution and make money, pretend to be using sleight of hand, etc.

    ReplyDelete
  27. Reading the post and now some of the comments here, it's tempting to think that, as with many other philosophical disputes (especially in the metaphysics of time or in ontology), this whole argument is a mere "verbal dispute" over what counts as "intelligent".

    I am not really endorsing this line of thought, but like I said, it's tempting.

    ReplyDelete
    Replies
    1. In a sense, it is a verbal dispute of exactly that kind. Some people are arguing that ‘intelligence’ merely means ‘what a computer can do’, and other people argue that it means ‘what a human mind can do’. For the former set, artificial intelligence is tautologically real: computers can do what computers can do. For the latter, AI actually poses interesting questions. Defining the problem out of existence is a cop-out.

      Delete
    2. >"Some people are arguing that ‘intelligence’ merely means ‘what a computer can do’

      For nearly every subject there is a set of people who will make ridiculous claims about the topic, oftentimes because they know very little about it.

      I say this trivially true statement because I've never met anyone in the industry in which I work, software development, who says or even thinks that "'intelligence' merely means 'what a computer can do'". In point of fact, most developers say "I do X when Y happens" when talking about how the program they implemented works.

      Delete
    3. Son of YaKov,

      You should know by now that I'm not swayed by what you think is plain or coherent.

      "Robots can walk and humans can walk but only humans can think and Robots cannot think."

      Begging the question.

      "Indeed Birds can fly but I can flap my arms all day & I will just look silly."

      Your ability or inability is irrelevant. Machines are the issue. Do machines simulate what they do, or do they really do it? How could we tell the difference? If neurons make decisions, and logic gates make decisions, where is the difference? Isn't the rhetoric about "intentionality" just a word game? I won't ignore these questions.

      Delete
  28. A robot walks across the room. Is it walking or simulating walking? I say it's walking.

    ReplyDelete
    Replies
    1. Walking can be completely observed by an external observer. Thinking does not have that property: the process of thinking is only observable by the thinker, and indeed the process of observing thought itself requires thought. The situations are not analogous.

      Delete
    2. >A robot walks across the room. Is it walking or simulating walking? I say it's walking.

      Fallacy of equivocation. It's like saying a rocket flies through the air and a bird flies through the air, ergo they both do so in the same way. Either the rocket is flying because it is flapping its wings, or the bird is flying because it is shooting burning propellant out of the back of its arse.

      Here is a proper analogy.

      A real robot walks across an actual room. A simulated robot from the Fallout 4 video game simulates walking across a virtual room in the game. The latter isn't really walking.


      Delete
    3. Son of Ya'Kov, Fallacy of equivocation? I think not. Now robots walk in the same ways as humans. They may not be made of the same materials, but the mechanics is the same. Walking is a description of mechanical motion. It's not a description of materials.

      Delete
    4. Tom Simon, you address modes of observation. But that says nothing about the phenomena.

      Delete
    5. DJindra,

      100% fallacy of equivocation.

      >Now robots walk in the same ways as humans. They may not be made of the same materials, but the mechanics is the same. Walking is a description of mechanical motion. It's not a description of materials.

      In other words, rockets fly and birds fly, ergo rockets flap their wings and birds shoot propellant from their arse, since flying is a description of motion.

      Stop being goofy.

      Dude, that a robot can walk and a human can walk is trivial. Can a computer think, or just simulate thinking? Well, thinking and walking are not the same. Just as flapping your wings is not the same as burning propellant.

      Thus your implied analogy doesn't tell us anything about the phenomena either.

      Delete
    6. Son of YaKov,

      Check out this machine:

      https://en.wikipedia.org/wiki/AeroVironment_Nano_Hummingbird

      The only difference between a robot walking and a human walking is the object doing the walking. There is no basis to claim the robot merely simulates.

      I'm not equating walking with intelligence. I claim that, like walking, and provided the words describe real things rather than self-serving ones, intelligence is not about objects. It's about action.

      The claim that a properly constructed collection of logic gates cannot act indistinguishably from a properly constructed collection of neurons has no basis.

      The gracious Mr. Feser compares an operating computer to a passive piece of scratch paper on which are written some logical symbols. If you want to be fair, please point out that equivocation.

      Considered by themselves, the "decisions" neurons make are no different than the "decisions" logic gates make. I know full well that this inconvenience is hidden under the highly speculative hylemorphic dualism. IOW, it depends on a magical ingredient that will always be magic. That's not good enough for goofy old me.

      Delete
    7. >I'm not equating walking with intelligence.

      Then you are wasting people's time with that irrelevant analogy.

      >I claim that, like walking, and if the words describe things real rather than things self-serving, intelligence is not about objects. It's about action.

      That is also wrong. Intelligence is about conceiving of things.

      >The claim that a properly constructed collection of logic gates cannot act indistinguishably from a properly constructed collection of neurons has no basis.

      By that standard an abacus is "intelligent". As is a coin tossed in the air.

      >Considered by themselves, the "decisions" neurons make are no different than the "decisions" logic gates make.

      Neurons are not computers and they don't make decisions. That begs the question.

      >I know full well that this inconvenience is hidden under the highly speculative hylemorphic dualism. IOW, it depends on a magical ingredient that will always be magic. That's not good enough for goofy old me.

      Well you are still holding on to reductive materialism which is far more magical in that it has non-existent minds thinking & performing intelligent actions.

      Anyway, if your analogy isn't about thinking, then why bring it up?

      Delete
    8. Son of YaKov,

      "Intelligence is about conceiving of things."

      For argument's sake, I'll accept this definition. But "conceiving of things" is not an object. It's an action. Conceiving is a verb, after all. So my analogy works at the level I intended. Walking is action. Conceiving is action. If an object performs the action, it's not a simulation of the action. The issue comes down to the definition of the action itself. IOW, what is "conceiving?" What would give anyone high confidence that this action cannot be done by a computer?

      "By that standard an abacus is intelligent."

      False. That's a straw man.

      "Neurons are not computers and they don't make decisions. That begs the question."

      I didn't say neurons were computers. That's another straw man. But there is plenty of research showing that collections of neurons make decisions.

      I'm not interested in your opinion of "reductive materialism."


      Delete
    9. Son of YaKov: By that standard an abacus is "intelligent". As is a coin tossed in the air.

      Both statements are false.

      If you think about the model of a universal computer, i.e. a Turing machine, an abacus is equivalent to the Turing machine's tape. It's the medium by which the machine interacts with the outside world. Nobody considers a subset of a system to be intelligent.

      A flipped coin, however, is different. It non-deterministically computes one bit of information. It embodies two of the fundamental components of computation: combination (the two sides, heads and tails) and selection (the result of the coin toss).

      But one bit of information isn't considered intelligent, either, because it has a trivially small range of behavior. That's because it doesn't embody the other part of computation, which is composition.
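      The three components just named (combination, selection, composition) can be sketched in a few lines of Python. This is only an illustrative toy of my own; the function names are not from any particular source:

```python
import random

# A flipped coin: non-deterministic selection between two combined
# alternatives (heads/tails) -- a single bit, and nothing more.
def coin():
    return random.choice([0, 1])

# Composition is what the lone coin lacks. NAND is functionally
# complete, so composing NAND gates yields any boolean function --
# here XOR, and from it a half adder.
def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

def half_adder(a, b):
    return xor(a, b), a & b  # (sum bit, carry bit)

assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```

      Each gate taken by itself has a trivially small range of behavior; whatever non-trivial behavior the whole exhibits comes entirely from composition.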

      Delete
    10. Djindra,

      You haven't said anything plain or coherent. Is walking analogous to thinking/intelligence, or is it not?

      Robots can walk and humans can walk, but only humans can think and robots cannot. You have not made the case otherwise, and walking robots don't tell me anything.

      If thinking is an action, you haven't made the case that robots or computers can perform that action. Indeed, birds can fly, but I can flap my arms all day & I will just look silly.

      When you have a coherent way to express a thought let me know. I am a busy man.

      >I didn't say neurons were computers. That's another straw man. But there is plenty of research showing that collections of neurons make decisions.

      This is a distinction without a difference. So a mechanical calculator is "thinking" then.

      Bizarre.

      Delete
    11. Do note Flying is an Action. Just because Robots can do actions humans can do doesn't mean they can do all actions.

      Delete
  29. A robot walks into a bar and orders a drink. The bartender says, "Hey, we don't serve robots," and the robot says, "Oh, but someday you will." But of course the walking part is just anthropomorphizing.

    ReplyDelete
    Replies
    1. Why is it "just anthropomorphizing?" An ant walks.

      Delete
  30. As Michael Arthur Simon said fifty years ago*: if the only way we can describe the behavior of a thing that behaves the way a human being behaves is by using the vocabulary of mental states and events, then we cannot deny that the thing has consciousness.

    * https://www.jstor.org/stable/20009291

    ReplyDelete
    Replies
    1. This claim fails on the same grounds as the Sapir-Whorf hypothesis. The truth or falsehood of a proposition never depends on the vocabulary of the particular language in which it is expressed. Otherwise, you could have the absurd situation in which (for instance) a given mathematical theorem was true in English but false in Swahili, because the Swahili language happens not to have a particular word that is necessary to express the theorem in English.

      The thing to remember is that words are the signs and not the things signified. If an object has no sign pointing to it, that does not mean the object isn’t there. It is always possible to put up more signs, and it is always possible to coin new words.

      Delete
    2. Only means only. I didn't say "the vocabulary of the particular language in which it was expressed".

      Nor did I say that words are the things signified.

      Delete
  31. Similar to some previous comments, I'm curious about what is meant by "intelligence". Is it simply defined as what a human mind can do? Then of course the entire field of "artificial" intelligence is trivially a failure by a definition of terms. AI as a field has essentially always understood intelligence functionally: "something is intelligent in so far as it performs tasks said to be done only by intelligent beings", namely humans, and I don't think "intelligence" in the broader sense has been understood to mean just "that which is human", but more specifically to indicate some degree of being able to perceive causal relationships, to model the world, to predict, to calculate, to be creative, etc. as opposed to other human tasks, such as feeling, being conscious, perceiving morality, or acting on intuition or instinct.

    Additionally, what is the significance of the distinction between "artifacts" and non-artifacts? Why should the existence of an intention of a designer or the particulars of the process from which something came into being, whether intentional or natural, have anything to do with what the thing in question actually is once made? If there is a "cup" sitting on my desk, it was clearly a designed artifact. But if I find a stone that has been naturally hewn into the shape of a "cup", according to Dr. Feser, it's not a "cup". In this sense, some items, like logic gates, are ruled out from participating in "intelligent" systems, simply because they have a designer? Why should that matter? What matters is what intelligence itself is and how and whether it can be instantiated, rather than how it is instantiated.

    Additionally, while computers and AI algorithms currently do not interpret the symbols they produce (in a general sense), I don't think there is an a priori reason they couldn't do so recursively. On the subject of implementation, neurons are in fact interpreted in various ways, but the key mapping from computer to brain is not in thinking of neurons as logic gates per se, but in simply viewing the brain as a collection of simple units (neurons) that are individually mechanistic. If the mechanisms of the units are rebuilt and the units are recombined (with some sufficient degree of fidelity), then the system is rebuilt. There are circuits that do simulate neural spiking patterns. Would a human brain slightly altered electrochemically, say by common medications which rebalance neurotransmitter levels, lose its status as "intelligent"? What if a circuit was put inside the brain (or other locations in the nervous system) and the brain's neurons naturally learned to interface with it (which has been successfully completed in some work)? What if neurons were incrementally replaced by other engineering artifacts (like in the Ship of Theseus)? How far could the natural brain be altered before it is no longer an "intelligent" brain?

    Most obviously or directly, what if an artificial brain were simply reconstructed? i.e. the physical mechanism of the brain were just synthesized directly through bioengineering? Would that be intelligent (assuming a materialist view, which was claimed to be amenable to Dr. Feser's claims) in Dr. Feser's understanding? Would that still count as a simulation, since it is an artifact, even though that artificial brain would be literally the same physical processes occurring in natural human brains?

    Finally, what synergistic causal properties are humans claimed to have as "essences"? What evidence is claimed for non-reductionist synergy? I can agree with notions of emergence or systems which have complex interactions that cannot be (at least tractably) explained in terms of the individual parts, but that intractability is due to the complexity of the system, rather than to any "magic" properties the parts gain when they come together as a whole.

    ReplyDelete
    Replies
    1. As a correction to my above post in the second paragraph, I meant to say "What matters is what intelligence itself is and how and whether it can be instantiated, rather than with what, if any, purpose it is instantiated with, or the particular process by which the intelligent object is developed between two processes which result in a system meeting the material properties of intelligence (e.g. between natural human procreation say, and potentially AI methods)."

      Delete
    2. Are you suggesting Emergence is a kind of ignorance?

      Delete
    3. Similar to some previous comments, I'm curious about what is meant by "intelligence". Is it simply defined as what a human mind can do? Then of course the entire field of "artificial" intelligence is trivially a failure by a definition of terms. AI as a field has essentially always understood intelligence functionally: "something is intelligent in so far as it performs tasks said to be done only by intelligent beings", namely humans, and I don't think "intelligence" in the broader sense has been understood to mean just "that which is human", but more specifically to indicate some degree of being able to perceive causal relationships, to model the world, to predict, to calculate, to be creative, etc. as opposed to other human tasks, such as feeling, being conscious, perceiving morality, or acting on intuition or instinct.

      Intelligence is an ambiguous term. When we say "clever dog"; or "he's a very talented chess player"; or "man is a rational being" – we are referring to different kinds of intelligence. You're right that AI practitioners first define intelligence in a functionalist way, but then they go back on this when they start talking about the "ethical implications" of AI, suggesting an ontological equivalence between artificial and human intelligence once a certain degree of functionality is reached.

      Intelligence refers really to what the Greeks call the nous, which is the intuitive mind. The first act or energy of the nous or intuitive mind is the immediate perception of being or existence. This is an immaterial or spiritual action or energy, not something that can ever be produced or induced in material artefacts, i.e. "artificial intelligence" is impossible, even nonsensical & oxymoronic. A human chess player and a machine chess player are both "intelligent" in the sense of modelling chess and calculating moves & positions, but only the human player has the immediate awareness of the existence of the chess board: only he is "aware that he's playing chess". And this is what is meant primarily by intelligence.

      Delete
    4. Additionally, what is the significance of the distinction between "artifacts" and non-artifacts? Why should the existence of an intention of a designer or the particulars of the process from which something came into being, whether intentional or natural, have anything to do with what the thing in question actually is once made? If there is a "cup" sitting on my desk, it was clearly a designed artifact. But if I find a stone that has been naturally hewn into the shape of a "cup", according to Dr. Feser, it's not a "cup". In this sense, some items, like logic gates, are ruled out from participating in "intelligent" systems, simply because they have a designer? Why should that matter? What matters is what intelligence itself is and how and whether it can be instantiated, rather than how it is instantiated.

      In Aristotelian physics there is the distinction between natural and artificial forms. The plastic cup, the stone cup, the wooden cup – all are equally artificial forms. It is the underlying plastic, stone, and wood that is the natural form, to which the artificial form of "cup" has been added by human engineering. The natural form of a wooden desk is also wood, with the added artificial form being the desk. Now the ontological significance of this distinction between natural and artificial forms is that natural forms act or behave in a certain way according to their intrinsic nature, whereas artificial forms act or behave in a certain way according to an extrinsic secondary nature which has been added to the primary nature. Now in terms of intelligence what this means is that acts of intelligence arise naturally out of human beings themselves, whereas acts of intelligence arise only artificially and only in an analogous or mimicked way in AI's. It's the difference between a man who speaks through his own intrinsic power of speech, and a puppet which "speaks" through the orchestration of the puppeteer and ventriloquist.


      Additionally, while computers and AI algorithms currently do not interpret the symbols they produce (in a general sense), I don't think there is an a priori reason they couldn't do so recursively. On the subject of implementation, neurons are in fact interpreted in various ways, but the key mapping from computer to brain is not in thinking of neurons as logic gates per se, but in simply viewing the brain as a collection of simple units (neurons) that are individually mechanistic. If the mechanisms of the units are rebuilt and the units are recombined (with some sufficient degree of fidelity), then the system is rebuilt. There are circuits that do simulate neural spiking patterns. Would a human brain slightly altered electrochemically, say by common medications which rebalance neurotransmitter levels, lose its status as "intelligent"? What if a circuit was put inside the brain (or other locations in the nervous system) and the brain's neurons naturally learned to interface with it (which has been successfully completed in some work)? What if neurons were incrementally replaced by other engineering artifacts (like in the Ship of Theseus)? How far could the natural brain be altered before it is no longer an "intelligent" brain?

      The computer can recursively analyse its own algorithms a trillion trillion times and never become aware of its own being or existence, because it lacks that primary perception of being which is an immaterial act of a rational mind. This primary perception of being is not a recursive action or a reflective thought, but an immediate awareness or perception arising out of the vital intelligence.

      Delete
    5. Most obviously or directly, what if an artificial brain were simply reconstructed? i.e. the physical mechanism of the brain were just synthesized directly through bioengineering? Would that be intelligent (assuming a materialist view, which was claimed to be amenable to Dr. Feser's claims) in Dr. Feser's understanding? Would that still count as a simulation, since it is an artifact, even though that artificial brain would be literally the same physical processes occurring in natural human brains?

      The brain is not intelligent. It is a medium or interface through which the immaterial human intelligence accesses the material world. You could construct a marvellous artificial brain with wonderful powers of sense / detection / image-making, but without uniting it to the immaterial and rational mind there is no intelligence or abstract understanding present.

      Finally, what synergistic causal properties are humans claimed to have as "essences"?

      Reason.

      What evidence is claimed for non-reductionist synergy? I can agree with notions of emergence or systems which have complex interactions that cannot be (at least tractably) explained in terms of the individual parts, but that intractability is due to the complexity of the system, rather than to any "magic" properties the parts gain when they come together as a whole.

      You are asking how we know there are essences / forms / natures in things, and not mere parts with accidental relations forming artificial systems. The answer is that the one is the cause of the many, not the many of the one. Systems do not "emerge" out of parts; it is more true to say that parts emerge out of systems. A bee is not one particular accidental assemblage of atoms or whatever; rather, a bee is something whose bee-nature or bee-essence unites atoms, compounds, cells, etc., together in a bee-like way.

      Physicists are obscurely discovering this for themselves because their "sub-atomic particles" seem to appear and disappear out of existence; the truth is that it is the underlying nature that is causing those particles to exist, not the particles that are causing the nature or "system". You see a dog and think that it can be "reduced" to its atomic or sub-atomic parts, and that the "dog nature" is something merely conceptual, a mere mental category. But no, the dog nature is very real and it is causing those atomic and sub-atomic particles to exist, and building them up in such a way to form the visible structure of the dog – but the dog essence is prior to and more fundamental than that visible structure which it causes. This discussion is beyond the realm of empirical evidence; it belongs to the realm of metaphysics or first principles.

      The "evidence" that what I'm saying is true is that if you do away with the oneness of natures or essences in the world, what you end up with is a mere formless sea of indistinct particles (or "strings", or whatever other abstract mathematical concept the physicists are dreaming up) upon which forms, natures, categories are mentally and nominally imposed; but this ultimately does away with all rational thought and philosophy, and all real knowledge of the world, and turns everything into mere equivocation and word-games (which is in fact the conclusion of our postmodern "philosophers" as far as I'm aware).
The ethical and political consequence of this is that everything is reduced to "power struggles", because there is no real human nature and no real natural law to unite us; only individuals struggling to impose their wills upon each other. This is the pseudo-philosophy of chaos and tyrants, espoused by the likes of Nietzsche and is what's leading to civilisational decay.

      Delete
    6. Actually, I want to modify my statement that "the brain is not intelligent". This is true if you take the brain as a mere thing-in-itself and abstract it from the underlying human nature of which it forms a part. A brain has no intelligence in and of itself. However, because the intelligent human nature, the vital soul, is united to the human body: you can say that the brain is intelligent, insofar as it participates in the rational intelligence which informs and underlies it. But in this sense we can also say that the human hand or the human foot is intelligent; although the brain plays a more active role in human thought than the hand or foot, still the hand and foot (and every part of the human body) is united to the human intelligence with a vital union, and therefore can be said to be intelligent.

      Delete
  32. When I ordered Aristotle's Revenge a few weeks ago, it gave a crazy delivery date range of something like March 2019-July 2023. Actually, I don't remember the range but at the time it seemed to be that crazy. I was expecting it later vs. sooner.

    Today's email relief: Aristotle's Revenge has shipped.

    ReplyDelete
  33. Tom has nailed you right there djindra.

    Where would you Gnu Atheists be if you didn't have fallacies of equivocation?

    ReplyDelete
  34. I, too, feel that "AI" will remain a Sci Fi pipe dream until well into the distant future (the qualitative difference between Thinking and appearing to Think would probably involve the difference between "thoughts" produced as an *end-result* of programming and Thoughts begotten, internally, by other Thoughts, which are initially triggered by sensations; find me an artificial "neural network" that can generate Thoughts from sensations, then more complex Thoughts as the original Thoughts interact, etc). On the other hand: unlike the OP, I wouldn't use my argument against the feasibility of AI as a Trojan to slip in some back door hosannas for the kick ass Creations of a (Bearded, Anus-Free, Vaguely Levantine Sky Giant). A Catholic using the term "magical thinking" unironically is too ironic to go unremarked, I fear. Unless "magic" is defined in a way convenient to the OP's belief system and in such a way as to render transubstantiation, virgin births, eternal lakes of fire, cosmic sin-expiation-for-bloody-sacrifice bargains and resurrections/ascensions and Holy Ghosts as Scientific Concepts. I am, respectfully, All Ears.

    ReplyDelete
  35. The Last Superstition. Go read it now, for the sake of your soul.

    ReplyDelete
  36. Actually, magic might be the only way of producing artificial intelligence. St. Augustine refers to the Egyptian hierophant Hermes Trismegistus' writing about the use of magical incantations to bind daemonic spirits to idols of the gods; Shakespeare in his Tempest writes about a spirit being bound to a tree by magical incantation; and there's the old Jewish legend of the golem which is also a construction of magic. Now binding a disembodied intelligence (e.g. a demon) to an artefact – that is real "artificial intelligence", intelligence in an artefact.

    ReplyDelete
  37. " not in and of itself carrying out logical operations, processing information, or doing anything else that might be thought a mark of genuine intelligence – any more than a piece of scratch paper on which you’ve written some logical symbols is carrying out logical operations, processing information, or the like."

    There is a critical and fundamental difference between a logic gate and that scrap of paper, in that the logic gate *physically implements and realizes the logical computation*. You don't need to pull the answer out of the ether (i.e., realize the logical operation in your neurons), because it will do so by virtue of physical law.
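    The difference can be made vivid with a toy sketch (the names and the tiny evaluator are mine, purely for illustration): a written expression does nothing on its own, while a gate, even a simulated one, maps inputs to outputs by its own causal structure.

```python
# The string merely denotes an operation; by itself it computes nothing.
scratch_paper = "A AND B"

# A simulated gate: its "causal structure" (here, Python's evaluator)
# actually carries the inputs to the output.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

# The inert symbols yield a result only when some mechanism reads them:
def evaluate(expr: str, env: dict) -> bool:
    left, op, right = expr.split()
    assert op == "AND"
    return and_gate(env[left], env[right])

assert evaluate(scratch_paper, {"A": True, "B": False}) is False
```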

    You may just as well say that a human is not carrying out his own thoughts, that they are the result of synaptic spikes - mere biochemical reaction, no different than throwing tinkertoys down stairs. But why can't tinkertoys falling down stairs be intelligent? Seems more like a failure of your imagination than an obviously true fact about reality.

    I've seen this one go around in circles for so many years. But it always seems to me to be sweeping the problem of human intelligence under the rug: assuming your entire argument holds water, why does it simply not also apply to humans?

    Ink scribblings may have no meaning, but neither do synaptic spikes in human brains, or sound waves and gesticulations produced by human bodies.

    ReplyDelete
    Replies
    1. You're simply begging the question in favor of materialism. In case you haven't realized yet, Feser is not a materialist philosopher (implying, among other things, that he holds that the mind and the brain are not the same). And he gives several very strong arguments, in various places, why that is so (arguments with which this post's intended audience is supposed to be familiar, by the way, since their conclusions, as well as the Thomist framework in general, are assumed from the start).

      Delete
    2. You may also be interested in knowing that Feser usually characterizes the modern tendency of trying to explain everything in a materialistic manner precisely by resorting to the metaphor of sweeping all the dirt under the rug... and then proceeding to get rid of all that dirt which is now under the rug by... sweeping it under the rug. (Hey, if the method has worked so well for everything else then it should also work fine for that last step we still haven't quite figured out, materialistically speaking, aka the mind, right?)

      Delete
    3. Anonymous,

      No sane person would try to measure the horsepower of a drawing of a car. I hop into a car when I drive to the store. Drawing pictures of the car and groceries will not satisfy.

      Every person makes the same materialist assumptions.

      Similarly with the drawing of a logic gate. It doesn't switch states. There is no relevant analogy there.

      Delete
    4. @Jindra

      Stop with your incoherent ramblings once and for all. You've dabbled around here long enough to no longer have any excuse for still not possessing at least a basic grasp of Thomistic presuppositions.

      Delete
  38. Anonymous,

    Your real problem is that I grasp those Thomistic presuppositions very well, certainly well enough to know they won't help you with this analogy issue. Your evasive response confirms that.

    ReplyDelete
    Replies
    1. Your real problem is that I grasp those Thomistic presuppositions very well

      Yeah, it shows.

      this analogy issue

      That analogy is irrelevant, and you'd know that already if you had the slightest understanding of the matter at hand. Go study Kripke's quaddition argument, for instance.

      Delete
    2. Greg S,

      I know Kripke's quaddition argument well via Feser's "Kripke, Ross, and the Immaterial Aspects of Thought" and Ross's "Immaterial Aspects of Thought.” First, it does not apply to measuring horsepower of a drawing. Second, Ross's usage of it merely begs the question as I've repeatedly explained elsewhere.

      Delete
    3. Greg S: ... Thomistic presuppositions ...

      Is there a handy list of these presuppositions that you can point me to? Euclid was kind enough to list his five. Have the Thomists done the same?

      Delete
    4. Yes, there is. Pope Saint Pius X was kind enough to summarize them in the list that came to be known as the Twenty-Four Thomistic Theses.

      But since it is doubtful (to say the least) you'll understand much of it anyway, you're better off by starting with Prof. Feser's books.

      Delete
  39. The metaphysical reading seems plausible only if we make the verificationist assumption that if there is no way empirically to tell the difference between magic and technology, then there just would be no difference.

    Is verificationism a generalization of Leibniz's identity of indiscernibles?

    ReplyDelete
  41. Is it possible do you think, whether naturally or miraculously, for a digital or other computer consisting of artifacts to provide the material cause for a rational person, with the formal cause being an immaterial rational soul? In other words, could a computer, like an otherwise irrational animal, be infused by God with a rational soul, or to "develop" the formal and final causes ordinarily provided by the human rational soul? The former seems plausible enough to me, since it doesn't seem to be explicitly contradictory, and thus possible for God, given that the change in formal cause would essentially change it from an artifact to a substance. The latter seems significantly less plausible, but if it were possible, it seems to me that it would have to be in a somehow "natural" way, perhaps as a development of machine learning or something similar.

    ReplyDelete
    Your thesis here is completely wrong. Intelligence is a substrate-independent emergent property of a physical system. If our brains made out of meat exhibit intelligence, there is no logical reason to suppose a brain made out of silicon cannot exhibit the same behavior, short of some form of dualism. So while you say "[t]he debate between dualism and materialism can be put to one side for present purposes," that is in fact the only way your argument can be saved.

    To see that this is true, imagine some not-too-distant future where we can 3D print biological material complex enough to print something the shape of your brain. Are you saying it is impossible for the output of that printer to do what a brain formed in a womb can do? If so, please tell us why.

    ReplyDelete
    From a very high level his two major problems are (A) that he equates simulations of physical systems (like the weather) with technological imitations (like computer information processing). This is like confusing a flight simulator with a Boeing 747 and saying humans will never be able to fly like birds can because all we're doing is simulating flight.

    Computer AI is not simulating thinking, it is using technology to imitate thinking "machines" found in nature. Similarly, a Boeing 747 is not simulating flight, it is using technology to imitate flight found in nature. Granted a 747 achieves the task using different materials and different structures than say an eagle, but the principles of power to weight ratio, lift, aerodynamics, etc. all still apply. No one would doubt that a 747 is in fact flying.

    The second major thing he fails to understand is that (B) information processing is substrate independent. This is simple to demonstrate because you can (and I literally have) build an information processing machine from things varying as wildly as legos (see: youtube dot com / watch?v=H-53TVR9EOw), to transistors, to neurons.

    None of this requires the least bit of magical thinking.
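    The substrate-independence point can be illustrated with a short sketch of my own (not the Lego machine linked above): one and the same input-output function, NAND, realized by three different mechanisms.

```python
# The same information-processing step (NAND) realized three ways.

# Substrate 1: arithmetic.
def nand_arithmetic(a, b):
    return 1 - a * b

# Substrate 2: a lookup table.
NAND_TABLE = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
def nand_lookup(a, b):
    return NAND_TABLE[(a, b)]

# Substrate 3: set membership.
ZERO_CASES = {(1, 1)}
def nand_membership(a, b):
    return 0 if (a, b) in ZERO_CASES else 1

# All three mechanisms compute the identical function.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert all(nand_arithmetic(a, b) == nand_lookup(a, b) == nand_membership(a, b)
           for a, b in inputs)
```

    On this view, what matters to the computation is the input-output function, not which of the three mechanisms happens to realize it.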

    ReplyDelete
  45. I mostly agree with Feser's diagnostic here, but I have some lingering suspicions. So, a few quick points:

    1. First, on the question of simulation. I think David Chalmers nailed this issue in his book The Conscious Mind. Most simulations are obviously not the same thing as the thing being simulated, but there are exceptions, where the thing being simulated has "organizational invariance." He writes, "simulated heat is not real heat. On the other hand, for some properties simulation IS replication. For example, a simulation of a system with a causal loop IS a causal loop." He utilizes the principle of "organizational invariance" as the key. He then goes on to argue that phenomenal properties, properties of consciousness, are organizational invariants. "Organizational invariance makes consciousness different in principle from other properties mentioned, and opens the way to strong A.I."

    2. I'm not sure Feser's argument using artifacts, while interesting, is complete. Man can make artifacts that nonetheless possess their own substantial forms. He has brought up forms like styrofoam before. An "artifact" for sure, but one with its own substantial form.

    3. Does the substantial form of a conscious, intelligent computer exist in God as a potential that could be actualized? That is to say, is God himself able via a miracle to assemble various pieces of mechanical bits together in such a way as to create the substantial form of an intellect which had as its material cause transistors (or legos, or whatever) instead of neurons? If the answer is yes, then one must also consider the possibility that this form could be brought into being via human activity. Its material cause would be the silicon it's made from, its formal cause would be the organization of this silicon (and we already know there are many analogous elements between silicon structures and our non-silicon brains), its efficient cause would be the A.I. scientists who study and construct the A.I., and its final cause would be the creation of an actual artificial intelligence. Of course, this A.I. itself would have its own immanent causality, just like we do. From its point of view it would have the same material, formal, and efficient causes as before, but its intrinsic final cause would be the same as the final cause of all rational intellects; its existential destiny would be the same destiny as all rational intellects, that is to say, God. And, if God intends for such a substantial form to come into being, he can use humans as his instruments, the secondary causes, by which this end is brought to fruition. I just can't see why this couldn't be the case unless there is something unique about carbon atoms such that God himself couldn't make silicon organizations intelligent.

    Finally, I wonder what Feser would say about Chalmers' "fading qualia" and "dancing qualia" arguments. Basically, they imagine a scenario where, one by one, the neurons of the brain are replaced by silicon chips that perfectly reproduce the causal function of the replaced neuron. The conclusion he reaches is that, in this thought experiment, it's absurd to conclude that at some point human consciousness would simply and inexplicably vanish because the brain is now too "artificial" to sustain consciousness. Remember, the stipulation here is that the silicon neuron perfectly replicates the causal function of the original (even if this is in actual practice incredibly hard to achieve); the person could in no way possibly "notice" a change in their mind, because noticing would itself be a change in functional organization. (Chalmers is a non-reductive functionalist/dualist who argues for the reasonable principle: no change in mental experience without a change in functional organization.)
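    The replacement scenario can be sketched as a toy program (everything here is hypothetical and enormously simplified relative to real neurons): units are swapped one at a time for functionally equivalent replacements, and the network's overall input-output behavior is checked to be unchanged at every step.

```python
# Toy "network" of five identical units, each a function of two bits.
def biological_unit(x, y):
    return (x + y) % 2       # the original "neuron"

def silicon_unit(x, y):
    return x ^ y             # different mechanism, identical function

def run(network, x, y):
    out = x
    for unit in network:
        out = unit(out, y)
    return out

network = [biological_unit] * 5
cases = [(x, y) for x in (0, 1) for y in (0, 1)]
baseline = [run(network, x, y) for x, y in cases]

# Ship-of-Theseus replacement: swap one unit at a time and verify that
# the network's behavior never changes.
for i in range(len(network)):
    network[i] = silicon_unit
    assert [run(network, x, y) for x, y in cases] == baseline
```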

    ReplyDelete
  46. At best we might create some really advanced V.I. but they would just be what we have now only vastly more sophisticated. There will never be a thinking machine.

    ReplyDelete