Friday, March 29, 2019
Artificial intelligence and magical thinking
Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic.” Is this true? That depends on what you mean by “indistinguishable from.” The phrase could be given either an epistemological reading or a metaphysical one. On the former reading, what the thesis is saying is that if a technology is sufficiently advanced, you would not be able to know from examining it that it is not magic, even though in fact it is not. This is no doubt what Clarke himself meant, and it is plausible enough, if only because the word “sufficiently” makes it hard to falsify. If there were some technology that almost seemed like magic but could be shown not to be on close inspection, we could always say “Ah, but that’s only because it wasn’t sufficiently advanced.” So the thesis really just amounts to the claim that people can be fooled into thinking that something is magic if we’re clever enough. Well, OK. I don’t know how interesting that is, but it seems true enough.
A more interesting claim results if we give the thesis a metaphysical interpretation. On this reading, Clarke is saying that a technology that is so advanced that it seems like magic really would be magic. I doubt this is what Clarke meant, and though it is a more interesting claim, it is also clearly false. No matter how convincing the sleight of hand of a Ricky Jay or a Michael Carbonaro is, we know it is not really magic, and we know that precisely because we know it was accomplished using technology. That an effect results from a “sufficiently advanced technology” entails that it is not magic, not that it is magic. If it were magic, it wouldn’t be the technology that is producing the effect. The metaphysical reading seems plausible only if we make the verificationist assumption that if there is no way empirically to tell the difference between magic and technology, then there just would be no difference. But verificationism is false. (Cf. section 3.1.)
Probably nobody really believes the stronger, metaphysical interpretation of Clarke’s thesis. It might seem that some people believe it, insofar as they claim that the gods of ancient cultures were really extraterrestrials working marvels by way of advanced technology. (Think of the way that the Norse gods are portrayed in Marvel movies as an alien race.) But this idea doesn’t really amount to the claim that magic is real and that it can be explained as a kind of advanced technology. Rather, it amounts to the claim that magic is not real, and that it only seemed to be real because ancient people were mistaking advanced technology for magic.
There are, however, many people who believe a claim that is analogous to, and as silly as, the metaphysical thesis that sufficiently advanced technology really is magic – namely the claim that a machine running a sufficiently advanced computer program really is intelligent. It is not intelligent, and we know that it is not intelligent (or should know, if we are thinking clearly) precisely because we know that it is merely running a computer program.
Building a computer is precisely analogous to putting together a bit of magical sleight of hand. It is a clever exercise in simulation, nothing more. And the convincingness of the simulation is as completely irrelevant in the one case as it is in the other. Saying “Gee, AI programs can do such amazing things. Maybe it really is intelligence!” is like saying “Gee, Penn and Teller do such amazing things. Maybe it really is magic!”
The way computers work is by exploiting a parallelism between logical relationships on the one hand and causal connections on the other. The fundamental examples of this are logic gates. A logic gate is an input-output device constructed in such a way that its inputs and outputs reliably mirror the inputs and outputs of a logical function. Take, for instance, the function and, which is such that when p is true and q is true, the conjunctive statement p and q will also be true (as logic students know from their study of truth tables). An and-gate is a physical device constructed in such a way that when a state that can be interpreted as corresponding to p and a state that can be interpreted as corresponding to q are its inputs, it will cause as output a state that can be interpreted as corresponding to p and q. Other logic gates can be constructed to parallel other logical functions, such as or and not.
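The parallel can be illustrated with a toy model (the Python below is purely illustrative; the voltage levels and threshold are assumptions of the sketch, not drawn from any actual hardware): a physical input-output device whose behavior mirrors the truth table for conjunction once we adopt the convention that a high voltage stands for “true.”

```python
# Toy model of an and-gate: a physical device mapping input voltages
# to an output voltage. HIGH, LOW, and the 2.5 V threshold are
# illustrative stipulations, nothing more.

HIGH, LOW = 5.0, 0.0  # volts, by stipulation

def and_gate(v1: float, v2: float) -> float:
    """Physically: the output is HIGH only when both inputs are HIGH."""
    return HIGH if v1 >= 2.5 and v2 >= 2.5 else LOW

def as_bool(v: float) -> bool:
    """The interpretive convention: HIGH counts as 'true'."""
    return v >= 2.5

# Under that convention, the device's behavior mirrors the truth
# table for logical conjunction:
for p in (True, False):
    for q in (True, False):
        v_out = and_gate(HIGH if p else LOW, HIGH if q else LOW)
        assert as_bool(v_out) == (p and q)
```

Note that the conjunction appears only in the last step, where we read the voltages through `as_bool`; the device itself just moves charge around.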
In an electronic computer, the inputs and outputs of a logic gate will take the form of electric currents, but in other sorts of machine they can take other forms, such as the positions of valves in a hydraulic computer, or the positions of sticks in a Tinkertoy computer. There is nothing essentially electronic about a computer in the modern sense. It’s just that an electronic computer is going to be vastly speedier and more efficient than a computer constructed out of materials of these other kinds. In any case, all the complex activity that takes place in a computer of any sort will be an aggregate of the activities of basic elements such as logic gates.
The flow of current or lack thereof (or, alternatively, the position of a valve, or of a stick, or of whatever the basic parts are out of which some computer is constructed) is conventionally interpreted as a bit (either a 1 or a 0) rather than as a propositional variable or a truth value. The sequences of 1’s and 0’s correlated with the aggregate of the basic elements (again, such as logic gates) are then interpreted as a fundamental level of “information” into which other sorts of information can be coded.
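The point about aggregation can be made concrete with another toy sketch (again purely illustrative): logic gates composed into a “half-adder,” a standard circuit whose states count as adding one-bit numbers only given our convention of reading those states as 0’s and 1’s.

```python
# Gates composed into a half-adder. The circuit "adds" two one-bit
# numbers only under the convention of reading its states as bits;
# apart from that reading, it is just basic elements in aggregate.

def AND(a: int, b: int) -> int:
    return a & b

def XOR(a: int, b: int) -> int:
    return a ^ b

def half_adder(a: int, b: int):
    """Returns (sum_bit, carry_bit) for two one-bit inputs."""
    return XOR(a, b), AND(a, b)

# Read under the bit convention: 1 + 1 = 10 in binary,
# i.e. sum bit 0 with carry bit 1.
assert half_adder(1, 1) == (0, 1)
assert half_adder(1, 0) == (1, 0)
```

That the output pair (0, 1) “means” the binary number 10 is, once more, entirely a matter of how we have decided to read the states.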
The thing to emphasize is that the computer is not in and of itself carrying out logical operations, processing information, or doing anything else that might be thought a mark of genuine intelligence – any more than a piece of scratch paper on which you’ve written some logical symbols is carrying out logical operations, processing information, or the like. Considered by themselves and apart from the conventions and intentions of language users, logical symbols on a piece of paper are just a bunch of meaningless ink marks. Considered by themselves and apart from the intentions of the designers, a Tinkertoy computer is just a bunch of sticks moving around, as stupidly as if they had been tossed down the stairs. And in exactly the same way, considered by themselves and apart from the intentions of the designers, the electrical currents in an electronic computer are just as devoid of intelligence or meaning as the current flowing through the wires of your toaster or hair dryer. There is no intelligence there at all. The intelligence is all in the designers and users of the computer, just as it is all in the person who wrote the logical symbols on the piece of paper rather than in the paper itself.
Indeed, that’s the whole point of a computer in the modern sense. It’s a way of using utterly unintelligent physical objects and processes to mimic various intelligent activities – just as various utterly non-magical objects and techniques provide an entertainer with a way to mimic magic. A computer’s mimicry crucially depends on our interpreting what it’s doing in certain ways. Such-and-such ink marks count as words with meanings only insofar as we have a convention of interpreting them that way; and in exactly the same way, such-and-such electrical circuits count as logic gates, information processors, etc. only insofar as we have a convention of interpreting them that way. Their status as simulations of various intelligent operations is entirely conventional or observer-relative. You might say that it is a kind of make-believe, just as the “magic” that an entertainer performs is a kind of make-believe.
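The observer-relativity can itself be illustrated with one more toy sketch (illustrative only, with the labeling conventions my own stipulations): one and the same device behaves as an and-gate under an “active-high” reading of its states and as an or-gate under an “active-low” reading. Nothing in the physics privileges either interpretation.

```python
# One device, two interpretive conventions. Physically, the output
# is HIGH iff both inputs are HIGH. Under "HIGH means true" it is an
# and-gate; under "LOW means true" the very same device is an or-gate.

HIGH, LOW = 5.0, 0.0

def device(v1: float, v2: float) -> float:
    return HIGH if (v1 == HIGH and v2 == HIGH) else LOW

def read_active_high(v: float) -> bool:  # convention 1
    return v == HIGH

def read_active_low(v: float) -> bool:   # convention 2
    return v == LOW

for v1 in (HIGH, LOW):
    for v2 in (HIGH, LOW):
        out = device(v1, v2)
        # Convention 1: the device implements conjunction.
        assert read_active_high(out) == (read_active_high(v1) and read_active_high(v2))
        # Convention 2: the same device implements disjunction.
        assert read_active_low(out) == (read_active_low(v1) or read_active_low(v2))
```

Which logical function the device “computes” is fixed by the convention we bring to it, not by anything intrinsic to the currents themselves.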
Siri and Alexa are not really intelligent, and wouldn’t be no matter how convincing you made them, just as Call of Duty is not really warfare, and wouldn’t be real warfare no matter how realistic you made the CGI. Computer simulations of intelligent behavior are like computer simulations of war, the weather, the stock market, etc. – simulations, and nothing more. And we know that for the same reason we know that magic is a mere simulation – namely that we ourselves made the simulation.
Let’s now consider the various objections that are no doubt brewing in the reader’s mind:
1. “Are you saying that intelligence is a kind of magic?”
No, of course not. That’s not the point of the analogy. The point of the analogy is that a simulation of X is not the same as X, and that we should be especially aware of this when we are ourselves the makers of the simulation. Magic is a particularly good example precisely because no serious person believes in it. We know there is no such thing as magic and thus are not tempted to mistake the simulation for the real McCoy. Intelligence, by contrast, is real but also philosophically puzzling, and so in our search for understanding of it we are more prone to commit the fallacy of mistaking simulation for reality where it is concerned.
It is also irrelevant, by the way, whether intelligence is material or immaterial. The debate between dualism and materialism can be put to one side for present purposes. Even if human intelligence were entirely explicable in materialist terms – I don’t think it is, but let that pass – the point is that the way it is so explicable cannot be in terms of the idea that the brain is a kind of computer running a program.
2. “But neurons do what logic gates do. So we know that computers can be intelligent, because they are essentially doing what our brains are doing.”
No, they aren’t. True, there are causal relations between neurons that are vaguely analogous to the causal relations holding between logic gates and other elements of an electronic computer. But that is where the similarity ends, and it is a similarity that is far less significant than the differences between the cases. Logic gates are designed by electrical engineers in a way that will make them suitable for interpretation as implementing logical functions. No one is doing anything like that with neurons. In particular, no one is assigning an interpretation as implementing a logical function, or any other interpretation for that matter, to neurons. (The point is simple and obvious, but commonly overlooked precisely because it is so obvious, like the tip of your nose that you never notice precisely because it is right in front of you.)
That brings us to a second difference, which is that a computer and the logic gates and other elements out of which it is constructed are artifacts, whereas a brain (or, more precisely, the organism of which the brain is an organ) is a substance, in the Aristotelian sense. A substance has irreducible properties and causal powers, i.e. causal powers that are more than just the sum of the properties and powers of its parts. Artifacts are not like that. In an artifact, the properties and causal powers of the whole are reducible to the aggregate of the properties and causal powers of the parts together with the intentions of the designer and users of the artifact. (Cf. section 3.1.2.)
Judging the brain to be a computer on the basis of the analogy between neurons and logic gates is like saying that the “face on Mars” must really be a sculpture even if it came about through natural processes, on the grounds that it looks (sort of) like a sculpture would look. In fact, anything that came about through natural processes cannot be a sculpture, whatever it looks like, because a sculpture is a kind of artifact, and an artifact is precisely the opposite of something that comes about through natural processes. And in exactly the same way, precisely because brains and the neurons of which they are made come about by natural processes, they are not artifacts, and thus are not computers, logic gates, and the like (since those things are artifacts).
I can already hear some readers thinking: “But maybe God assigns to neurons the interpretation of implementing a logical function, so that the brain is a kind of computer that God is using, and our thoughts are the result.” This is completely muddleheaded. For one thing, this would entail that we are not really thinking at all – any more than a piece of paper or an abacus is thinking when you use it to work out calculations – but that only God is thinking, and somehow using our brains as an aid in doing that, just as we use paper, abacuses, etc. as aids to thinking. (Why would God need such an aid to thinking?) For another thing, it supposes that the brain is a kind of artifact, and it simply isn’t that, whether or not God creates it. (As I have complained many, many times, it is a serious theological and metaphysical error to model divine creation on the making of artifacts. Cf., for example, my essay “Between Aristotle and William Paley: Aquinas's Fifth Way.”)
It is easier to see the fallacy here if you think of a Tinkertoy computer or a hydraulic computer instead of an electronic computer. It is obvious that the movements of sticks count as the implementation of logical functions, information processing, etc. only insofar as the designer has assigned such interpretations to the movements, and that apart from this interpretation they would be nothing more than meaningless movements. No one is doing anything like that with the brain. No one is saying “Let’s count this kind of neural process as an and-gate, that one as an or-gate, etc.” the way they are with the Tinkertoy sticks. The reason people fall for the fallacy in the case of electronic computers is that they see an analogy between the computer’s electrical activity and the brain’s electrochemical activity and think it lends plausibility to the idea that the brain is a computer. In reality the similarity is no more relevant than the fact that you can make a computer that weighs about as much as the brain, or one that is the same color as a brain.
3. “But evolution can design computers. That’s what the brain is – a computer designed by natural selection.”
This is like saying that evolution can make sculptures or that natural selection can write English prose. Sculptures and English prose are artifacts, which are the products of intelligent creatures. Natural selection is not intelligent – that’s the whole point of the idea of natural selection – and thus it cannot make artifacts. Even if it somehow produced something that kinda-sorta looked like a sculpture or an English word, it wouldn’t actually be one, any more than the “face on Mars” is really a sculpture. And by the same token, it cannot make a computer, since a computer is a kind of artifact. Tarting up nonsense with the magic word “evolution” doesn’t somehow make it scientific or anything other than nonsense.
Moreover, even if the suggestion that “evolution designs computers” weren’t nonsense, it would, in the current context, be question-begging. It would assume that it makes sense to describe the product of a natural process as a computer, and thus presupposes that what I’m saying is wrong without showing that it is.
4. “But you’re relying on intuition, and intuitions are a weak basis for metaphysical claims.”
No, I’m not relying on intuition at all. When I point out that ink scribblings have no intrinsic status as words but get that status as a result of human conventions, or that the “face on Mars” cannot be an artifact if it came about through natural processes, I am not appealing to intuition. I am not saying “Gee, it just seems intuitively like a bunch of ink scribblings have no intrinsic meaning etc.” Rather, I’m merely calling attention to how words actually come into existence, how artifacts actually come about, etc. Similarly, when I say that sticks and and-gates and the like have by themselves no status as the implementation of logical functions, etc., I’m not appealing to intuitions but merely calling attention to how Tinkertoy sticks, and-gates, etc. actually get that status – namely, from the conventions and intentions of the designers and users of computers.
5. “Oh, this is just John Searle’s Chinese Room argument. But that doesn’t work because [insert fallacious response to Searle here].”
No, this is not Searle’s Chinese Room argument. To be sure, that argument is an excellent argument, and in my view none of the usual responses to it is any good. But again, I am not giving a variation of the Chinese Room argument.
However, I am saying something that is related to another argument Searle gave about a decade after he first published the Chinese Room argument. That is an argument to the effect that computation is an observer-relative feature of physical processes rather than a feature intrinsic to them, so that the brain cannot be said to be in any interesting sense a digital computer. It is a computer only in the trivial sense that anything can be said to be a computer, insofar as we could, if we wanted to, assign to anything some interpretation of it as carrying out a computation.
Still, my position differs from Searle’s in some important ways. I have analyzed Searle’s argument at length in my article “From Aristotle to John Searle and Back Again: Formal Causes, Teleology, and Computation in Nature.” As I note in the article, Searle has in common with his materialist critics an essentially “mechanistic” or post-Aristotelian and non-teleological conception of the nature of matter. And given that conception of matter, none of the responses to Searle has any hope of succeeding. However, if we return to an Aristotelian teleological conception of matter, then we can coherently say that there is something analogous to computation in nature (though that doesn’t entail that the right way to think of it would be on the model of Turing machines, binary code, and all the other apparatus of modern computational theory).
That still wouldn’t salvage the claim that computers in the modern sense can be intelligent, however, because they are still mere artifacts, and the points made above would still apply. (It also wouldn’t salvage the claim that human intelligence amounts to computation in the brain, since – as, again, I have argued elsewhere – human intelligence cannot be reduced to purely material activity even given an Aristotelian conception of matter. But as I have said, that is not essential to the present point.)
6. “How is positing ectoplasm any better than explaining intelligence as a kind of computation?”
Who said anything about ectoplasm? Not me, since I don’t believe in such a thing. Pointing out that words and sculptures are artifacts and thus cannot be the products of natural processes doesn’t commit someone to the existence of ectoplasm (whatever that is). It doesn’t entail that one must think that words get their meaning, or sculptures get their status as representations, from the infusion of some weird kind of substance. Indeed, it doesn’t commit one to any sort of metaphysical view at all, weird or otherwise. Similarly, pointing out that computers are a kind of artifact – so that they don’t in and of themselves count as carrying out computations or as doing information processing, and so that the brain is not a kind of computer – does not commit one to any positive account, weird or otherwise, about how the brain works or how intelligence works. Again, you don’t need to be a dualist to see the point.
7. “But maybe computers, even the Tinkertoy computer, really are thinking even if it seems that they are not.”
Right, and maybe there are invisible, intangible, silent, odorless elves dancing in front of you right now even if it seems there are not. Maybe Penn and Teller, Ricky Jay, and Michael Carbonaro really are doing magic, just like Dr. Strange, and only pretending that it is just sleight of hand.
But don’t bet on it, and also don’t bet on the idea that computers are really thinking. And even if they were, it would not be because of the way they are engineered, the programs they are running, etc. – just as, if Penn and Teller were doing real magic, it would not be because of their skill at sleight of hand. For sleight of hand is precisely mere simulation rather than real magic, and to be constructed out of logic gates and the like is precisely merely to simulate intelligence rather than really to have it. If computers really are thinking, that would be because they’ve somehow got brains hidden somewhere (if you’re a materialist) or immaterial souls hidden somewhere (if you’re a dualist), and not because of anything having to do with their being computers.
8. “You just feel threatened by the idea that computers are intelligent! You just want to believe that the human mind is somehow special!”
And you’re really grasping at straws at this point. Even if this accusation were true, it would be entirely irrelevant to the points I’ve been making. To suppose that what motivates a person to make a claim or give an argument is relevant to the truth of the claim or the cogency of the argument is to commit an ad hominem fallacy of poisoning the well. You might as well say that those who believe the “face on Mars” is not a real sculpture just feel threatened by the idea that it is, or that those who point out that ink scribbles have only conventional rather than intrinsic meaning just feel threatened by the idea that they have intrinsic meaning.
In any case, the accusation isn’t true. As I keep pointing out, even someone who rejects dualism and thinks the mind is just one natural feature of the universe among others could accept the points I’ve been making here. (Searle would be an example.)
9. “Then why do so many people, including even many scientists and philosophers, say that computers can be intelligent?”
Because they are human beings, and a human being is as susceptible of fallacious thinking as the next guy. And there are several fallacies one can easily fall into in this context.
One of them I’ve already indicated. The electrical activity in a modern computer is analogous to the electrochemical activity in the brain. Hence people can lapse into committing a fallacy of false analogy, concluding that the brain and a computer must be analogous in other respects too. This is abetted by a fallacy of equivocation. We often use intentional idioms when speaking about computers – we say that the computer knows such-and-such, or is figuring out the solution to a problem, or has such-and-such in its memory, or what have you. These are all mere façons de parler, originating from the fact that computers were constructed precisely to mimic such intelligent features. It is exactly analogous to the way that we casually speak of a statue as having eyes, a nose, a mouth, etc., because the statue was sculpted precisely to have features that look like eyes, a nose, a mouth, etc. But a statue doesn’t literally have eyes, a nose, or a mouth, and a computer doesn’t literally know, figure out, or remember anything. When we use the same terms to describe what we do and what the computer does, we can, if we are not careful, fallaciously conclude that it is doing what we do.
There is also a kind of confirmation bias at work here. People who claim that computers can think are typically materialists, and it can be very tempting to see the undeniably impressive advances made in computer hardware and software as vindication of the claims of materialism. There is also sometimes a kind of circular reasoning at work. Materialism is taken to support the computational model of intelligence, and the computational model of intelligence is taken to support materialism.
Then there is the fact that journalists and pop culture have spread the idea of artificial intelligence and made it so familiar that its legitimacy has come widely to be taken for granted. The average man on the street doesn’t really know much about how the brain works or how computers work, but is impressed by the latter and notes that science fiction and even many scientists suppose both that a thinking computer might one day be constructed, and that the brain is itself a kind of computer. “Look at Siri and Alexa! Look at all those pop science books about artificial intelligence on the shelves at Barnes and Noble! Look at all those thinking machines on TV and in the movies – HAL 9000, Data from Star Trek, the kid from the Spielberg movie AI, Ultron and the Vision from the Avengers movies, etc. There must be something to it!”
Nor can it be denied that the idea of artificial intelligence is cool and fun. People want it to be true for that reason too, as well as because of their materialist biases. (Notice that I am not saying that these motivations show that the idea is false. That would be to commit a fallacy of poisoning the well. The idea is false for other reasons, namely the ones given above. The point is merely that these motivations help explain why people accept an idea that can be fairly easily refuted when one thinks carefully about it.)
None of this is to deny that much of what goes under the name of “artificial intelligence” is technologically very impressive, and promises to become only more impressive. Nor is it to take a stand one way or another on the current controversy about the potential dangers of AI as it gets more sophisticated. AI might end up being dangerous for the same sorts of reasons that other technologies can be dangerous. For example, we might become too dependent on it, or it might become too complex to control, or there might be glitches that lead to horrible accidents, and so forth.
However, it will not become dangerous by virtue of becoming literally more intelligent than us, because it is not literally intelligent at all. Nor are any of the other odd things sometimes claimed by those who’ve gotten carried away with the idea of thinking machines – such as that we might achieve immortality by virtue of our minds being downloaded onto a computer, or that the universe might really be a computer simulation – any more plausible. All of this is sheer nonsense.
You might as well say that our universe is really just a pattern of movements in a vast assemblage of Tinkertoy sticks, or that your mind might persist after your death as a set of movements in a bunch of Tinkertoy sticks. Movements in Tinkertoy sticks, however complex, are in and of themselves nothing more than that – movements. That’s all. They “process information” or carry out “computations” only in the sense that we can decide to interpret certain of the patterns that way, just as we can decide to count certain ink marks as words. And the idea is no more plausible when we substitute electronic computers for Tinkertoy computers.
Accept no imitations [on the Turing test]
From Aristotle to John Searle and Back Again: Formal Causes, Teleology, and Computation in Nature [a 2016 article from the journal Nova et Vetera]
Kripke, Ross, and the Immaterial Aspects of Thought [a 2013 American Catholic Philosophical Quarterly article, reprinted in Neo-Scholastic Essays]
Kurzweil’s phantasms [a 2013 book review from First Things]