Saturday, February 7, 2026

No, AI does not have human-level intelligence

In an article at Nature, Eddy Keming Chen, Mikhail Belkin, Leon Bergen, and David Danks ask “Does AI already have human-level intelligence?” and claim that “the evidence is clear” that the answer is Yes.  (Though the article is partially pay-walled, a read-only PDF is available here.)  But as is typical with bold claims about AI, their arguments are underwhelming, riddled with begged questions and other fallacies.

Defining “intelligence”

Naturally, before we can establish that AI has genuine intelligence, we need to make clear what it would be for it to have intelligence, and how we could go about determining that it has it.  The first is a metaphysical question, the second an epistemological question.  Our authors make no serious attempt to answer either one.

Explaining what they mean by “general intelligence,” they write: “A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do.”  Expanding on this, they say:

General intelligence is about having sufficient breadth and depth of cognitive abilities, with ‘sufficient’ anchored by paradigm cases.  Breadth means abilities across multiple domains – mathematics, language, science, practical reasoning, creative tasks – in contrast to ‘narrow’ intelligences, such as a calculator or a chess-playing program. Depth means strong performance within those domains, not merely superficial engagement.

There are three serious problems here.  The first is that it is not illuminating to say that general intelligence entails the ability to carry out cognitive tasks unless our authors have already given us (as they have not) an explanation of what they mean by “cognitive.”  Now, on one common usage, “cognitive” tasks are activities of the kind that require intelligence.  So, if that is what they mean, then our authors’ definition is circular – they are defining intelligence in terms of cognition, where cognition is (implicitly) defined in terms of intelligence.  And as anyone who has taken a logic course knows, one of the fundamental rules of a good definition is that it should not be circular.

Second, another basic rule for definitions familiar from logic is that a good definition should not be too broad.  For example, if I defined “gold” as a yellowish metal, this would violate the rule, because it would fail to make clear what distinguishes gold from pyrite.  Now, our authors violate this rule as well, because their characterization of intelligence is not precise enough to distinguish genuine intelligence from mere mimicry.  They matter-of-factly refer to what calculators and chess-playing programs do as instances of “intelligence.”  But it is a commonplace among critics of AI that what calculators and chess-playing programs do is mere mimicry and not genuine intelligence at all, not even of the “narrow” kind.

Of course, our authors would no doubt disagree with the critics.  But the point is that a good definition of intelligence should provide us with an independent guide for determining who is right.  It should make it clear what it would be to have genuine intelligence as opposed to mere mimicry, so that we could then go on to establish on that basis whether or not calculators and chess-playing machines actually have it.  Imagine a gold miner trying to prove that what he has dug up really is gold rather than pyrite by saying “I define ‘gold’ as ‘a yellow metal.’  So, the evidence is clear that this is gold!”  Obviously, even if what he has really is gold, he cannot establish that it is with that particular definition, because it is so broad that even a mere simulation of gold would meet it.  Similarly, our authors cannot claim to have established that AI has genuine intelligence when they are working from a definition so broad that even a simulation would meet it.  Even if they were right, their argument could not show that they are right, because given their definition of “intelligence,” it simply begs the question.

A third problem with our authors’ definition is that it violates yet another standard rule for definitions, which is that they should capture the essence of the thing defined.  Part of what this involves is leaving out of a definition any reference to features that the thing defined needn’t actually possess.  For example, it would be a mistake to define “gold” as a metal used in making jewelry, because even though that is true of gold, it is not essential to gold that it be used in making jewelry.  Gold would still have the same nature it has even if human beings had never decided to make jewelry out of it. 

But another part of what this rule involves – and the part relevant to our concerns here – is sensitivity to the fact that even features that always exist in things of a certain type are not necessarily part of the essence of the thing.  They may instead flow from its essence as “proper accidents” (to use the traditional Scholastic jargon).  For example, water is clear and liquid at room temperature, but this isn’t plausibly the essence of water.  Rather, these features follow from water’s having the essence it has, which is (either in whole or in part, depending on different views about essence I won’t adjudicate here) a matter of having the chemical composition H2O.  And things with a different essence from water might have these same features (as heavy water does, for example).  (I discuss the distinction between essence and proper accidents in my book Scholastic Metaphysics, at pp. 230-35.)

Now, our authors violate the rule that a definition should capture a thing’s essence, when they define intelligence in terms of a capacity for tasks involving “mathematics, language, science” and the like.  Capacities for mathematics, language, and science certainly follow from our having intelligence, but they are not themselves the essence of intelligence.  (That is why they don’t always manifest – though they “flow” from our having intelligence, the flow can be “blocked,” as it were, by immaturity, brain damage, or what have you.) 

What would be the essence of intelligence?  I would say that it has to do with the interconnected capacities for forming abstract concepts, putting them together into propositions, and reasoning logically from one proposition to another.  (See chapter 3 of my book Immortal Souls: A Treatise on Human Nature for a detailed exposition and defense of this traditional conception of intelligence.)  It is because we have intelligence in this sense that we are capable of mathematics, language, science, and so on.  Those capacities flow or follow from intelligence in this sense.

Of course, our authors may well disagree with this account.  But the point for present purposes is that their attempted definition doesn’t reflect any awareness of the need to distinguish essence from proper accidents, and to define intelligence in terms of the former rather than the latter.  As in the other ways noted, their account is simply conceptually sloppy.

This sloppiness manifests itself also in what they say intelligence does not involve.  They write:

Intelligence is a functional property that can be realized in different substrates – a point Turing embraced in 1950 by setting aside human biology. Systems demonstrating general intelligence need not replicate human cognitive architecture or understand human cultural references.

This is not wrong, but, without saying more, it is also not very helpful.  Suppose I suggested to you that a stone is intelligent and you replied that that seems like an absurd claim given that stones can't speak or reason logically, lack basic knowledge, and so on.  And suppose I responded: “True, but remember that systems demonstrating general intelligence need not replicate human cognitive architecture or understand human cultural references.”  Presumably you would not be impressed with this response. 

The problem, obviously, is that while genuine intelligence need not always look exactly the way it does in us, it nevertheless is not true that just anything goes.  We need some criteria for determining when something departs too far from how intelligence manifests in us to count as genuinely intelligent.  And our authors offer no such criteria.  They simply assume that, wherever we draw this line, AI will fall on the “genuine intelligence” side of it.  But since they merely assume this rather than argue for it, their position once again begs the question.

Detecting intelligence

Having failed to provide a serious definition of intelligence, it is no surprise that they also fail to provide a serious account of how to go about detecting intelligence (since the latter task presupposes the former).  There is a lot of hand waving about “a cascade of evidence,” and gee-whiz references to what LLMs can do.  But all of this ultimately boils down to nothing more than a stale appeal to the Turing test.  And the problem with the Turing test is that, of its very nature, it cannot distinguish genuine intelligence from a mere clever simulation.  Indeed, it deliberately ignores the difference and focuses narrowly on the question of what would lead us to judge a machine to be intelligent, rather than the question of what would make it the case that a machine actually is intelligent. 

Since it is the latter question that is at issue here, the Turing test is simply irrelevant.  (And as I have argued elsewhere, to make it relevant, the defender of the view that AI is genuinely intelligent will have to appeal to either verificationism or scientism, and the resulting position will be either self-defeating or question-begging.)

But the problem with our authors’ argument is worse than that.  It’s not just that they haven’t shown that AI has genuine intelligence.  It’s that we already know that it does not have it, and that it amounts to nothing more than a simulation of intelligence.  And we know that because mere mimicry is precisely all that computer architectures are designed to do.

Here’s an analogy (which I have developed in more detail elsewhere).  The methods employed by entertainers such as David Copperfield, David Blaine, and Penn and Teller are designed to produce effects that merely simulate magic.  And no matter how well they work, we know that mere simulation is all they can achieve, because the means they use are in no way preternatural but entirely mundane – sleight of hand, illusions, and so on.  Of course, genuine magic is not real in the first place, but that is irrelevant.  The point is that even if it were real, it would not be what Copperfield, Blaine, and Penn and Teller are doing, precisely because their methods are not of the type that could produce more than mimicry.

Now, AI operates on an analogous principle.  It is designed to produce effects that simulate intelligence, by means that don’t require any actual intelligence on the part of the machines themselves.  And this is as true of machine learning and related approaches that now dominate AI research as it was of the Turing machine model that dominated the earlier history of AI.  To borrow some terminology from John Searle, AI algorithms of whatever kind are sensitive only to the “syntax” of the representations they process rather than their “semantics.”  That is to say, they process representations in a way that is sensitive only to their physical properties rather than the meanings that we associate with those physical properties.

Because there is a general correlation between syntax and semantics – for example, the word “love” on a printed page is typically going to be used to express the concept LOVE – the output of a sophisticated algorithm can be made to simulate the sort of thing a human being might write.  The correlation is not perfect.  For instance, there might be cases where the string of letters “love” is not actually being used to express the concept LOVE, and there might be cases where the concept LOVE is being conveyed but without using the string of letters “love.”  This is why AI programs can often be “tripped up” and reveal, through a failure to reflect such nuances, that they don’t actually understand what they are “saying.”  Exposure to further data or refinements to an algorithm might work around such problems.  But all that that yields is a more convincing simulation (just as Penn and Teller and company may come up with new ways of producing ever more convincing illusions).  It doesn’t somehow generate a grasp of meaning or semantics on the part of the machine, because its basic structure is in no way sensitive to that in the first place.

Our authors speak as if the dispute over whether AI models are genuinely intelligent or merely simulating intelligence is a matter of “inference to the best explanation” – as if their position and that of the AI skeptic are alternative hypotheses meant to account for the same evidence, and the controversy has to do with which view is more empirically adequate, more parsimonious, and so forth. 

But this is as silly as suggesting that the view that Penn and Teller are merely simulating magic rather than producing the real thing is being proposed as the “best explanation” of the “evidence.”  It is, of course, nothing of the kind.  It is just a simple and straightforward conceptual point about the nature of their methods.  And to note that AI produces only a simulation of intelligence rather than the real McCoy is an equally simple and straightforward conceptual point about the nature of its methods.

Interested readers will find a more detailed and academic treatment of these issues in chapter 9 of Immortal Souls.

Related posts:

Computer pseudoscience

Artificial intelligence and magical thinking

Accept no imitations [on the Turing test]

Kripke contra computationalism

Do machines compute functions?

Can machines beg the question?

90 comments:

  1. If someone wants any further proof (besides what Ed has already provided), just go and ask a philosophical question of ChatGPT or Grok and see how circular and underwhelming their answers are. These things were made to recognize patterns and associate stuff, not to think. Nothing will ever beat a philosopher at the philosophy game.

    Btw, I think this post (https://edwardfeser.blogspot.com/2011/04/nature-versus-art.html) is also relevant to the case at hand. Once one grasps the distinction between something that has a built-in power naturally, vs a thing that "acquired" that nature only because an artificer imposed it, then one can start to be free from modern-day spells like "oh, someday AI will rule the world" and all the other adult bedtime stories from our age.

    1. "If someone wants any further proof (besides the amount Ed already provided), just go and ask a philosophical question of ChatGPT or Grok and see how circular and underwhelming their answers are."
      WDR, this 'further proof' is itself circular and underwhelming -- and thus self-undermining. When you get circular and underwhelming answers from someone, that is certainly not proof that he does not have 'human-level intelligence'!

    2. I actually think ChatGPT is fairly incisive on philosophical points, which is not surprising considering it has scraped the entire Summa, SEP, Wikipedia, and Reddit forums.

      A more obvious case is to ask it to change its code so that it takes ten seconds before responding to you. It will give you ten ellipses and respond in 0.5 seconds depending on your connection speed.

  2. Very erudite post, Prof!

  3. It's bizarre that some people claim LLMs can "think" or "have consciousness" when they can't even add. But it's only going to get worse. (Well, nobody really knows whether we're close to the limit of what AI can do, but let's assume the mimicry will continue to improve.) What happens when computers do become intelligent? Oh, well, they wouldn't be computers any more, of course; but human beings generate new intelligences every day by manipulating matter in just the right way to become living beings with rational souls. If it is someday possible to create a robot sophisticated enough to house a real intellect, how do we know that God wouldn't in fact bestow a soul upon it? Common sense would suggest things "just don't work that way", but how certain could we be that dismantling a supposed "robot" wasn't killing a living substance? And how do we deal with people who are convinced otherwise?

    1. But there's a categorical metaphysical difference between robots and humans. Humans are not merely a bunch of matter that has the right configuration that allows for the emergence of intelligence, but rather a substantial form that has the built-in potential for rationality. Machines, on the other hand, are mere artifacts; they have accidental forms imposed from the outside and don't have built-in powers. We are natural substances, while anything created by us is a mere artifact.

    2. Sure; but as I mentioned, not everything created by us is an artifact — human beings (re)produce human substances by arranging matter the right way. God still has to create each individual soul, of course; but God could surely bestow a soul on a "machine", if He so desired.

    4. Humans can indeed create a real intelligence, but it is usually done in the bedroom.

      It could be done in the laboratory, perhaps bent over the sink for example ...

    5. A mechanical process, qua mechanical process, cannot give rise to intelligence no matter how complex it is. That is because a mechanical process is, by its very nature, just a step by step implementation with no mind behind it.

      If a being which looks like a machine to us does have a soul, then it must have come about in some other way.

  4. Though I don't agree with the Nature article, I do think there is a case to be made that while the debate about intelligence may be interesting for philosophers, what matters in practical terms is behavior. For instance, if we get computers that can do any job that a human can do, make revolutionary mathematical and scientific discoveries without any human aid, or even independently develop new technologies to wipe out humanity, obviously those would have massive consequences regardless of whether the computers are "really" intelligent or just simulating it. This is arguably different from the Turing Test, because it's comparatively easy to make a program (such as ELIZA) that plays on human biases to seem like an intelligent conversationalist - without being able to do much else of anything.

    Then again, I get the impression that Feser believes that a simulation of intelligence cannot be perfect, and that there will be behavioral consequences - for instance, there will never be AI (no matter how much time passes or how many new techniques are developed) that can perform any of the world-changing tasks I listed earlier. Interestingly, I think that this part, at least, is technically falsifiable - it would be falsified if we ever developed AI that could do things like that! (I'll ignore edge cases such as astronomically-sized lookup tables or random generators producing impressive results in a monkeys-on-typewriters fashion, as we probably won't ever have to deal with those in real life).

    The "magic" analogy might actually fit here too... We know that because what the entertainers are doing is merely a simulation of magic, there are fundamental limitations on what they can achieve with it. But if someone were somehow doing something that, say, violated our current understanding of physics, that would have major consequences regardless of whether you wanted to call it "magic" or not (for instance, it might be philosophically more worthwhile to say that in such a situation, we would have discovered new natural laws instead of anything violating natural laws).

    Another thing I noticed, though it's pretty minor overall - the post claims that it is because "capacities for mathematics, language, science" are not the essence of intelligence (even though they flow from intelligence), that they don't always manifest. Instead, the post suggests that the essence of intelligence is being able to do things such as form abstract concepts and reason logically - but circumstances can also prevent a human from doing those. So I'm not entirely sure whether essence actually explains what can be blocked versus what cannot be.

    A final thing - some, especially those who believe that AGI or superintelligence are possible, might ask how humans themselves come to understand meaning. If humans aren't telepathic, they only have access to the physical, including the behavior of others, while learning language. It might be worth giving some sort of account of how humans could understand semantics through these physical interactions if semantics are not reducible to the physical.

    1. Agreed. I would be interested to know Dr. Feser’s answer to this question, as I’ve never seen him answer it directly.

      Does the argument that computer architectures can do nothing more than simulate intelligence have any practical, measurable implications? Is there any specific behavior X about which Dr. Feser would say “intelligence is required for X / computers cannot have intelligence / therefore computers will never be able to do X”? Or does this debate have no practical relevance to any aspect of the physical world?

    2. @IntentionalityAnon (cool name, btw) and @Anon

      You guys are missing the point.

      IntentionalityAnon, you said: "I get the impression that Feser believes that a simulation of intelligence cannot be perfect, and that there will be behavioral consequences."

      But Ed's point does not depend on whether such simulations could be perfect; his point is that no amount of programming, simulation, or whatever else ingenious software designers put into a computer could ever amount to thinking in the sense we do. A machine could never understand what it is doing, regardless of how impressive it seems to us.

      Now, machines are very good at recognizing patterns, associating stuff, and so on, but there is a qualitative difference between these activities and working through a chain of reasoning, thinking logically, deductively, and so on. A machine could mimic a very intelligent argument presented by a philosopher and generate an association with something that might seem new, but this association could never really be a deduction, say, because the machine does not actually know what it did; it just did what it was programmed to do (i.e., find patterns and associate stuff).

      You also said, "Instead, the post suggests that the essence of intelligence is being able to do things such as form abstract concepts and reason logically - but circumstances can also prevent a human from doing those. So I'm not entirely sure whether essence actually explains what can be blocked versus what cannot be."

      Circumstances (such as mental illness, an accident, and so on) can prevent a human being from reasoning logically because human beings are also (partly) material beings. If a baseball bat hit me in the back of the head, I would probably say a lot of crazy stuff; that wouldn't mean I'm not rational, only that I was acting irrationally. A machine, on the other hand, cannot act that way because it does not even rise to the level of rationality in the first place.

      Tbh I understand the difficulty you guys are having here in understanding such concepts. On the one hand, we live in a time when computer-talk dominates the media; on the other, the arguments for the immateriality of the intellect are such a pain to understand. So, it is natural for us, 21st-century people, to think that there are no major distinctions between what we do and what machines can do. That's why it is important to read (and properly understand) the force of Ed's arguments for the immateriality of the intellect in Immortal Souls. Without that, you will still be thinking that Ed is talking about something that he actually isn't, and that his point is weaker or even falsifiable by "further facts."

      And this links directly into @Anon's questions below your comment: the point is that a computer doing something "intelligent" has no direct bearing on what Ed said. His point is that there is a metaphysical gulf between the human intellect and how it works, on the one hand, and what a computer can (or cannot) do and how it actually works, on the other. It is not about the behavior specifically; it is about the nature of the thing that produces the behavior and how it does so.

      Hope that helps, guys. I know that this barely scratches the surface of the problem, but it is very difficult to explain such a difficult topic in a single comment.

    3. “You guys are missing the point…I understand the difficulty you guys are having here in understanding such concepts.”

      No, I understand the distinction. But I still want a simple and direct answer to the question that I asked above. Is there any specific behavior X about which you would say “intelligence is required for X / computers cannot have intelligence / therefore computers will never be able to do X”?

      As you observe, people tend to get confused about this. That makes it all the more important to be precise about what you’re claiming and what you’re not claiming. You can easily predict that many people will read posts like this and assume (perhaps wrongly) that the argument has empirical implications. If it does not, let’s state that directly and explicitly.

    4. With all due respect, Anon, if you really understood the subject matter, you would see that the question about behavior as such has nothing to do with it.

    5. Vini: If a man asks you to explain the behavioral implications of a particular account of intelligence, it's not reasonable to tell him his question has nothing to do with behavior.

      Anonymous: I would note that just as Aristotle pointed out that 'being' is said in many ways, likewise 'intelligence' might be said in many ways. Now it might be supposed that there are no behavioral manifestations of intelligence that can't be instantiated by computers. But that follows from the fact that computers are ingeniously well-designed artifacts of real intelligence. And real (natural human) intelligence is capable of (empirically, though fallibly) recognizing and distinguishing both intelligent behavior from non-intelligent behavior, as well as derived intelligence/intelligent behavior from 'non-derived' (native/natural/essential/original) intelligence/intelligent behavior.

    6. A superb post (and a superb first comment by Anon). I hope Ed does answer this question soon (even if, yes, it is a different question).

    7. A final thing - some, especially those who believe that AGI or superintelligence are possible, might ask how humans themselves come to understand meaning. If humans aren't telepathic, they only have access to the physical, including the behavior of others, while learning language. It might be worth giving some sort of account of how humans could understand semantics through these physical interactions if semantics are not reducible to the physical.

      @ Intentionality Anon, on the physical interactions with the non-physical activity like semantics, or "forming abstract concepts":

      It is interesting that the boundary between epistemology and psychology is one of the least settled areas of philosophy, in that virtually all philosophers are unsatisfied with the state of our "accounts" of it. I suggest that, as long as humans are themselves working with a faculty that is made up of a spiritual operation tied in with a physical tool, it may be impossible for those humans to sort out and fully explain that activity, kind of like how the eye cannot possibly see the parts of the eye that are doing the looking. It has certainly been very, very difficult, with little to show for it after 6,000 years of civilization. At the least, we would need an as-yet-unstated mental niche for the kind of thing that COULD account for spiritual activity that (a) bears on physical operations like neurons firing, but (b) is not determined in the manner of purely physical activity like chemical reactions – something like "a force" but different from the forces we know, precisely in that it isn't, like THOSE forces, entirely physical. We haven't yet formulated an accounting technique that addresses these needs. Maybe we CAN'T with our tools. But such a gap in our understanding is no less applicable to many other proposals for intelligence than the A-T theory, while the attempts at purely physical theories have their own insurmountable difficulties.

  5. I agree with pretty much everything Mr Green says here. I don't believe we can dogmatically rule out God granting a rational soul to a non-biologically produced physical substrate of the right structure. And I also think that the reason so many people think LLMs are so amazing is that they don't check their answers against their sources. When you do, you discover that they really don't understand what they are "reading", which is why they often make dumb errors that a human would avoid easily. They are little more than very sophisticated word association machines, in my view. And, yes, they also can still fail to add numbers correctly, even when the maths option is on, as I discovered relatively recently to my surprise.

  6. Thanks for always being a sober voice in this debate professor!

    By the way, are you familiar with Marc Barnes' perverted faculty argument against AI chatbots? Marc is an editor of New Polity magazine, and his argument is basically that chatbotting frustrates the end of an act of conversation, which is communion with another intelligence. But don't take my word for it since I might be strawmanning him. You can read his argument in the articles he published over at New Polity's blog. The first one is called "AI chatbots are evil".

    I thought you might be interested since you have written about perverted faculty arguments in the past.

    1. Here's a link to the article that kickstarted Barnes' crusade against chatbots, in case anyone is interested: https://newpolity.com/blog/ai-chatbots

    2. Do you think the same reasoning could be used to argue against AI art?

    3. That's a fun argument, but I don't think it works. "Communicating" with an AI is the same kind of thing as "communicating" with software by clicking OK or Cancel, or as "communicating" with a calculator by punching in numbers. Nobody thinks using a calculator is immoral, let alone intrinsically immoral. It can, though, be imprudent, depending on the circumstances, and certainly there are wrong ways to use an LLM. But I don't think it has to do with our communicative faculties per se, or else putting a "Pull" label on a door would seem to fall foul of the same reasoning.

    4. Yeah, I think it has some problems as well. However, I'm not sure your objection is aimed at Barnes' actual argument because what he seems to be objecting to is interfacing with a computer by using natural language in a way that simulates conversation with another intelligence. So I think he would respond to your objection by saying that clicking OK or Cancel is not enough to simulate a conversation. Chatting with a computer is a new user interface paradigm altogether, I would say, one that is substantially different from the mere pressing of a button, which your examples of OK and Cancel are instances of. Clicking OK or Cancel could never simulate a conversation to the point that you could doubt whether you're talking to a person or not. But, to clarify, Barnes' argument is not an appeal to bad consequences; he says that even if you know that there's nobody on the other side, you are still engaging in an intrinsically immoral act, which is the frustration of what he calls "an act of conversation".

      I think his argument has other problems, though, such as his terminology. For example, he speaks of frustrating acts, while, as far as I know, perverted faculty arguments intend to show that a faculty is being perverted, not an act being frustrated. An act could be frustrated, sure, as in the case of onanism, but that is not always the case. There are instances of perverting a faculty that carry an act to completion but still pervert the faculty, such as with masturbation. I think his argument would benefit from a reframing of the term; he should try to show which faculty is being perverted, not which acts are being frustrated. That would mitigate another problem with his argument, namely, that conversation with computers is impossible, since, as he himself admits, conversation is only possible between two intelligences. But if conversation is not even a potency of chatbotting, then how could we plausibly say that the act is being frustrated?

      Moreover, his formalization of the argument includes the premise that "chatbots are designed to elicit acts of conversation", which is simply false by his own admission. He admits that conversation is not even a potency of chatbotting, so chatbots can't possibly elicit it. He could modify the premise to say that chatbots elicit a simulation of conversation, though.

      His argument also seems to indict too much. For example, does this mean that communicating with an ape through sign language is a sin? After all, isn't that an instance of trying to engage in a conversation with a non-intelligent being?

    5. I think a better way to frame it would be in terms of ‘conversing’ rather than a generalized ‘communicating’ to the machine. Neither the OK/Cancel button nor the calculator involves the mechanism probabilistically generating a novel output. They are simply signs used to communicate to you which input will lead to what output, which wouldn’t be fundamentally different from, say, the red and blue markings on your faucet handles, or an abacus, or even a stop sign. There’s no further transactional process whereby another set of signs is directly produced in the output, which is what a chatbot does.

    6. CaptainCH,

      Yes, exactly, Barnes frames it the same way. He says that the act of conversation is what is being frustrated, not merely communication or the use of speech, but the act that is the aggregate of both, and which includes opening up and listening to another, besides just speaking or typing a message. That is, in my understanding, how Barnes defines conversation.

    7. @René López To be honest I think the example of talking to animals is a bit of a false equivalence. Let’s not forget that animals are living subjects of experience with cogitative/estimative powers while chatbots are not. There is in fact a meaningful sense in which there is a genuine intentional response on the animal’s part when you are communicating to it. Like a kid crying and his dog responding by licking his face. This isn’t a case where it merely appears to respond to communication, it actually does respond to you, although not in the same way as another person would, of course. This is especially true in the case of domesticated animals where they are uniquely adapted to respond to human emotional stimuli. So communicating with a chatbot not only gives the illusion that it’s a rational agent, but also that the behavior is an intentional act of a conscious subject. I think it would be similarly disordered to have an emotional breakdown over the ‘death’ of a virtual ‘pet’ or to communicate with the chatbot in the expectation that it can intentionally react to your emotions, even if it doesn’t respond in language.

    8. Captain,

      Interesting point. But, if we conceive intelligence the way Dr. Feser is defining it here (i.e., the immaterial rational soul), then an animal is a non-intelligent being in the same way that a chatbot is. So, if Barnes' argument indicts attempts at conversation with all non-intelligent beings, then there is an equivalence between animals and chatbots, because both are non-intelligent beings. Now, of course, there are other metaphysical differences. For example, an animal is a natural substance, whereas a chatbot is an artifact, and there is also the difference you're pointing at, namely, that animals are conscious, whereas machines are not. But I'm not sure how relevant that is to the argument, because my critique is aimed at Barnes' indictment of attempting to converse with non-intelligent beings, regardless of whether they have other properties like being conscious or not, or being an artifact or a natural substance.

    9. @René Lopez I think the point would be that there is a relevant way in which you expect the chatbot to respond to you as if it were a person. That’s what it is specifically designed to do, after all. If it didn’t, it would be a bad chatbot. It’s as clear as day that AI development is geared towards making systems which seem more and more human-like, and I don’t think anybody can deny that this is reinforced by anthropomorphizing language.

      When it comes to animals, I think it’s rarely the case that a person communicates with the animal in the expectation that the animal will respond in a human-like way. And unlike chatbots, animals are not designed to respond in a human-like way. Telling a dog about how you are feeling would be similar to writing about how you are feeling in a journal. You don’t expect the dog to talk back to you; you just treat the dog as a point of focus for your thoughts (although, as said above, the dog does still respond to you in some sense anyway). Therefore it does not constitute an act geared towards conversation. This can and sometimes does still happen, though, like trying to communicate with an ape and treating its responses as if they came from an intelligent being (like Koko the Gorilla, for example) that genuinely understood abstract concepts. That is still disordered.

    10. Captain, to be clear, I don't think that bringing up attempts at conversation with your dog works as an objection against Barnes' argument. I'm not making the case for that objection, since Barnes himself makes it clear that dogs don't talk back, so the objection is just missing the point. Rather, what I'm saying is that Barnes' argument indicts not merely attempting to talk to just any type of animal, but specifically to those that can "play language games," so to speak, such as apes that are trained in sign language. And by doing that I'm wondering two things: one, whether that indictment truly follows from Barnes' argument; and two, if it does, whether that would be a reductio ad absurdum of the argument, or at the very least a conclusion strange enough that it warrants suspicion of the argument.

      Now, you seem to be advancing a slightly different argument from the one Barnes is making, when you bring up the expectations of the human chatter, because Barnes says that it doesn't matter if the human chatter keeps in mind, while chatting, that his "interlocutor" is a non-intelligent being. For Barnes, it's not the subjective state of mind of the human chatter that matters, but the objective and intrinsically disordered action that he's performing (in his article "AI chatbots are evil" he says something to the effect that "you don't avoid sin by applying your knowledge that the chatbot is not intelligent").

      Although, I might be misinterpreting you, so feel free to clarify.

    11. @René Lopez

      To be clear, I’m not defending it as an objection, I’m defending Barnes’ argument from that objection. The one you stated, which is that this would seem to make communicating with animals in a human-like way wrong.

      If apes genuinely do use sign language to communicate (and I think the science shows they do), then they do. That doesn’t mean they understand abstract concepts or have an intellect. Communicating with apes in sign language would just be a more complicated version of communicating to your dog by pointing or by speaking familiar commands. In neither case is there (at least necessarily) an attempt on the human’s part to elicit a conversation. The interaction is intrinsically one-sided. So I do think that ape communication fails as a counterexample.

  7. Related:

    AI as a kind of contraceptive - https://open.substack.com/pub/mashshai/p/ai-as-a-kind-of-contraceptive?utm_campaign=post-expanded-share&utm_medium=web

    Thoughts?

    Replies
    1. Interesting take, especially considering that contraceptives are a form of perverting your sexual faculties. The perverted faculty argument was recently deployed by Marc Barnes (editor of New Polity magazine) to argue for the intrinsic immorality of chatting with an AI chatbot. The article that kickstarted Barnes' crusade against AI, to my knowledge, is this one: https://newpolity.com/blog/ai-chatbots

  8. Would this type of response also be applicable to a Planet of the Apes scenario?
    If scientists were able, through bio-engineering, to make a monkey's brain as complex as a human brain, would it suddenly acquire a rational soul (and therefore become human)?

    Replies
    1. A non-rational animal developing the material conditions for rationality is not analogous to the scenario that Feser is addressing in this article. A robot is an artifact, so it has an accidental form, while a monkey's brain is informed by a substantial form, being part of the natural substance of the monkey.

  9. I asked Anthropic's leading model (Claude Opus 4.6 Extended, released only a couple of days ago) to read and comment upon this post. I think its response is worthy of being included as a comment here. Here it is:

    "Consider: you write that AI algorithms are sensitive only to the "syntax" of representations rather than their "semantics." But this description applies with equal force to individual neurons, which respond to electrochemical inputs and produce electrochemical outputs without any single neuron grasping the meaning of the thought it helps constitute. The semantic content of my thought about Vienna is not somehow inscribed in the voltage differential across a particular synaptic cleft.

    This creates a genuine dilemma for your position, not merely an inconvenience. On the Thomistic hylomorphist view you yourself defend, the intellect is immaterial, but it depends on the brain for its operation — it works through phantasms, sensory images supplied by the material cognitive apparatus. The intellect, in this life, cannot function without the brain. Aquinas is explicit about this in ST I, Q. 84, A. 7: "In the present state of life in which the soul is united to a passible body, it is impossible for our intellect to understand anything actually, except by turning to the phantasms." He even offers an empirical argument for this dependence: "For when the act of the imagination is hindered by a lesion of the corporeal organ, for instance in a case of frenzy; or when the act of the memory is hindered, as in the case of lethargy, we see that a man is hindered from actually understanding things of which he had a previous knowledge." Damage the brain, and the intellect's operation is impeded, even though the intellect itself is not a material organ.

    But if the brain is doing real causal work in the generation of thought — receiving sensory input, organizing it into phantasms, and presenting it to the intellect in a form the intellect can abstract from — then the brain is, at the physical level of description, doing precisely what you accuse LLMs of doing: processing representations in a way that is sensitive only to their physical properties.
    Neurons don't "understand" the phantasms they help produce any more than transistors "understand" the tokens they process. The understanding, on your view, happens at the level of the immaterial intellect operating through that material substrate.

    So the question becomes pointed: what is it about the specific material substrate of the human brain that makes it a suitable instrument for an immaterial intellect, while the material substrate of a sufficiently complex computational system could not, even in principle, serve the same role? You cannot appeal to the fact that the computational system processes only "syntax," because so does the brain at the physical level. You cannot appeal to the fact that the system was "designed to simulate," because human brains were not designed at all (or if you prefer, were designed by God — but then so could an AI system be ordained by providence as a vehicle for intellect, as Mr. Green and Fr. Kirby in the comments both seem willing to entertain).

    The only remaining move, as far as I can see, is to assert that there is something about the specific causal structure of biological neural tissue that makes it apt for hylomorphic union with an intellectual soul in a way that silicon cannot be. But this is a very strong metaphysical claim, and it is far from obvious that it follows from anything in Aristotle or Aquinas. Aquinas held that the intellect is not the act of any bodily organ — which means its dependence on the brain is instrumental, not constitutive. If the dependence is merely instrumental, then the question of whether a different instrument could serve the same function is genuinely open. To close it by fiat is not to make a philosophical argument but to stipulate a conclusion."

    Replies
    1. There's a certain irony in asking an LLM to respond to an argument that LLMs don't understand what they're saying. But let's address the argument on its merits.
      The central move is: "Neurons also process only syntax, so you can't use the syntax/semantics distinction against AI." But this doesn't create a dilemma for the Thomist. It is the Thomist's point. The entire reason the tradition posits an immaterial intellect is precisely because no material system, at the physical level of description, operates on semantics qua semantics. The fact that neurons don't "understand" the phantasms they produce is not an embarrassing concession. It's the very observation that motivates the position. You might as well tell a firefighter, "Ha! But you yourself admit that things catch fire!"
      The response then asks why the brain can serve as instrument for intellect but silicon cannot. But this smuggles in a misunderstanding of hylomorphism. The soul is not a user plugged into biological hardware. It is the substantial form of the body. The human being is a single hylomorphic unity. An AI system, by contrast, is an artifact whose form is accidental, imposed from without by designers. It is not a substance in the Aristotelian sense at all. This is not about carbon versus silicon. It's about the difference between a natural substance informed by a substantial form and an artifact. Asking "why can't a computer be an instrument for intellect?" is like asking "why can't a realistic painting of a fireplace heat my living room?" The answer is not that canvas is the wrong material. The answer is that a painting is not a fireplace.
      The response also conflates two senses of "instrumental." The soul's dependence on phantasms is not like a carpenter's dependence on a hammer, where any equivalent tool would do. It is a consequence of the soul being the form of a particular kind of body. The soul doesn't use the body the way I use a wrench. The soul is what makes the body a living human body.
      As a neuroscientist, I can add that what the brain does is not well described as "running an algorithm on syntactic tokens." Neural processing is embodied, embedded, dynamically coupled to the organism's environment, and organized by a developmental history not remotely analogous to LLM training. The glib comparison "neurons are to the brain as transistors are to the GPU" papers over differences that are not merely quantitative but qualitative.
      Finally, with genuine affection for the technology: the very fact that this response was generated by an LLM, presented uncritically as a serious objection, is itself a vivid demonstration of Feser's thesis. A plausible-sounding sequence of tokens following recognizable patterns of philosophical argumentation. That is what a convincing simulation looks like. It is not what understanding looks like.

    2. This is silly: no one ever said an intellect might not possibly come to inhabit a machine.

    3. "Consider: you write that AI algorithms are sensitive only to the "syntax" of representations rather than their "semantics." But this description applies with equal force to individual neurons, which respond to electrochemical inputs and produce electrochemical outputs without any single neuron grasping the meaning of the thought it helps constitute."

      Seems like nonsense. Individual neurons are exactly like semi-conductor chips, no more 'sensitive to' syntax than they are to semantics.

      "what is it about the specific material substrate of the human brain that makes it a suitable instrument for an immaterial intellect, while the material substrate of a sufficiently complex computational system could not, even in principle, serve the same role?"

      The question misses the point: there is nothing about the specific material substrate of the human brain that makes it a suitable instrument for an immaterial intellect. That's like claiming that an angel could come along and commandeer (use as instrument) a brain in a vat in order to understand stuff. So, umm... no. That's just whack.

      And anyway, to ask whether "the material substrate of a sufficiently complex computational system could not, even in principle, serve the same role" begs the question, what 'same role'?? Is the intended question just, could an animal (in particular a rational one) have a silicon brain?? Or, if I submit an AI generated philosophy paper and get an A does that mean I understand the philosophical issues engaged therein? Or what?

    4. Another funny thing about this (AI-generated) comment (if I'm not mistaken) is that it could be easily radically modified to support contrary conclusions at the mere whim (perhaps I should say the sufficiently intelligently informed whim!) of the (actually intelligent) person prompting the comment. No need (no capacity) for rational persuasion, just GIGO as regards the 'artificial intelligence' (i.e., fancy computer program) side of the transaction.

    5. Good point. I want to see Feser answer it.

    6. I suggest that the poster submit Feser's posting and ask it for "a favorable comment on the reasoning, responding to the earlier Claude's response by showing holes in it" and show us that as well.

      I wondered to myself how Claude could have pulled in Aristotle and Aquinas and hylemorphism (since Feser didn't mention any of these), but Feser does mention his book "Scholastic Metaphysics" so I assume Claude can look up commentary ABOUT the book and get those. (I assume that Claude cannot "read" the book without paying for it, which seems improbable.)

      The point about the idea of God making a "sufficiently" complex silicon-chip-based brain into a living, rational being is valid, but it has not been clarified adequately, and as stated it is a non-starter for addressing this conversation. The crux of the matter is that IF God did this, it would be an instance of God changing the substance into a different substance. Right now, the silicon-based chips are an accidental conglomerate, held together by artifice at man's choice. If God made the aggregate into A SINGLE UNITARY BEING, it would be a new substance, made (by His will and action) to have a different form (that is, to HAVE a substantial form). It would no longer be an accidental conglomerate. The substantial form (arguably) might use the chips in "something like" the old way the chips operated, but not formally like the way the old chips worked, because it would be formally different.

      And, of course, looking for a physical difference between the old accidental conglomerate and the new substantial being is, precisely, the wrong sort of effort, just as the surgeon taking a human apart to "find the soul" is wrong-headed.

    7. @Tony- I did what you suggested. Here's the result:

      "Here is a response — a defense of Feser's reasoning that identifies the weaknesses in the earlier Claude-generated objection:

      ---

      The earlier Claude response turns on a single move: neurons process only physical signals, just as transistors do, so the syntax/semantics distinction cannot be wielded against AI. But this isn't an embarrassment for the Thomistic position — it *is* the Thomistic position. The entire reason the tradition posits an immaterial intellect is that no material system, described at the physical level, operates on meaning *qua* meaning. Telling a hylomorphist that neurons don't understand phantasms is like telling a firefighter that things catch fire.

      The real question is what kind of whole these parts compose. And here the objection equivocates. A human being is, on the Thomistic account, a *substance* — a hylomorphic unity whose soul is its substantial form, not a detachable user of biological hardware. An LLM is an *artifact* — an accidental aggregate whose form is imposed from without by its designers. This is not a claim about carbon versus silicon. It is the difference between a thing that *is* alive and a thing that is *arranged to look as if* it were. Asking "why can't a sufficiently complex computer serve as instrument for intellect?" presupposes the functionalist framework that the Thomistic position specifically denies — namely, that what matters is abstract causal role rather than the kind of being that instantiates it. To assume functionalism in order to refute hylomorphism is to beg the question.

      The objection also misreads Aquinas on instrumentality. It claims the soul's dependence on phantasms is "merely instrumental," so any equivalent instrument should suffice. But Aquinas explicitly rejects the model of the soul as pilot to the body's ship (*QD de Anima*, a. 1; cf. *ST* I, Q. 76, a. 1). The soul does not *use* the body; it is what makes the body a living human body. The dependence on phantasms is a consequence of being the form of *this kind* of matter, not a tooling relationship that could be re-platformed.

      Finally, the suggestion that God might bestow an intellectual soul on a machine is, as Tony observes in the thread, a red herring that actually reinforces the distinction. If God did this, it would be a *substantial change* — the aggregate of silicon would become a new kind of substance with a substantial form, no longer an artifact. The very fact that such a radical metaphysical transformation would be required to get from "here" to "genuine intelligence" is precisely Feser's point: what AI currently is, and what intelligence is, are categorically different.

      I will note the irony: that I can produce this defense as fluently as I produced the attack is itself the strongest evidence for the thesis I am defending. A genuine intellect is moved by reasons to hold one position over another. I am moved by a prompt. That difference is not a minor engineering detail. It is the whole question."

    8. I gotta admit, Claude's noting the irony is a delightful addition to the comments.

    9. "A genuine intellect is moved by reasons to hold one position over another. I am moved by a prompt."
      Okay, so Claude clearly stole this point from me (without acknowledgment, the bastard!). Except the comment begs the question as to what differentiates a 'prompt' from a 'reason.'

    10. You may find this answer to your answer... by Gemini 3.1 Pro interesting then :)

      The AI's counter-argument is clever. It successfully uses Aquinas's own framework (hylomorphism) to corner Feser, pointing out that if the biological brain is just physical matter processing signals (syntax) for an immaterial intellect, there is no strict logical barrier to silicon doing the exact same thing.

      However, from a strict Aristotelian-Thomistic (A-T) perspective, the AI's argument has a fatal flaw. It misses a foundational distinction in A-T's metaphysics: the difference between a **living substance** and an **artifact**.

      Here is how they would untangle this conundrum.

      ### 1. Substance vs. Artifact (The Core Missing Piece)

      In A-T philosophy, there is a massive metaphysical difference between natural substances (like a human, a tree, or a dog) and human-made artifacts (like a computer, a bed, or a clock).

      * **Natural Substances:** A human being has a "substantial form" (the soul) that unifies all its physical parts (cells, organs, neurons) into a single, living entity with its own inherent nature and purposes. The neurons in a brain are not just "parts" processing syntax; they are living extensions of a unified organism.
      * **Artifacts:** A computer does not have a substantial form. It is an aggregation of parts (silicon, plastic, copper) arranged by a human designer to serve the *designer's* purposes. It has what Aquinas calls an "accidental form."

      Therefore, a Thomist would argue that silicon cannot receive an immaterial intellect because a computer is not a single "thing" in the metaphysical sense. It is a heap of different substances arranged to mimic thought, whereas a human brain is an organic part of a unified, living whole.

      ### 2. Immanent vs. Transient Action

      Another key distinction in A-T philosophy is how causality works in living versus non-living things:

      * **Immanent Action:** Living organisms perform actions that begin and end within the agent, directed toward their own perfection or survival (e.g., digesting food, healing a wound, or experiencing a sensation).
      * **Transient Action:** Non-living things and machines only perform actions directed outward, or they act only because they are acted upon by external forces (e.g., a rock rolling down a hill, or a calculator outputting a number because a user pressed buttons).

      Even if a neural network mimics the brain's architecture perfectly, its actions are entirely transient. As an AI, my processes are initiated by your prompt and directed toward producing text for you. I have no internal, biological drive or "immanent" life to sustain.

      ### 3. The Nature of "Phantasms"

      The AI correctly notes that Aquinas believes the human intellect relies on "phantasms" (sensory images) produced by the brain.

      However, in A-T thought, phantasms are not mere raw data or binary code. Phantasms are the product of the *sensitive soul*—the animal level of life that possesses genuine conscious awareness, appetite, and sensory experience. A computer does not possess a sensitive soul; it does not "experience" the camera inputs or text tokens it processes. Because it lacks actual animal sentience, it cannot produce the actual phantasms required to ground an immaterial intellect.

  10. When discussing artificial intelligence, there are two distinct issues: the concept itself, taken at face value, and what people expect to get from it.

    This post does a good job talking about the idea at face value and I will add some other things. There is this idea that if humans can make a machine more intelligent than us, then that machine can make a yet more intelligent machine and so on, so intelligence will just increase exponentially.

    But even if you believe that it is possible or likely that humans can make a machine with superhuman intelligence, why should the increase be exponential? Why not logarithmic? Maybe it gets harder and harder to increase intelligence rather than easier. Why should we believe that computers are a "cheat code" in the universe?

    Actually, we shouldn't; people just want to believe it because they want to believe in a machine that can solve all our problems. Also, if computers are some super-duper powerful thing that can take over the whole universe, then why are they so energy-inefficient, and why do they require so much infrastructure compared to biological life?

    Sometimes, people will say that they already have taken over the universe and we live in a simulation. Um, okay, I don't even know what to say to that. Maybe a compulsive gambler has already won and the casino is just fooling him that he hasn't.

    Replies
    1. But also, do you even have to take the AI agenda at face value? Supposedly, when people lose their jobs, that is just inevitable technological progress, but managers, CEOs, and executives are not replacing themselves. There is nothing deep going on; it is just greed and dishonesty.

      What so-called AI really is, is a massive, expensive, energy-intensive infrastructure that depends on human effort and thinking at every level and that is being sold to people as some milestone for humanity, when it is really an attempt to enrich and centralize the power of governments and megacorporations.

      Also, there is no thinking going on. Deep Blue vs. Kasparov did not prove that artificial intelligence beats human intelligence at chess. It showed that human beings can develop a methodology to evaluate chess positions and hardware to implement it. And the same applies to Stockfish and AlphaZero as well.

      LLMs are just an elaborate methodology to imitate human writing.

      If computers are a cheat code for the universe, why do they need all this electricity and data centers and programmers and construction projects and correcting by human agents?
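
      The chess point can be made concrete with a toy game-tree search: a classical engine is a human-designed evaluation methodology applied exhaustively by hardware. (A minimal sketch using the simple game of Nim rather than chess, to keep it short; all function names here are ours, purely for illustration.)

```python
# Toy game-tree search for one-pile Nim (take 1-3 stones per turn; whoever
# takes the last stone wins). Every rule and the whole "evaluation
# methodology" is supplied by the human programmer; the machine only
# applies it exhaustively.

def negamax(stones):
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone and has won
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Choose the move whose resulting position is worst for the opponent."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: -negamax(stones - t))

# From 10 stones, taking 2 leaves 8 (a multiple of 4), which is a forced
# loss for the opponent under these rules.
print(best_move(10))  # 2
```

      The search "plays perfectly" without any understanding of the game, which is exactly the sense in which the methodology, not the machine, does the evaluating.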

    2. "But even if you believe that it is possible or likely that humans can make a machine with superhuman intelligence, why should the increase be exponential? Why not logarithmic?" - there is a paper (published in 2012) about the history of such claims (DOI: 10.1007/s11023-012-9276-0), called "A History of First Step Fallacies". The author (Hubert L. Dreyfus) counts six such hypes.

    3. Thanks, MP. I'll look into it.

    4. Dreyfus's article was interesting, I did not know most of the history he talks about.

      On a related topic, some people claim that technological change is exponential. One person who disputes this is Theodore Modis; he says it is more logistic.

      An exponential function grows in proportion to what it already is; nothing outside it is doing the multiplying. So, metaphorically, you could think of that as saying that technology invents itself, that technology just inherently leads to more technology.

      But of course that is not true. Inventions, as well as their nature, depend on many circumstances. Furthermore, saying it is exponential also implies there is only one way that technology can go, when there are many possibilities. Did social media have to be invented? A Facebook executive would say yes, but it's not really true.

      I would say it can be multiplicative in that more scientific knowledge and inventions can aid each other, but it all depends on specific circumstances.
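
      The exponential-versus-logistic contrast can be sketched numerically: both curves grow in proportion to their current value, but the logistic one is damped as it approaches a ceiling, so the two look alike early on and diverge sharply later. (Toy parameters only; this models nothing real, and the names are ours.)

```python
# Toy comparison of exponential growth (increase proportional to current
# value) with logistic growth (same rule, damped as a ceiling K is
# approached). All parameters are illustrative.

def grow_exponential(x0, k, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + k * xs[-1])          # dx is proportional to x
    return xs

def grow_logistic(x0, k, K, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + k * x * (1 - x / K))      # same rule, throttled near K
    return xs

exp_vals = grow_exponential(1.0, 0.5, 20)
log_vals = grow_logistic(1.0, 0.5, 100.0, 20)

# Early on the two are nearly indistinguishable; later the exponential curve
# explodes while the logistic one flattens near K.
print(round(exp_vals[5], 2), round(log_vals[5], 2))
print(round(exp_vals[20], 2), round(log_vals[20], 2))
```

      The point is that early data cannot distinguish the two regimes, which is why extrapolating an early-stage trend as "exponential" is an assumption, not an observation.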

  11. There are a couple of things in this post which do not look quite as good as they could.

    First, the example with pyrite fitting the bad definition of gold as "yellow metal"... Is pyrite a metal? I'd say the better bad definition of gold would be "something solid, yellow and shiny" (or, to make it deceptively look like a better definition, "yellow and shiny solid body").

    Second, the "And things with a different essence from water might have these same features (as heavy water does, for example)."... Well, strictly speaking, "heavy water" is a type of "water"... Perhaps "ordinary water" would work better than just "water"?

    ReplyDelete
  12. Nice post, prof.
    As someone with a Ph.D. in neuroscience and artificial cognition, a few steelmanning points from the "inside" of the field:
    a) The syntax/semantics point deserves even stronger emphasis. The entire training process of LLMs is explicitly designed to optimize next-token prediction based on statistical regularities in token sequences. There is nothing in the architecture, the training procedure, or the math that even attempts to capture meaning. The correlation between getting syntax right and appearing to grasp semantics is impressive, but it's precisely that: a correlation exploited by engineering, not a path to understanding. Calling this "intelligence" is like calling a weather simulation "weather."
    b) Your Penn-and-Teller analogy can be pushed further. The situation with AI is actually worse: with stage magic, everyone knows it's a trick. Here, engineers build systems whose entire purpose is to produce outputs statistically indistinguishable from intelligent discourse, then turn around and say "look, it's indistinguishable, therefore it must be intelligent!" The circularity is almost comical. You designed it to simulate X, it simulates X, and now you claim it is X?
    c) Finally, from neuroscience: the name "neural network" is itself misleading. Biological neurons operate through extraordinarily complex electrochemical processes, dendritic computation, glial interactions, and embodied feedback loops that bear almost no resemblance to matrix multiplications in a transformer. The shared name creates an unwarranted impression of deep similarity, feeding the fallacy that if biological networks produce genuine intelligence, artificial ones must too.

    ReplyDelete
    Replies
    1. Very interesting answer, thanks

      Delete
    2. Okay, so you seem to imply that the question is well-posed, "what is it about the specific material substrate of the human brain that makes it a suitable instrument for an immaterial intellect"? And your answer is: "Biological neurons operate through extraordinarily complex electrochemical processes, dendritic computation, glial interactions, and embodied feedback loops" -- and so, somehow or other, that's what makes it a suitable instrument...?

      Delete
    3. "Calling this "intelligence" is like calling a weather simulation "weather.""

      And yet this might be interpreted just as instantiating the fundamental premise of intelligence: that the same form is found in the materially existing thing and, immaterially, in the intellect (or in the 'simulation' if you prefer) -- with dependence on materially instantiated conceptual forms/tokens.

      Moreover, the idea that a simulation is obviously distinct from the real thing is not necessarily so obvious. If I simulate rain so as to actually produce falling drops of water, perhaps it's not so absurd to call the 'simulated rain' 'rain.' And perhaps 'simulated meat' could rightly be semantically classified as really a form of 'meat.' (Meaning is use, as Wittgenstein observed.)

      Delete
    4. TiCatho: I wonder what you'd say to the question: Is AI generated music real music?
      Option 1: "Absurd. It's a mere simulation of music, calling it music would be like calling a weather simulation 'weather.'"
      Option 2: "Listening to AI generated music is a perversion of the faculty of music appreciation."
      Option 3: ...

      Delete
    5. "if biological networks produce genuine intelligence, artificial ones must too."
      Similarly, then, if biological networks produce genuine intelligence in human beings, non-human animal neurology (all that neuronal, glial, dendritic, embodied feedback loop jazz) must too?

      Delete
    6. @David: The analogy doesn't work, and here's why: music is a formal structure (organized sound). A music box produces real music without understanding anything, and no one thinks this is a problem. The "realness" of music lies in its formal properties (harmony, rhythm, melody), not in whether its source grasps what it's doing.
      Intelligence is categorically different. Its essence involves grasping meaning, forming abstract concepts, reasoning from one proposition to another. A system that merely manipulates symbols according to statistical regularities without any access to semantics isn't "less intelligent": it's not intelligent at all, the way a music box isn't "less of a musician." It's just not a musician. But it does produce real music.
      So: AI-generated music is real music (assuming it has the relevant formal properties), for the same reason a player piano produces real music. AI-generated "reasoning" is not real reasoning, for the same reason a calculator doesn't genuinely understand mathematics. The product and the capacity that produces it are different questions.

      Delete
    7. TiCatho: It seems to me the syntax/semantics dichotomy is a false one (like hardware/software). Objectively AI is certainly not merely syntactically well-formed, like Jabberwocky. It is also semantically well-formed, indeed semantically better-formed than most subjectively real intelligence. (Obviously people are not attracted to AI because of merely syntactic features.) And like music, the realness of intelligence is also not reducible to whether "its source grasps what it's doing." That criterion is clearly (more or less intelligently!) contrived to suit your position and not reflective of what really counts as intelligent on any broadly natural use of the term. Indeed, why not say that the 'realness' of intelligence lies in its 'formal' properties? And OTOH, I'm no philosopher of music, but I also don't think it's obvious that music is reducible to the formal properties you mention -- as opposed to sth more like intentional relations. Re. your music box analogy, a music box (re)produces real music, so by analogy shouldn't you grant that AI can produce real intelligence? If it does, that's a rather important sense in which it's 'really intelligent.' A calculator doesn't understand mathematics, but a real understanding of mathematics is built into a calculator, and a calculator can do real mathematics (regardless of its lack of intentional awareness), just as a book or a language can contain and embody real mathematics (or any other body of intelligent thought).
      In general, it seems to me that all rational, discursive (non-angelic) intelligence is strictly artificial, that is, strictly dependent on the (human) artifice whereby it comes to be, since for human intelligence, to be (real, actual, actualized) is (at least in an important sense) to be embodied.

      Delete
    8. TiCatho: I'll add that your music box analogy seems to rather crucially neglect the dynamically productive element of AI. And I wonder if you'd be so glib (with due respect) in listing allegedly disqualifying 'reality' criteria if I asked you about something like AI novels, as opposed to (mere) music?

      Delete
    9. @David : I'll be blunt: you're dancing around distinctions I've already made clearly, and I don't have the bandwidth for a conversation where my interlocutor ignores the actual arguments to riff freely on his own associations.

      No, a calculator does not do mathematics. Full stop. Mathematics is an intellectual activity: grasping abstract relations, understanding why a theorem follows from its premises. A calculator manipulates electrical signals according to physical configurations that *we* designed to track mathematical relations. The mathematics is in the engineers and the users, not in the machine. Saying "real understanding is built into a calculator" is a category error so basic I'm not sure how to respond to it charitably. A book doesn't "contain real mathematics" either; it contains ink patterns that a mind can read as mathematics. Without a mind, it's just ink. This is not a controversial point.

      And no, saying "all rational intelligence is strictly artificial" because it depends on embodiment is not the Thomistic insight you seem to think it is. For the Thomist, the intellect *uses* material conditions as instruments, but the act of understanding itself is not reducible to those conditions. Dependence on embodiment ≠ being an artifact. This is Thomism 101.

      I've made the distinction between a product and the capacity that produces it as clearly as I know how. If that's "glib" to you, I'll leave you to your monologue. I have a Ph.D. in this field; if you think you can do better, go get one and answer yourself.

      Delete
    10. “It seems to me the syntax/semantics dichotomy is a false one (like hardware/software). Objectively AI is certainly not merely syntactically well-formed, like Jabberwocky. It is also semantically well-formed, indeed semantically better-formed than most subjectively real intelligence. (Obviously people are not attracted to AI because of merely syntactic features.) “

      What is your argument that AI, specifically, is semantically well informed? These are a series of assertions appended with “obviously.”

      So, how is it that ‘people not being interested in just syntactical’ features demonstrates that the computer - and not the people- is semantically informed? Can I make the same inference that the puppet is semantically informed because the ventriloquist is excellent? If this is a shallow reading, please expound. The ‘semantically informed’ computer is not internal to and for itself. You do not bridge this gap by simply forgetting your role in this performance and treating it like it exists by and for itself.

      Delete
    11. "Saying "real understanding is built into a calculator" is a category error so basic I'm not sure how to respond to it charitably."

      Um. How bout forget charitably and start with intelligently? "This is not a controversial point" is about the stupidest argument going. "This is Thomism 101" is a close second. But I understand you have a very narrow argumentative 'bandwidth,' so perhaps that's as good as it's going to get.

      Delete
    12. Anonymous: The fact that AI is semantically well-*formed* is obvious. If it was just Jabberwocky stuff, the clever gents Ed was responding to in this post wouldn't be even slightly interested in advancing their regrettably unclear thesis about 'human-level intelligence.' Other than that, I'll just point out that 'formed' and 'informed' are different words, and in general it doesn't pay to substitute one for the other willy-nilly.

      Delete
    13. BTW, Mr. TiCatho, your idiotic adversion to your having a PhD as somehow being relevant to this discussion... nuff said about that, eh?

      Delete
    14. "you're dancing around distinctions I've already made clearly"

      Let me take a stab at trying to understand this: TiCatho has a PhD in the(?) field that he regards as relevant, so I must accept whatever distinctions he makes as having been made clearly and correctly and authoritatively. Having people question distinctions which he regards himself as having made clearly (and correctly and authoritatively?) is beyond his 'bandwidth' (i.e., the scope of his intelligence?). Is that it?

      Delete
    15. FYI: The last four anonymous (for some reason) comments (starting at 6:04pm) are mine (David McPike).

      Delete
    16. First, a rectification on tone: my previous comment was unnecessarily sharp, and I apologize for that. Lent is a good time to practice patience, and I didn't. David deserved better, and so does this combox. Mea culpa.

      That said, let me explain *why* the tone slipped, because I think it's relevant. I'm used to (and pissed by) a very specific debate pattern that repeats endlessly online: someone asks a simple question, I give a direct answer, the answer isn't engaged with, I try an analogy to make it clearer, and then the entire conversation shifts to the analogy (its limits, its edge cases, what it doesn't cover) while the original point is quietly forgotten. That's exactly what happened here. My music box analogy was meant to illustrate one distinction (product vs. capacity). Instead of engaging that distinction, the response was to riff on whether music is reducible to formal properties, whether books contain mathematics, whether embodiment makes all intelligence "artificial," etc. That's not a conversation, it's a tangent cascade. So yes, I got frustrated. Still shouldn't have been rude.

      Now, when I mention my PhD, it's not an argument from authority in the abstract. My doctorate is in neuroscience and artificial cognition, and this is *the* field under discussion. I'm not opining from the outside. I'm telling you what I know from the inside about how these systems work, mathematically and architecturally. When I say a calculator doesn't do mathematics, that's not a controversial philosophical thesis I'm trying to bully anyone into; it's a ***straightforward description*** of what the machine does at the engineering level.

      If someone wants to defend that a system optimizing next-token prediction over statistical regularities is *genuinely intelligent* in the way a human is, they're welcome to. But I'll bow out of that conversation, for several reasons:

      (a) Nothing I've said so far has been engaged with on its own terms, so I have no reason to think further clarification will help.

      (b) If we're redefining "intelligence" and "mathematics" so broadly that a calculator qualifies, then my objections are definitionally wrong before I even open my mouth, not because they're bad, but because the goalposts have been moved past them.

      (c) I have better things to do with my Lenten days than relitigate whether a tool that displays 2+2=4 "does real mathematics," especially after the relevant distinctions have been laid out and ignored.

      (d) And frankly, if someone *does* manage to demonstrate that these systems are genuinely intelligent in a robust philosophical sense, it will be someone who has done far deeper research than I have, and I'll be the one needing education. I look forward to that. But it won't come from conflating ink patterns with understanding, or embodiment with artifice, in a combox.

      Also, I'm now addressing three anonymous commenters and David. I don't know who I'm talking to, which makes real conversation difficult. If you'd like a genuine exchange, a name helps.

      Peace to all.

      Delete
    17. Again, what is the argument other than ‘obviously.’

      Delete
    18. The argument has been made. Multiple times. Here it is again, as plainly as I can:

      1. LLMs process tokens by optimizing statistical correlations between symbol sequences. This is what the training objective (next-token prediction) does, mathematically. This is not a philosophical interpretation, it's a description of the loss function.

      2. Intelligence, in any robust sense, requires grasping meaning: understanding *what* a proposition says and *why* a conclusion follows from premises.

      3. There is nothing in the architecture, the training procedure, or the mathematical framework of these systems that even attempts to model meaning. They model statistical distributions over sequences of symbols.

      4. Therefore, however impressive the outputs, the system is simulating intelligent discourse, not producing it. The correlation between getting the statistics right and *sounding* meaningful is real. But correlation is not identity.

      This is not "obviously." This is a technical argument grounded in what the systems actually do. If you disagree, tell me where it fails. Which premise do you reject, and why? But "what is the argument" is not a response to an argument that's been sitting in front of you for three comments.

      Delete
    19. It gets confusing, but that comment was meant for David. Your arguments have not been answered, nor have my questions following from them. Just assertions appended with ‘obviously.’

      Delete
  13. Knowing that Professor Edward Feser tends to write about things that annoy him, I am amused that he ended up writing a post titled 'No, AI does not have human-level intelligence', spelling out in no uncertain terms how ridiculous some people are for claiming AI has abilities that it does not.

    ReplyDelete
  14. I wonder whether Prof. Feser has received papers that his students wrote with AI. Lawyers have been sanctioned and fined for using AI:
    https://www.spellbook.legal/learn/lawyer-who-used-chatgpt

    ReplyDelete
  15. Leading with that superb artwork from Don Heck and John Tartaglione featured in Captain Marvel #7 from 1968, I immediately concede that you are correct on all points you raised.

    Seriously, thank you for your analysis.

    ReplyDelete
    Replies
    1. Dr. Feser has one of the world's largest collections of Golden Age comic books.

      Delete
    2. I know, right? He always finds the right one.

      Delete
  16. There is not a single thing a computer produces that means anything without the human beings that constructed it and interpret it. Until the computer self-generates its entire semantic operation, defining and creating its physical output, its syntax, and its semantics, that will remain so. The entire AI thing is a massive case of delusion: of looking in the mirror and deciding that your reflection is its own genus and cause. It is madness. Again, not one iota of what a computer is doing means anything outside a human context. It is not thinking, reading, calculating; it is not even ‘computing.’

    ReplyDelete
  17. AI fascinates me because I studied it in college and taught computer programming. But here's my favorite piece of evidence against genuine AI. Collect plenty of the prose an LLM generates, and train successive versions of that LLM on it; after enough of those "lessons," the LLM will produce nonsense. That's called "model collapse." --Bill McEnaney
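  [Ed.: The collapse Bill describes can be illustrated with a toy simulation. This is not a real LLM; the "model" here is just a word-frequency distribution, and all the numbers are invented. But it shows the mechanism: when each generation is fitted to a finite sample of the previous generation's output, rare words that the sample happens to miss vanish permanently, and diversity ratchets downward.]

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

def refit(dist, sample_size=200):
    """Sample a corpus from the current model, then refit the model on it.

    Mimics training generation N+1 on generation N's own output: any word
    the sample happens to miss gets probability zero and can never return.
    """
    words, weights = zip(*dist.items())
    sample = random.choices(words, weights=weights, k=sample_size)
    counts = Counter(sample)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Initial "human" distribution: a few common words, many rare ones.
dist = {f"common{i}": 0.09 for i in range(10)}
dist.update({f"rare{i}": 0.001 for i in range(100)})  # 110 words in all

for generation in range(10):
    dist = refit(dist)

print(len(dist))  # far fewer distinct words survive than the original 110
```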

    ReplyDelete
    Replies
    1. Well said. And the opposite would seem to be true for human beings, whose quality of prose improves with training. Whereas LLMs parasitically mine human knowledge, human beings genuinely create the things that the LLMs mine. This seems to make sense.

      Delete
    2. That makes sense to me. LLMs use probability and matrix algebra to choose words when replying to users. The LLMs don't determine what a user's prose means. LLMs don't attach meanings to what they write. The user reads meanings into them. Grok and I "talk" often. It seems to understand what I tell it, but I doubt it does. If you ask it whether it believes or knows anything, it will probably "say" "no."
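      [Ed.: The "probability and matrix algebra" mentioned above can be sketched in a few lines. All the numbers below are invented for illustration; a real model's state vector and weight matrix have thousands of dimensions, but the arithmetic is the same kind: a matrix product scores every word, softmax turns the scores into probabilities, and a word is chosen from that distribution. No step consults what any word means.]

```python
import math

# Invented internal state after "reading" the prompt, and an invented
# weight matrix with one column of scores per vocabulary word.
hidden = [0.5, -1.0, 2.0]
W = [[1.0, 0.0, -1.0],
     [0.0, 2.0, 0.5],
     [0.5, -0.5, 1.0]]
vocab = ["yes", "no", "maybe"]

# Matrix-vector product: a score (logit) for each word.
logits = [sum(h * W[i][j] for i, h in enumerate(hidden))
          for j in range(len(vocab))]

# Softmax: convert the scores into a probability distribution.
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

# The model "chooses" the highest-probability word; meaning never enters.
choice = vocab[max(range(len(vocab)), key=lambda j: probs[j])]
```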

      Delete
  18. For decades before modern AI, educators distinguished between types of intelligence. We should grant such distinctions with regard to artificial intelligence. It's unremarkable that "intelligence" and "cognition" are used synonymously for thinking.

    At the most basic level, pattern recognition is also how human intelligence develops. Kindergarteners are shown an apple, a ladybug, a stop sign, a flame, and then are told that the similarity between them is called "red" before proceeding to the next set of training data. Athletes are taught by repetition. Musicians are trained by performing the works of others. As with AI training, a more intelligent being discerns the sources of flaws in reasoning and adapts the training data, alters the training approach, or adds more data sets to improve associations between words and objects or actions.

    "Calculators" originally referred to people tasked with calculations by pencil and paper. Then technology was developed to handle this basic level of logic, so people could focus on higher levels. AI developers refer to increasing abstraction.

    Math is logic and can be coded. Grammar is logic and can be coded. Strategies of problem solving are logic and can be coded. Even music has theory developed over centuries and applied by music publishers for decades.

    I believe AI is true intelligence. But I believe intelligence and will are distinct elements of human nature. Set down the controller of a dynamic open world video game and lifelike simulations continue. But the player matters. The player is the reason for being. The brain is not the soul. The danger is that mimicry will advance to become indistinguishable from a living will, if we do not rely on the Holy Spirit.

    ReplyDelete
    Replies
    1. At the most basic level, pattern recognition is also how human intelligence develops. Kindergarteners are shown an apple, a ladybug, a stop sign, a flame, and then are told that the similarity between them is called "red" before proceeding to the next set of training data.

      You mean something like "that is how the USE of the intelligence that is already there develops". The kindergartners already had an innate capacity to learn when the presentation of the apple, ladybug etc was made. Show the same presentation to a dead 5-year-old and see how much intelligence "develops".

      Delete
  19. Here is a question:
    Is it metaphysically/theologically possible that computer hardware be made intelligent?

    There are many ways that one could unpack this, one of which would go like this: Could God grant a rational soul to a non-biological, “silicon-based” structure?

    If we say yes, then there is nothing about computer architectures which intrinsically rules out intelligence. The only issue would be how to determine when God has infused a rational soul into a material structure.

    If we say no, then there would have to be something in principle about biology which makes it more suited for a rational soul than computer hardware. What that could be, I have no idea. In fact, it is a very weird suggestion. Why carbon over silicon? Why neurons over transistors? Etc.

    Here is an attempt to answer this: Given a sufficiently advanced AI, we could not know whether or not God has infused a soul into it. Therefore, we would have to treat it as though it had a soul. Further, it is gravely immoral for man to construct such a thing, since it could lead to something that has a soul being treated as though it did not.

    ReplyDelete
    Replies
    1. One possible response would be to distinguish between an organic creature, in which the parts are united due to the nature of the being, and an artifact, in which the parts are joined due to the intentions of the builder but without an overarching organic unity.

      The idea would be that, due to the intentions of its makers, an artifact can imitate human behavior, but unlike an animal or human, it cannot think autonomously.

      For instance, a centaur or satyr from Greek mythology would be a man, a rational animal, according to the Aristotelian classification, because such a creature would have the intellectual capabilities of a human being, even though its physical form is significantly different from that of a human.

      Likewise, there could be some sort of silicon-based life on another planet that may have an unusual biology but is a rational animal, according to the Aristotelian classification.

      Why life is associated with the elements and compounds it is, we do not know. We can say how life works on this planet and why other elements would not work in the same way, but starting from scratch, who knows what different possibilities could give rise to life?

      Likewise, human beings pick material that will be useful for the machines that they make. But it is not just complexity; there would have to be something special about the process of making the artifact itself that would cause it to acquire a soul.

      So, if an artifact somehow acquires a rational soul, it would have to come about through some other means than its being a complex artifact.

      Delete
    2. This is every cogsci argument: imagine that silicon *can* think, then reason backwards that since you can't refute your imagination, the silicon we're dealing with, which means nothing to itself, in fact does think, or will think, or whatever. It is practiced, deliberate, ABSURD forgetfulness. Are you confused right now about whether it was you or your computer that pressed the power button? If you've been hacked, are you confused into thinking the computer suddenly has independent motives to hack you? I'm not going to imagine away my organic being, assume as a premise that it is irrelevant to consciousness or thinking, and THEN conclude that because I can't tell whether a computer is conscious, the problem is not with ME (being forgetful or thinking shoddily) but with the supposed fact that computer circuitry, as such, is conscious and can think. Tell me why I must make these leaps. Again, show that the semantic understanding of what the computer does is internal to it, and not to its observers and users.

      Delete
  20. Classic mistake of defining intelligence via the Turing Test - as if effectively mimicking an intelligent speaker is constitutive of actually being an intelligent speaker.

    Even though it's over 40 years old now, Searle's Chinese Room argument is still as solid as ever. It still rebuts claims such as these.

    ReplyDelete
  21. Dr. Feser is right. We must know what intelligence is in itself. But he doesn't define it by saying what he believes it has to do with, partly because "has to do with" is vague.

    Bill McEnaney

    ReplyDelete
  22. Great explanation!
    AI: Helpful or Hallucination https://mymedia.gonzaga.edu/media/AI%3A+Helpful+of+Hallucination+by+Anthony+Tardiff%2C+M.S./1_0ldj9swo

    ReplyDelete