Monday, December 30, 2013

Da Ya Think I’m Sphexy?


Sphex is a genus of wasp which Douglas Hofstadter, Daniel Dennett, and other writers on cognitive science and philosophy of mind have sometimes made use of to illustrate a point about what constitutes genuine intelligence.  The standard story has it that the female Sphex wasp will paralyze a cricket, take it to her burrow, go in to check that all is well and then come back out to drag the cricket in.  So far that might sound pretty intelligent.  However, if an experimenter moves the cricket a few inches while the wasp is inside, then when she emerges she will move the cricket back into place in front of the burrow and go in to check again rather than just take the cricket in directly.  And she will (again, so the standard story goes) repeat this ritual over and over if the experimenter keeps moving the cricket.

Dennett, who has appealed to the Sphex example several times over the decades, tells us in his latest book Intuition Pumps and Other Tools for Thinking that in fact this standard account is oversimplified and that it turns out the Sphex wasp isn’t quite as stupid as legend has it.  All the same, “sphexishness” has come to be a useful label for canned behavioral routines that can at best mimic intelligence but never reflect the real McCoy.  Sphexishness is not limited to cases as easy to detect as the robotic Sphex ritual.  A creature might exhibit a much higher degree of flexibility than the Sphex of legend, but still show itself to be “sphexish” in more subtle ways. 

What is it, exactly, that sphexish creatures are missing that intelligent creatures like us have?  In Gödel, Escher, Bach, Hofstadter answers:

[I]n the wasp brain, there may be rudimentary symbols, capable of triggering each other; but there is nothing like the human capacity to see several instances as instances of an as-yet-unformed class, and then to make the class symbol; nor is there anything like the human ability to wonder, “What if I did this -- what would ensue in that hypothetical world?”  This type of thought process requires an ability to manufacture instances and to manipulate them as if they were symbols standing for objects in a real situation, although that situation may not be the case, and may never be the case. (p. 361)

In his essay “On the Seeming Paradox of Mechanizing Creativity,” reprinted in Metamagical Themas, Hofstadter elaborates on the difference between us and Sphex:

I would summarize it by saying that it is a general sensitivity to patterns, an ability to spot patterns of unanticipated types in unanticipated places at unanticipated times in unanticipated media.  For instance, you just spotted an unanticipated pattern -- five repetitions of a word… Neither in your schooling nor in your genes was there any explicit preparation for such acts of perception.  All you had going for you is an ability to see sameness.  All human beings have that readiness, that alertness, and that is what makes them so antisphexish.  Whenever they get into some kind of “loop”, they quickly sense it.  Something happens inside their heads -- a kind of “loop detector” fires.  Or you can think of it as a “rut detector”, a “sameness detector” -- but no matter how you phrase it, the possession of this ability to break out of loops of all sorts seems the antithesis of the mechanical.  Or, to put it the other way around, the essence of the mechanical seems to be in its lack of novelty and its repetitiveness, in its trappedness in some kind of precisely delimited space. (pp. 531-32)

What Hofstadter is describing is essentially what the Aristotelian Scholastic philosopher would characterize as the intellect’s ability to abstract universal concepts from particular material things.  For the Scholastic, such concepts differ in kind and not merely degree both from mental images and from neural representations of any of the sorts posited by cognitive scientists and materialist philosophers of mind.  Mental images and neural representations may have a certain generality -- a dog’s mental image or neural representation of a ball may be triggered by a number of stimuli and direct the dog to pursue a number of objects -- but they lack the strict universality of concepts.  Their content is also inherently indeterminate in a way the content of concepts is not.  For these reasons, our strictly intellectual powers are incorporeal in a way our imaginative and sensory powers are not.  (I defend all these claims at length in my ACPQ article “Kripke, Ross, and the Immaterial Aspects of Thought.”)  I suppose I need to add for the village naturalists out there that these claims have absolutely nothing to do with belief in ectoplasm, immaterial “stuff,” magic, or any of the other straw men of materialist polemic.  Some materialists are positively Sphex-like in their sheer inability to see in their opponents anything but these ridiculous caricatures.

As that last gibe implies, human beings are not immune to what superficially resembles sphexishness.  Because we are rational animals capable of forming and making use of concepts, we necessarily transcend the true sphexishness that afflicts other animals.  But because we are rational animals, we are sometimes susceptible to behavior that is at least pseudo-sphexish.  For one thing, we can be injured in a way that impairs or blocks the exercise of our rational powers.  For another, our prejudices, emotions, and the limitations on our knowledge can get us into cognitive ruts.  But a healthy rational animal will in principle be able to perceive and rise above such ruts in a way that a sub-rational animal cannot.

Now, Dennett, perceptive fellow that he is when he wants to be, argues in Chapter 2 of his book Elbow Room: The Varieties of Free Will Worth Wanting that any purely physical system is going to be essentially sphexish.  The reason is that qua physical such a system can only ever be sensitive to syntactical properties, and syntactical properties can never add up to semantic properties.  Now a non-sphexish creature would have to be sensitive to semantic properties.  Hence a purely physical and thus purely syntactic system is inevitably going to be a sphexish system.  Dennett thinks it can at least approximate non-sphexishness, however, because a sufficiently complex “syntactic engine” will in his view at least approximate a perfect “semantic engine.”   And sphexishly dogmatic materialist that he is, Dennett insists that human beings are purely physical.  Hence, though we seem non-sphexish, Dennett insists that we really are sphexish, but -- being exquisitely complex syntactical engines -- in so subtle a way that for practical purposes we can treat ourselves as if we were not.

But as Howard Robinson points out in the introduction to his edited volume Objections to Physicalism, Dennett’s position is a muddle.  A purely syntactical engine will not even approximate a perfect semantic engine, because it will fail to be semantic at all.  Syntax by itself doesn’t get you imperfect semantics; it gets you exactly zero semantics, just as the ketchup kids use for blood at Halloween time will never get you even imperfect real blood no matter how much of it you pour out.  Dennett knows this, which is why (as Robinson notes) he has to resort to the essentially instrumentalist position that our sophistication as complex syntactic engines makes it useful for us to interpret ourselves as if we were semantic engines.  But this too is a muddle, for interpretation is itself an act that presupposes real semantics rather than a mere ersatz.  Dennett’s further reformulations of his position (e.g. in his paper “Real Patterns”) only ever paper over this fundamental incoherence rather than resolve it, but his dogmatic materialism makes him think there must be some way to make it something other than the reductio ad absurdum that it is.

In any event, it would be a mistake to suppose that our basis for regarding ourselves as non-sphexish is that our behavior so closely approximates non-sphexishness.   It is not a kind of inductive inference to the effect that since we usually act unsphexish, we must really be unsphexish (as if further empirical evidence could in principle lead us to revise this “opinion” about ourselves).  It is much simpler and more obvious and conclusive than that.  It is that we have things sphexish creatures do not have: concepts.  End of story.  The reasoning isn’t: “We don’t act very sphexish; therefore we must have concepts.”  It’s rather: “We have concepts; that’s why we don’t act very sphexish.”

It is silly to object: “But a sphexish creature would think the same thing!”  No it wouldn’t, because a sphexish creature, being by Dennett’s own admission devoid of semantics, wouldn’t think at all in the first place.  It wouldn’t have even sphexish concepts (whatever that might mean), because it wouldn’t have any concepts at all.  Nor can we possibly be wrong in supposing that we have concepts (whatever “supposing oneself not to have concepts” might mean), for reasons that are blindingly obvious but which, if you really have any doubts, are spelled out in the ACPQ paper referred to above.  (See also the many posts about eliminativism you’ll find on this blog, such as this one, this one, this one, and this one.)

Now, you’ll recall from a recent post the notion of a cognitive zombie -- a creature physically and behaviorally identical to a normal human being, but devoid of concepts and thus devoid of the other aspects of rationality.  You might think that a cognitive zombie would be sphexish, but that is a mistake.  If it was sphexish, it wouldn’t be behaviorally identical to a normal human being, and thus by definition wouldn’t be a cognitive zombie.  A true cognitive zombie would be something which would, like a sphexish creature, be devoid of concepts, but which, like a normal human being, would behave as if it had concepts.

The notion of sphexishness thus helps to clarify the notion of a cognitive zombie.  If ya think I’m sphexy, then you don’t think I’m a cognitive zombie.  A sphexy Rod Stewart on his best day wouldn’t pass for a cognitive zombie.  A James Brown sphex machine wouldn’t pass either.  People magazine’s Sphexiest Man Alive definitely wouldn’t be a cognitive zombie.  The notion of a cognitive zombie is the notion of something as utterly devoid of concepts as the simplest of any of Dennett’s purely syntactical engines, but whose lack of concepts is nevertheless more perfectly undetectable than that of even the most complex and perfect of Dennett’s syntactical engines.  Is this notion even coherent?  I think not, but that is a topic for another time.

62 comments:

  1. A cognitive zombie is ridiculous... to a point. Most people don't know why they do a lot of the things that they do, so it's not completely ridiculous, just ridiculous in this case: if you see a person who finds a problem and solves it without help, and then you learn that this person has no mind, then the person who told you this is just as ridiculous. The zombie concept was probably dreamed up by someone who wonders how and why people do things without knowing why. But somehow, imitation of other people doesn't seem to get factored in. You can get very, very far in life just by watching what people do and following along... as I learned at Catholic mass. You figure out why later. Some people never figure out why, but take comfort in ritual. They aren't zombies. Quite the opposite: they're responding to rational incentives in the face of incomplete information. All of us do that.

  2. The Sphex is new to me, as is the philosophical theorizing related to it. Very interesting. Also, Hofstadter's work has sounded promising to me in the past, and Prof. Feser's quotations here confirm it. Has anyone read his book Surfaces and Essences? It's at my library, and I've been wondering if it's worth reading--it sounds like it could be a very good dissection of analogy and concept formation.

    As for cognitive zombies, they seem to be decisively ruled out by any remotely Aristotelian account of form. Oderberg puts it like this on page 90 of Real Essentialism:

    "[N]ot only does having the H2O structure metaphysically guarantee having the properties of water, but having the properties of water guarantees having the H2O structure, and so Twin Earth is metaphysically impossible. One might call this a two-way supervenience between the properties of water and the H2O structure - no difference in one without a difference in the other."

    I have argued as much in these comboxes before, on the topic of "rational machines". It is impossible to perfectly manifest certain properties (like the behaviors associated with rationality) without possessing the relevant species, and it is impossible to possess that species without falling into the various metaphysical necessities of the Porphyrian tree's genera. Putnam's XYZ, the cognitive zombie and the "rational machine" are all impossible along Aristotelian lines, for the simple reason that they manifest properties that they essentially cannot manifest.

  3. I recall that Hofstadter is a tremendous Dennett fanboy, though at least the former is an entertaining writer.

  4. When Hofstadter's wife died suddenly of a brain tumor, leaving behind two very young children, Hofstadter had a series of exchanges with Dennett trying to make sense of it. In his book I Am A Strange Loop he gave a retouched version of his side of the exchange, but he also included one response from Dennett as a prelude to that chapter.

    "There is an old racing sailboat in Maine, near where I sail, and I love to see it on the starting line with me, for it is perhaps the most beautiful sailboat I have ever seen; its name is "Desperate Lark", which I also think is beautiful. You are now embarked on a desperate lark, which is just what you should be doing right now. And your reflections are the reflections of a person who has encountered, and taken a measure of, the power of life on our sweet Earth. You'll return, restored to balance, refreshed, but it takes time to heal. We'll all be here on the shore when you come back, waiting for you."

  5. A mouse is trained to run a maze. Then a hurdle is placed in the maze and, after a few confusions, the mouse learns to leap the hurdle. Now the hurdle is removed and the mouse, reaching that point in the maze, will continue to jump as if the hurdle were still there. But this is not sphexishness. The mouse navigates primarily by smell and has "memorized" the smell of the location where the hurdle was encountered. When it encounters these olfactory cues, it will leap.

    The sphex may be doing something similar.

    I'm going to have to contemplate more on the distinction between sphexlike behavior and a cognitive zombie, as I've been thinking about an SF story.

    Possible relevant datum: One day after picking up dry cleaning, I walked back to my house, which was 2 blocks away. Along the way, I began to think of a statistical problem. The next thing I knew, I had missed the lock on the back door with my key. I had no recollection of the walk or digging the key out. Only when the routine was broken (my personal cricket was moved) did the cognitive powers kick in, drop the stat problem, and contemplate unlocking the door.

    So was I a cognitive zombie for the space of that walk? A sphex? A non-rational animal? (All disregarding the part of me that was "thinking about stats.")

  6. but he also included one response from Dennett as a prelude to that chapter

    All the poetic vagueness in the world can't make Dennett's philosophy of mind anything other than what it is: empty and so, so wrong.

  7. Dr. Feser, it's posts like these that are the reason I read you. You just took what I'd bet is normally a very complex idea and wrote of it in such a way that I reached the end of the post feeling as if I understood things perfectly.

    I don't think there's any other writer out there who can take complex concepts and disseminate them as clearly and concisely as you can.

  8. The Profeser: If it was sphexish, it wouldn’t be behaviorally identical to a normal human being, and thus by definition wouldn’t be a cognitive zombie.

    A philosophical zombie couldn't be identical to a human, but why couldn't it behave identically? Its instincts would have to be a lot more complex than a wasp's, but why couldn't it be "programmed" cleverly enough to act sufficiently like a real human? Or am I missing a technicality... the zombie couldn't have concepts, but it could still operate according to them in a derivative sense (as the functioning of any computer ultimately traces back to the concepts held by its programmer). Indeed, the same applies to the wasp, as per the Fifth Way.


    Rank Sophist: I have argued as much in these comboxes before, on the topic of "rational machines". It is impossible to perfectly manifest certain properties (like the behaviors associated with rationality) without possessing the relevant species, and it is impossible to possess that species without falling into the various metaphysical necessities of the Porphyrian tree's genera.

    I still don't see that — the Porphyrian Tree is empirical, not logically generated a priori; if we discovered something that didn't fit, we would just go back and adjust the tree accordingly to make room. (Oderberg says something to this effect in an example hypothesising that "metallic" might turn out not to be the proximate genus of gold.) Now of course an actual machine cannot be rational (because it's not even a substance), but the example previously raised was of a supposed substance that "looked" more like a "machine" than a man in its physical makeup. But God could surely make sophisticated physical bodies that were very different from human ones, and thus it's at least metaphysically possible that He could create rational beings of different species.

  9. @ TheOFloinn

    I guess you could say that part of you was a cognitive zombie (i.e. your body). This was probably Descartes's mistake: thinking that if you are in some sense different from your body, then your body exists independently of you (i.e. is a separate substance).

    For example, while it's true that water is not hydrogen, it's a mistake to think that hydrogen in water has an independent existence from the water it's a part of.

    And, believe it or not, I think Kant's ideas about analytic predicates, of all things, are a large part of why cognitive zombies are often thought to be metaphysically possible amongst modern philosophers.

    What I mean by this is the fact that Kant believed that ONLY propositions which can be "analytically" derived from the predicate are self-evident; all others must be demonstrated. Analytically derived propositions are pretty weak tea, since they really are just a restatement of the predicate in different terms. So his belief not only has the odd consequence of implying that 5+7=12 is NOT self-evident, since adding 5 makes 12 is not part of the predicate 7, but it is also self-refuting, since this principle cannot be analytically derived from the predicate! (Another example would be the fact that a three-sided polygon is not the same predicate as a triangle, even though we know self-evidently that being one entails the other, contra Kant.)

    Accepting Kant's doctrine, however, does lend great plausibility to the idea that a body which acts rationally could in fact not be rational, since "rational" cannot be analytically derived from the predicate "body which acts like it's rational."

    Drop Kant's doctrine, however, and it seems unlikely that you could really split rational behavior from rationality, although there is an additional epistemological problem of how we know it's not a fake.

  10. "But as Howard Robinson points out in the introduction to his edited volume Objections to Physicalism, Dennett’s position is a muddle. A purely syntactical engine will not even approximate a perfect semantic engine, because it will fail to be semantic at all. Syntax by itself doesn’t get you imperfect semantics; it gets you exactly zero semantics, just as the ketchup kids use for blood at Halloween time will never get you even imperfect real blood no matter how much of it you pour out."

    Prof. Feser himself has made this point many times, but it is worth repeating: this already concedes too much, as physical systems qua physical systems (with the usual modern, materialist assumptions about matter, etc.) do not even rise to the dignity of being "syntactical engines". I will just give the following quote from Searle's "The Rediscovery of the Mind", chapter 9:

    "This is a different argument from the Chinese room argument, and I should have said it ten years ago, but I did not. The Chinese argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. For the purposes of the original argument, I was simply assuming that the syntactical characterization of the computer was unproblematic. But that is a mistake. There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system."

  11. A purely syntactical engine will not even approximate a perfect semantic engine, because it will fail to be semantic at all. Syntax by itself doesn’t get you imperfect semantics; it gets you exactly zero semantics.

    Starting the new year off with a little question begging, are we?

    You call Dennett's position muddled, but consider that your ideas of "syntax" and "semantics" (not to mention "physical") seem completely muddled to me. That they are fairly commonplace ones doesn't make them any less so.

    Yes semantics is built out of "syntax" by which I suppose we mean non-semantic stuff, physical mechanisms and patterns. We don't know exactly how that works, but we have better theories than we did 100 years ago. Whereas your side has no ideas at all.

    Or perhaps I'm wrong, so please tell me your theory of semantics, how it works, and how creatures like ourselves manage to manipulate meanings?

  12. @Anonymous: What is the semantic content of H? Physically, it is two short vertical strokes and a shorter horizontal stroke at the midpoints. Syntactically, what is it? Chance channels of erosion in a mudflat? The cross-section of an I-beam? A letter in an orthographical system? Semantically, assuming it is a letter, does it represent the sound "en", the sound "aitch," or the sound "mi"? Or something else?

    There is no way to discover the semantic content from the physical object, because it is assigned to the object by a mind.

    Similarly, a "cat" may semantically represent the tackle used to raise anchor, a type of boat, an aficionado of jazz or the beat, a brand of road-grading equipment, the Central Arkansas Transportation authority, a woman named Catherine, a special quantum state, the street drug methcathinone, or any number of other things. The meaning of a semicircular line, an ogive with a belly, and a cross does not stem from the physical structure, nor from the syntactical structure.
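
    A tiny Python illustration of the same point, for what it's worth (this example is purely illustrative and assumes only the standard library; nothing in it comes from the comment itself): the very same physical bytes yield different "meanings" only because an interpreter assigns them one.

        import struct

        raw = b"CAT\x00"  # four physical bytes; nothing about them fixes a meaning

        as_text = raw[:3].decode("ascii")     # the same pattern read as three letters
        as_int = struct.unpack("<I", raw)[0]  # the same pattern read as an unsigned integer

        print(as_text, as_int)  # one physical object, two observer-assigned interpretations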

  13. Yes it is quite obvious and not in dispute that symbols by themselves don't have semantic content outside of an interpreter (mind). The issue is how the minds are constituted. I am not sure exactly what Feser's position is on this. He doesn't believe they can be composed of mere syntax or physical causality, but he also cautions against misinterpreting him to be advocating some "immaterial stuff". Not sure what is left.

  14. The real comedy with our anonymous blowhard here is that what I said about syntax/semantics was just by way of relating Dennett's own views. And last I heard, conceding an opponent's own position and going from there doesn't count as begging the question.

  15. Feser's position doesn't matter right now. The point is that a "semantic engine" cannot be reduced to a "syntactic engine."

  16. The point is that a "semantic engine" cannot be reduced to a "syntactic engine."

    Is this axiomatic somehow, or do you have a proof?

  17. Is this axiomatic somehow, or do you have a proof?

    Feser has written pretty extensively on why this should be...

  18. Mr. Green,

    Oderberg says that the Porphyrian tree becomes gradually more empirical as one gets closer to the individual substance, but that its higher categories are totally metaphysical.

    However, you seem to radically misunderstand the nature of the Porphyrian tree when you say that we could "just go back and adjust the tree accordingly to make room". The tree reflects the real essences of things. A material substance cannot be rational without first being an animal, an animal must possess vegetative traits, and vegetables must possess other, higher-up traits as well. When Oderberg suggests that "metal" might not be the proximate genus of gold, he is saying that we may have mistaken what the essence of gold is. There may be some more determinate genus into which gold falls. He is not making the totally unrelated claim--which you seem to be making--that the Porphyrian tree's tiers are all simply empirical discoveries that can be tossed out at will. It is metaphysically impossible for an animal not to possess vegetative traits or for a vegetable not to possess organicity. We could never "discover" a non-vegetative animal that would make us reorder the tree.

    Now of course an actual machine cannot be rational (because it's not even a substance), but the example previously raised was of a supposed substance that "looked" more like a "machine" than a man in its physical makeup. But God could surely make sophisticated physical bodies that were very different from human ones, and thus it's at least metaphysically possible that He could create rational beings of different species.

    It is impossible for a substance to perfectly appear to be a machine and yet be rational. The reason is simple: a machine is inorganic and non-living, whereas being organic and alive are traits that all vegetables and animals must possess. Material rational creatures necessarily possess animality and the nutritive powers, and so they must be organic and living. As a result, it is impossible for a substance to look perfectly like a machine and yet be rational.

    Not even God can change these truths. And, for what it's worth, even if there was another rational animal in the universe, it would not be a different species. Oderberg theorizes about the possibility of alien "ranimals" in Real Essentialism, and he concludes that they will be humans no matter how different from us they might look. The differences will simply be accidental, like those between central Asian peoples and west African peoples.

  19. Oderberg theorizes about the possibility of alien "ranimals" in Real Essentialism, and he concludes that they will be humans no matter how different from us they might look. The differences will simply be accidental, like those between central Asian peoples and west African peoples.

    Are you sure he said this? Because it seems completely untenable to me. What, is a rational animal that is, say, essentially two-winged supposed to be a metaphysical impossibility according to Oderberg? I don't see why it would be.

  20. "I don't see why it would be."

    Nor do I. I'm with Mr. Green on this.

  21. The aliens might well be of two distinct biological species, but as rational animals they would be of the same metaphysical species. See City of God, Book. 16, Chap. 8. For the same reason, we might have a large population of biological humans containing a single metaphysical human, whom we might call "Adam."

  22. "The aliens might well be of two distinct biological species, but as rational animals they would be of the same metaphysical species."

    Agreed.

  23. Scott,

    Agreed.

    If you agree with that, then I'm not sure why you disagree with my argument. The Porphyrian tree deals with metaphysical species only. Biological species, as science currently understands them, are determined cladistically. Two substances of the same metaphysical species with totally different cladistic histories could certainly exist, as Oderberg himself argues.

    George,

    Are you sure he said this?

    RE page 104:

    The better answer, I claim, is that any truly rational animal, if such were metaphysically possible, would still be human. Hence, even if it did not have the body plan or physical constitution were [sic] are familiar with, still, if it were genuinely an animal and genuinely rational it would in fact be one of us; which would only go to show that having what is now thought of as the specifically human body plan or genotype, and so on, were not essential to humans after all, but only contingent accidents much like race, hair colour, or skin colour.

  24. Feser has written pretty extensively on why this should be...

    Oh, well glad that is settled then.

  25. “Biological species, as science currently understands them, are determined cladistically.”

    While it is true that the cladistic theory is certainly the most popular theory of biological species amongst biologists right now, it is by no means the only one, as David Oderberg noted in his book; there are still taxonomist theories and other types of theories out there.

    While I can agree with you that the cladistic theory is all but worthless as a classification system of biological species (what does the origin of species have to do with what a species is?), I don’t think you have given the other common theories their due.


    “What, is a rational animal that is, say, essentially two-winged supposed to be a metaphysical impossibility according to Oderberg? I don't see why it would be.”

    Granting that I’m a little unsure of this issue as well, I think this has to do with the fact that Oderberg is talking about metaphysical infima essences, i.e. the essences of metaphysical species, which have a very specific context in AT essentialism.

    In other words, I think Oderberg would take “two-winged” as an accident of a being, and so that can’t be part of its essence at all. Now of course, we could construct a tree of “accidental essences” which divides “two-winged” and “not two-winged” beings, as we could divide all beings into “red” and “not-red”, but that’s just not the context that Oderberg is talking about.

    Of course, this leaves open the question of where accidents begin, and specific differences end, but asking why you can’t put the former onto a metaphysical classification of the true essences of substances, which uses the latter, is running cross-purposes.

  26. You were right, RS, he did say it.

    He's completely wrong, of course. The term "rational animal" is not an exhaustive description of the human essence; it just suffices as a definition thereof because there are no other known rational animals. In the same way, the term "winged animal" would suffice as the definition of a duck if there were no other birds.

    Nevertheless, I can't help but think that we may still be missing something in Oderberg's meaning. For instance, what does he mean by, "any truly rational animal, if such were metaphysically possible. . ."? Why would there be any question that such a being is metaphysically possible?


    This all reminds me of Mortimer Adler's wacky thesis that there is actually only one species of mineral, one species of vegetable, and one species of non-rational animal.

  27. “Material rational creatures necessarily possess animality and the nutritive powers, and so they must be organic and living.”

    While I do think this true a priori of material rational substances by at least the Principle of Finality**, this claim requires some qualification, lest it be misunderstood.

    Firstly, we’re talking about substances here, which leaves open the possibility that an immaterial rational substance might be conjoined in a Cartesian fashion to a body, just not essentially conjoined to one. Of course, this is not its natural state, but Aquinas thought, if I’m not mistaken, that this is how the angels are punished in hell.

    Secondly, it might be thought that people in a coma are a counter-example here, since they can’t move their bodies. For various reasons, this is not a counter-example, since they still might, arguably, have the capacity for moving, like when we say a sleeping person is still rational even though they are not currently exercising that capacity. A weaker point is that it still might arguably be coherent for a rational animal to lose its animality at some point, and become a rational vegetable, but it is not coherent for it to have never been a rational animal.

    The conceptual waters are definitely deep here, so I won’t pretend to have fully answered the question, but I hope I’ve outlined the basics of the issue for everyone.


    ** The basic idea being that if an essentially material rational substance was conjoined to a merely material body, then its body would serve no purpose to its rationality and vice-versa, since its body couldn’t produce phantasms or benefit its rationality and its rationality couldn’t benefit the body, since it couldn’t change it. These points of course could use further development.

  28. @ George R.

    While I’m not going to say I agree with Oderberg, he did address that very understanding (giving what I’m going to call “practical essences” of animals) that you share with Anscombe, in the very same chapter of his book Real Essentialism that Rank Sophist quoted, so I think you’re going to have to give more than a curt dismissal to be fair to Oderberg.

  29. Timotheos,

    While I can agree with you that the cladistic theory is all but worthless as a classification system of biological species (what does the origin of species have to do with what a species is?), I don’t think you have given the other common theories their due.

    True enough, and I am, of course, aware of the other theories of species. I simply assumed that by "biological species" TOF was referring to "species in the scientific sense", which is (far and away) cladistic.

    The conceptual waters are definitely deep here, so I won’t pretend to have fully answered the question, but I hope I’ve outlined the basics of the issue for everyone.

    Thank you for the clarifications. I was aware of the points you raised already, but this should be helpful for other people reading.

  30. Rank Sophist: When Oderberg suggests that "metal" might not be the proximate genus of gold, he is saying that we may have mistaken what the essence of gold is.

    Right, and I was sloppy in making it sound as though it were entirely arbitrary. (Indeed, we may not be exactly right about gold, but we aren't going to discover that, say, it's actually a gas.) Of course, anything that is not up for empirical grabs will have a metaphysical argument why, which will explain why it takes the place it does in the tree (not the other way around). And there are metaphysical arguments against, say, a "rational rock" — a stony body does not seem to allow any way to exercise rationality, so its essence would be self-frustrating — but nothing like that applies to George's example of winged rational animals.

    I naturally agree that biological classifications are irrelevant here. (There's no reason why metaphysical classifications need be unhelpful to biologists, but there's no reason why they would have to be helpful either, esp. genealogical classifications.) And I also readily agree that all rational animals could belong to the same species. There's no reason why winged vs. wingless could not be a matter of accidents. I just don't see any argument to show that such differences must be accidental.



    The reason is simple: a machine is inorganic and non-living, whereas being organic and alive are traits that all vegetables and animals must possess.

    Sure, which is why the key point is that this hypothetical substance would merely "look" like a machine. But I don't think this was clearly explained: human beings "look" like piles of particles. They aren't, but a physicist can treat a person as though he were just a bunch of particles and draw perfectly correct conclusions because human substances virtually look like lots of particles. So there might likewise be a substance that had virtual gears and vacuum tubes (i.e. looked like some kind of robot) even though it was in fact a rational being.

    Furthermore, it is at least theoretically possible to build a computer that fakes intelligence. Not one that is really intelligent, of course, but one that can do a good enough simulation so as to be empirically indistinguishable from a real human. Now maybe it will turn out that because of, say, the actual laws of physics, fake intelligences or vacuum-tube-looking substances are not in fact possible. That's fine, but that's something to be discovered by physics, not by metaphysics. God could have created a world in which the laws of nature could have allowed such things, or so I argue.

  31. George R.: This all reminds me of Mortimer Adler's wacky thesis that there is actually only one species of mineral, one species of vegetable, and one species of non-rational animal.

    Incidentally, I agree that that's wacky, but not impossible. (I think it's "wacky" in the sort of way that solipsism is: it's not metaphysically impossible for everyone but me to be an illusion, but it is unreasonable actually to believe that.) After all, an animal has to be essentially animalistic: that's the base line, the minimum possible essence for something to be an animal in the first place; but all the rest could be accidents, as far as I can see. And maybe in some sense it's "suitable" for "rational animal" to be the infima species of intellectual creatures; but surely God can choose various combinations of what properties He wants to be essential to a certain nature.

  32. It's both wacky and utterly impossible, Mr. Green.

    The fact that all the parts of living creatures are accidents is undeniable. But that does not mean that there do not have to be parts that are caused by the essence of the thing. For example, say there is an animal that is essentially two-legged, and because of this it has two legs. Now the two legs are accidents, but they are called proper accidents because they belong to the thing by virtue of both its essence and certain determinate matter disposed to receive their forms.

    Now for the proof that there must be parts that are proper accidents.

    Adler says that all animals are the same substance, “animal.” And it is the substantial form that causes a substance to be what it is. Therefore, the substantial form of the animal causes the thing to be an animal. Now an animal is so called because it is capable of sense perception. But in order to have the power of sense perception the animal must have sense organs, such as eyes and ears. Now the substantial form either causes the animal to have eyes and ears or it does not. If it does, then it is the cause of certain parts of the thing. But if it does not, in what possible way can it be said that the substantial form of the animal causes it to be an animal? In no possible way. Therefore, the substantial form of an animal must cause it to have sense organs, and these cannot be purely accidental like hair color and skin color.

  33. Mr. Green,

    but nothing like that applies to George's example of winged rational animals.

    I wasn't trying to provide a counterargument to George's point. I'm sympathetic to Oderberg's view, on this issue.

    I just don't see any argument to show that such differences must be accidental.

    See RE 102-105.

    Sure, which is why the key point is that this hypothetical substance would merely "look" like a machine. But I don't think this was clearly explained: human beings "look" like piles of particles. They aren't, but a physicist can treat a person as though he were just a bunch of particles and draw perfectly correct conclusions because human substances virtually look like lots of particles. So there might likewise be a substance that had virtual gears and vacuum tubes (i.e. looked like some kind of robot) even though it was in fact a rational being.

    You're equivocating, here. Human beings look like piles of particles only to people with an ideological commitment to eliminativism regarding macroscopic objects. Gears and vacuum tubes are wholly unrelated.

    Further, you're missing a critical point. Gears and vacuum tubes are inorganic by nature, and thus non-living. Hence they cannot have nutritive or sensitive powers. The inorganic particles that exist virtually within macroscopic organic substances are, as should be obvious, parts of organic substances. Your rational machine would not be organic unless the vacuum tubes and gears were simply inorganic accidents or virtualities within a wider organic substance, much like pacemakers and metal plates are incorporated into human bodies.

    And, unless you can make your substance organic, it cannot possess nutritive, sentient or rational powers. This is metaphysically necessary given the Porphyrian tree.

    Not one that is really intelligent, of course, but one that can do a good enough simulation so as to be empirically indistinguishable from a real human.

    Which is another metaphysical impossibility, as I have argued. Again, refer to Oderberg's quote above regarding the two-way supervenience of properties and species. A substance or artificial construct cannot present a perfect simulation of a range of properties that it does not possess essentially. There will always necessarily be holes. Putnam's XYZ cannot exist, and neither can a perfectly simulated rationality.

  34. Perhaps the sphex thinks, "Hmm, somebody's been messin' with my cricket! Better check the nest again too."

  35. George R.: Now the substantial form either causes the animal to have eyes and ears or it does not. If it does, then it is the cause of certain parts of the thing. But if it does not, in what possible way can it be said that the substantial form of the animal causes it to be an animal?

    But how is that a problem? Of course the substantial nature will cause the animal to have eyes and/or ears, and so on. As you say, the essence must be to have some sort of sensory organs. But not to have any particular organs — any more than having eyes by nature entails having any particular sort of eyes (e.g. of a particular colour or a particular shape).

  36. Rank Sophist: >"I just don't see any argument to show that such differences must be accidental."
    See RE 102-105.


    Well, I don't get Oderberg's point there. He says, "I conclude that we do not have a good reason for abandoning the definition of man as a rational animal", and I'm happy with that. I fully accept that any rational animals we might discover could quite easily all belong to the same species. But I just don't see how what he says constitutes a metaphysical requirement. A "good reason" is not an "inescapable truth".


    Your rational machine would not be organic unless the vacuum tubes and gears were simply inorganic accidents or virtualities within a wider organic substance

    Yes, that's what I said — there aren't really any gears or tubes, only virtual effects within the actual organism.


    A substance or artificial construct cannot present a perfect simulation of a range of properties that it does not possess essentially.

    That statement sounds too broad, but it doesn't matter because I'm not talking about simulating rationality — only certain effects thereof, such as making noises that sound like English words. And this of course is possible; as I've already demonstrated with my example about answering machines (true, they don't fool anyone for more than a few seconds, but we can keep coming up with cleverer tricks that fool people for longer and longer). I guess talk of a "perfect simulation" is trivially impossible, in that if being a "perfect" fake means having every single attribute of X, then the "fake" would simply be an X and not a fake. But that's why I explicitly referred to a "good enough" simulation, not a "perfect" one.

  37. Mr. Green,

    Yes, that's what I said — there aren't really any gears or tubes, only virtual effects within the actual organism.

    I think there's been a misunderstanding. When I used the word "organic", I was not referring to "organisms": I was comparing organic and inorganic substances, which can also be called animate and inanimate substances (as they are known on the tree). Animate things possess immanent causation; inanimate things do not. Pacemakers and gears and vacuum tubes are inanimate. The only way that a gear or a vacuum tube could be a virtual component of a living substance is if that substance was animate, which is to say a vegetable or animal. There is no such thing as a vegetable or animal composed entirely of minerals, because animacy (nutritivity) and inanimacy (minerality) are contraries. Again, material rational substances must possess animacy and sentience, as the tree tells us. It seems, then, that it is metaphysically impossible for your theoretical rational machine to exist.

    I guess talk of a "perfect simulation" is trivially impossible, in that if being a "perfect" fake means having every single attribute of X, then the "fake" would simply be an X and not a fake. But that's why I explicitly referred to a "good enough" simulation, not a "perfect" one.

    You have to carefully distinguish between the ontological and epistemological aspects of this argument, because they are tightly connected. Ontologically, we are talking about an X and a fake-X, which possesses some semblance of the traits associated with X. We seem to be agreed that there cannot be an X and a fake-X that manifest 100% identical attributes. Epistemologically, this has major consequences. Your "good enough" fake becomes entirely relative. Fake-X may be good enough to fool the average observer, but, because it is ontologically a fake, it will always fail to present a perfect simulation. As such, it is epistemologically possible to distinguish between X and fake-X, if one understands the nature of X well enough. Hence it is impossible for a cognitive zombie or an otherwise perfectly-faked rationality to exist.

  38. Rank Sophist: It seems, then, that it is metaphysically impossible for your theoretical rational machine to exist.

    I think there's still a misunderstanding. It isn't a machine (though it might be mistaken for one because of its virtual appearances).


    We seem to be agreed that there cannot be an X and a fake-X that manifest 100% identical attributes.

    I don't know what you mean by "manifest". A fake-X cannot have all the properties of an X. However, something can produce the same effects as an X — that's just what it means for it to be virtually X. The word "manifest" sounds like it applies to the effects of something, so in that sense it could "manifest" itself as an X. If you mean something else, I'm not sure what that is.


    Fake-X may be good enough to fool the average observer, but, because it is ontologically a fake, it will always fail to present a perfect simulation. As such, it is epistemologically possible to distinguish between X and fake-X, if one understands the nature of X well enough.

    What do you mean by "perfect"? Some idealised situation that includes things like unlimited time and resources, infallible judgements, etc.? What is this guaranteed method for telling whether a proposed intelligence is real or fake? Clearly under practical limitations, a fake is possible.

    Maybe the issue is what "fake intelligence" means. In a way, even a book would count, because it has no intellect itself (of course, since it's not even a substance), but it has intelligent effects because its author was intelligent — it has intelligent effects in a derived way. So I am not claiming that something devoid of even derived meaning can simulate intelligence, only that something derived from intelligence can continue to exercise its effects even after the original designing intellect is gone. In principle, all it requires is that the effects of intelligence (such as speech, writing, etc.) can be recorded, which clearly is metaphysically possible because we do record them all the time.

  39. There really isn't any such thing as an intelligent machine. We don't have anything remotely close to a machine that you can point to and say "it is intelligent". So-called "artificial intelligence" says what it is and what it isn't: i.e. artificial (what it is) and what it isn't (intelligent). A computer or a machine is not and cannot be "an intellect," since we can't artificially produce this; intelligence is fundamentally immaterial, and necessarily so. This is why some people are so nervous about human cloning. If a clone is intelligent it means God gave it a rational soul; ultimately, however, it is for God to decide by what criteria. We can't replicate it artificially - only imitate or simulate it.

    Why do you think people are so utterly and deeply unnerved if or when a common animal seems to display signs of anything remotely like actual human intelligence? This is always disturbing. Imagine if, for example, a demon possesses an animal and it looks at you with sinister intent and clearly displays this - looks at you not as a possible meal but as a toy to be manipulated for its own pleasure or satisfaction. Manipulation is something only rational animals - man - can do. Hence even if an animal only looks like it's being sadistic or wants to toy with you in that way it will freak you right out in reality, because it seems to actually possess it. Artificial intelligences do not freak us out in the same way because we can usually come to realize it is just fakery and not actually there or present.

  40. Mr. Green,

    I think there's still a misunderstanding. It isn't a machine (though it might be mistaken for one because of its virtual appearances).

    This is beyond appearances. It contains no living matter. Its body is fully mineral--inanimate. Thus it cannot progress through the animate -> sentient -> rational genera on the tree. There is no such thing as an inanimate body that possesses animacy. This is pretty basic essentialism.

    However, something can produce the same effects as an X — that's just what it means for it to be virtually X.

    That is not what "virtually" means, though. The virtual distinction is what we appeal to when we analyze the genera within a really whole and singular species, for example. Nothing is "virtually" some other thing: virtuality is a type of containment of one thing by another.

    No fake-X can produce the same effects as X for the simple reason that it is not X, and so it does not possess the same range of powers and properties as X. Fake-X can never be "virtually" X: this is incoherent. It could virtually contain X, but then it wouldn't be a fake-X any more than the sun is a fake microwave oven.

    What do you mean by "perfect"? Some idealised situation that includes things like unlimited time and resources, infallible judgements, etc.? What is this guaranteed method for telling whether a proposed intelligence is real or fake? Clearly under practical limitations, a fake is possible.

    I mean, "A fake that cannot be detected by any expert in the understanding of rationality." An expert is someone with a very wide knowledge of the rational essence, who would be able to spot a non-rational entity by analyzing its properties.

    So I am not claiming that something devoid of even derived meaning can simulate intelligence, only that something derived from intelligence can continue to exercise its effects even after the original designing intellect is gone.

    Well, a cognitive zombie has no derived intelligence. And a rational machine could certainly continue to execute effects, but they would be imperfect imitations of the intelligence that made them, for the simple reason that a computer is an imperfect imitation of its creator.

  41. Is a cognitive zombie a coherent concept?

    Well, the simplest way I can think of to construct one would be to use a look-up table. Construct a list of all possible inputs - sensory experiences, questions, conversational topics, and recent memories - and set down a response for each. Store them in a long list. Then when the zombie experiences an input in the real world, it simply finds the closest match in its list and responds accordingly.

    It doesn't matter if the list would require more storage space than all the computers on Earth. The question is whether such a list could be constructed *in principle*, as a philosophical thought experiment. I don't see any reason why not.

    It would be hugely inefficient, of course, and any practical AI would of course build intermediate concepts and models of the world with which to plan and problem solve. But the idea of a simulator that works by table look-up instead seems to me to be at least *logically* coherent, although physically impractical.
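
    A minimal Python sketch of that look-up idea (purely illustrative: a toy three-entry table and a nearest-match rule stand in for the astronomically large list the thought experiment imagines; the names LOOKUP_TABLE and respond are invented for this sketch):

        from difflib import get_close_matches

        # In principle the table would pair every possible input (sensory state,
        # question, recent memory) with a canned response; here it is only a stub.
        LOOKUP_TABLE = {
            "hello": "Hello! How are you today?",
            "what is your name": "People call me Sphexy.",
            "did someone move my cricket": "Let me check the burrow again.",
        }

        def respond(observed_input: str) -> str:
            """Return the canned response for the closest stored input."""
            matches = get_close_matches(observed_input.lower(), list(LOOKUP_TABLE), n=1)
            # No understanding is involved: the "zombie" only finds the nearest key.
            return LOOKUP_TABLE[matches[0]] if matches else "..."

        print(respond("Did someone move my cricket?"))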

  42. Mr. Green writes:
    But how is that a problem? Of course the substantial nature will cause the animal to have eyes and/or ears, and so on. As you say, the essence must be to have some sort of sensory organs. But not to have any particular organs — any more than having eyes by nature entails having any particular sort of eyes (e.g. of a particular colour or a particular shape).

    I think what you're saying here, Mr. Green, and correct me if I'm wrong, is that it's possible that the form 'animal' be the cause of sense organs in general, but that an altogether different principle (determinate matter?) will be the cause of the sense organs being particular sense organs, such as eyes and ears, etc. This would seem to be the best course of defense of Adler's position.

    But here's why it doesn't work.

    A generic sense organ cannot be instantiated in reality, therefore it's absurd to suggest that something can cause it. If I told you to draw a plane figure without drawing a specific plane figure, could you do it? No. Also, if I told you to draw a triangle without drawing a plane figure, could you do that? No. Nothing can cause a genus alone, and that which causes the lowest species also causes the genus. Therefore, if some other principle besides the form 'animal' is the cause of particular sense organs, that principle is also the cause of the genus 'sense organ;' and if the form 'animal' is not the cause of the sense organs being particular sense organs, neither is it the cause of the sense organs in general, except maybe accidentally.

    Therefore, the substantial forms of animals must be the causes of specific sense organs, and where these sense organs are different, so is the substantial form that caused them.

    I think the points I'm making here are important because there is unfortunately a general cluelessness today about what substantial form is and what it does, even among Thomists and Aristotelians. The cause (and effect) of this cluelessness is, of course, the rise of the perverse cult of evolutionism.

  43. George R.: A generic sense organ cannot be instantiated in reality, therefore it's absurd to suggest that something can cause it. If I told you to draw a plane figure without drawing a specific plane figure, could you do it?

    But nor could I make an eye that was no colour — it has to be some colour, even though whatever colour it ends up having is an accident. A generic colour cannot be instantiated any more than a generic sense. Why cannot the nature of an animal be such that it causes specific sense organs in each instance, without causing the same organs in every animal?

    Though even if the difference between the various senses required different essences, that would seem to allow most animals to be a single species. Are there any animals with no sense of touch? (Sponges, perhaps; though they can react to stimuli, which seems to be a sort of sense. On the other hand, some plants can react to being touched. I think Aristotle classified sponges as animals, but just barely!) So if there is a specific nature that explicitly causes a sense of touch, then all animals with a sense of touch could belong to that species, with other differences, including additional senses, being accidents. (Having one sense is enough to be an animal, so extra senses would be a bonus.)

    there is unfortunately a general cluelessness today about what substantial form is and what it does, even among Thomists and Aristotelians. The cause (and effect) of this cluelessness is, of course, the rise of the perverse cult of evolutionism.

    I daresay that's true to an extent. Of course there are all sorts of other factors; and evolution is certainly widely misunderstood, being both overestimated and underestimated (sometimes simultaneously!).

    ReplyDelete
  44. Rank Sophist: There is no such thing as an inanimate body that possesses animacy. This is pretty basic essentialism.

    Yes. And the point of the example was a creature that IS animate, is living, is substantial, etc.


    It could virtually contain X, but then it wouldn't be a fake-X any more than the sun is a fake microwave oven.

    Hm, you're right, a fake has to be missing some relevant property or effect for it to be properly called a "fake". So yes, a robot (in this case) is a fake "intelligence" because it cannot have all the same powers as something with an intellect. However, human beings cannot perceive all the attributes of an intellect directly. We're not telepathic, so we have to extrapolate from indirect evidence such as interpreting sounds in the air or squiggles on a page. And machines really can make sounds, even the same sounds as a man makes (cf. answering-machine again!). Thus the question is whether a machine can be made to produce the sort of sounds indirectly (through programming) that a man can make directly (through exercising his intellect).

    I mean, "A fake that cannot be detected by any expert in the understanding of rationality." An expert is someone with a very wide knowledge of the rational essence, who would be able to spot a non-rational entity by analyzing its properties.

    OK, so what knowledge does such an expert have that will allow him to distinguish a computer from a person reliably and accurately? Can you give an example of doing this in practice?

    Well, a cognitive zombie has no derived intelligence.

    Why wouldn't it? Indeed, how couldn't it? A cognitive zombie is an animal, so it must have instincts, or even be trainable like Clever Hans. That's an indirect or derived application of intelligence (of God's, or the trainer's). There's nothing metaphysically impossible about an animal with such sophisticated instincts or capability for training that it could, say, pass the Turing test.

    Similarly for a machine. A computer will always in some sense be an "imperfect imitation of its creator", but again, barring things like telepathy, all communication between humans is imperfect anyway. That's why I keep referring to a "good enough" simulation. Since we are limited to indirectly deducing intelligence within certain practical limits, a fake has to be sufficiently good only to surpass those limits. And that's easy to do in principle — in fact, I was going to use the same example NiV gave, of making an exhaustive list of situations and responses. The list would be huge, but finite, and thus programmable in principle. Given different laws of physics — or clever enough shortcuts given the real laws of nature — anyone could be fooled, consistently.

    ReplyDelete
  45. But nor could I make an eye that was no colour — it has to be some colour, even though whatever colour it ends up having is an accident.

    Yes, but while you could make a blue eye, you could not make a blue generic sense-organ or a blue generic plane-figure. That’s because the eye is an essence, whereas a generic sense-organ is not an essence but a genus, which cannot be instantiated by adding accidents such as color.

    A generic colour cannot be instantiated any more than a generic sense.

    I never claimed otherwise.

    Why cannot the nature of an animal be such that it causes specific sense organs in each instance, without causing the same organs in every animal?

    Now that’s a good one. So you’re saying that maybe the same form causes all the sense-organs that exist in nature (and there are probably hundreds of different essential forms of them), but since matter only receives those forms it is disposed to receive, each creature only receives a few of them. Is that right?

    Well, I don’t know if I could absolutely disprove that, but I’ll give it a shot.

    First, I would have to say that the form of each animal would then not be the form of an animal at all, but the form of an insane monstrosity, which would only accidentally be the cause of animals. Moreover, we see that when matter is not disposed to receive a certain form, it does not usually happen in nature that it fails to receive the form at all, but it rather receives it in a deformed manner. Therefore, such a substantial form would only produce monstrosities.

    Though even if the difference between the various senses required different essences, that would seem to allow most animals to be a single species. Are there any animals with no sense of touch?

    That’s a good one, too. I believe it's true that all animals do have the sense of touch. Aristotle even called it the “foundational sense,” or something like that. So if the form ‘animal’ only caused the sense of touch, it could theoretically be the form of all animals. Right?

    Hmmm, I’ll have to get back to you on that one.

    ReplyDelete
  46. "So yes, a robot (in this case) is a fake "intelligence" because it cannot have all the same powers a something with an intellect."

    That depends on your definition of "intelligence", or what you think the powers of an intellect actually are. Mine is that intelligence is a general problem-solving capability, and as such there's no reason why animals or machines couldn't have it, but evidently other people here are operating with a different definition of "intelligence" to mine.

    Some people count it as the capacity to understand concepts, but that definition raises more questions than it provides answers.

    An AI could only ever be a fake 'human' (which causes much of the difficulty with Turing tests), and you can argue about whether not knowing if they have qualia makes them a fake 'mind'. But it would, in my view, have to be a *genuine* intelligence, with all the powers required of an intellect.

    ReplyDelete
  47. Mr. Green,
    With regard to the objection that, since all animals have the sense of touch, they could theoretically be the same substance, I would reply in several ways.

    First, even though the senses are specified by their object, and, therefore, the sense of touch is specifically the same for all animals, nevertheless it cannot therefore be said that the sense-organs required for a specific sense must be specifically the same; for the sense-organs themselves are further specified by the mode of achieving their object. For example, in higher-order animals the sense of touch is achieved by the operation of nerves in the flesh, whereas in insects it is achieved by antennae. These two modes are obviously specifically different.

    Second, it seems absurd to posit that the sense of touch is caused by the form of the thing, whereas the sense of sight is caused by material accidents, since the power of sight seems far more formal (being, in fact, an almost spiritual activity) than the power of touch, which of all the senses is most closely related to matter.

    Third, although the form ‘animal’ could still be called such by simply causing the sense of touch, remember that it must also be the cause of the animal’s being a living thing. Therefore, the form must also be the cause of the animal’s vital organs, which make life possible for it -- and not only the organs themselves, but also the coordination between them and the rest of the animal’s system, without which it could not live. Now we know that the specific vital organs, and how they relate to their proper living systems, are vastly different in different animals. Therefore, the forms of different animals are different.

    ReplyDelete
  48. Mr. Green,

    Yes. And the point of the example was a creature that IS animate, is living, is substantial, etc.

    Then you seem to be proposing the possibility of an animate being composed entirely of inanimate matter--indeed, lacking any animate features whatsoever, outside of its essence. And this is a clear metaphysical impossibility given the two-way supervenience of properties and essences.

    Thus the question is whether a machine can be made to produce the sort of sounds indirectly (through programming) that a man can make directly (through exercising his intellect).

    And it is clear that some semblance of those sounds can be produced. But an epistemologically indistinguishable copy is, again, impossible. You point to the interior traits that a real intellect has--and this is true. But even a complete copy of the exterior traits of the intellect cannot be made, for the simple reason that an intellect is an intellect and a computer is a computer. In more complete terms, an intellect is an immaterial, rational existent capable of free deduction, while a computer is a material, rule-following existent that reacts to stimuli in a pre-determined fashion. Any faux-rational intelligence would be subject to the quus paradox (among other problems), and a sufficiently knowledgeable subject could therefore find holes in the exterior traits of that intelligence. It would simply be a matter of wrangling the AI into one of its metaphysically necessary blindspots, which no amount of programming could overcome.

    OK, so what knowledge does such an expert have that will allow him to distinguish a computer from a person reliably and accurately? Can you give an example of doing this in practice?

    See above.

    Why wouldn't it? Indeed, how couldn't it? A cognitive zombie is an animal, so it must have instincts, or even be trainable like Clever Hans. That's an indirect or derived application of intelligence (of God's, or the trainer's).

    I suppose this is true. I had something else in mind when I wrote that--your counterexample is valid.

    Since we are limited to indirectly deducing intelligence within certain practical limits, a fake has to be sufficiently good only to surpass those limits. And that's easy to do in principle — in fact, I was going to use the same example NiV gave, of making an exhaustive list of situations and responses. The list would be huge, but finite, and thus programmable in principle.

    It would not be programmable in principle, again, because there are scenarios to which a rule-following "intelligence" cannot react believably. These are the metaphysically necessary blindspots that any AI is going to have, because it lacks the essential makeup to overcome them.

    ReplyDelete
  49. " It would simply be a matter of wrangling the AI into one of its metaphysically necessary blindspots, which no amount of programming could overcome."

    Could you give some more detail on exactly how to do that?

    And since all humans are fallible (apart possibly from the Pope ex cathedra), doesn't the same apply to humans? How do you distinguish a human blindspot from a machine blindspot?

    "It would not be programmable in principle, again, because there are scenarios to which a rule-following "intelligence" cannot react believably."

    Such as?

    ReplyDelete
  50. It would not be programmable in principle, again, because there are scenarios to which a rule-following "intelligence" cannot react believably.

    First, if there are scenarios where the AI runs into trouble, it should be able to do a couple of different things, none of which involve the blue screen of death. It can ask for more information, it can say it doesn't know, or it can say that the problem doesn't make sense. All of those are believable responses when humans encounter a blind spot or receive a "does not compute" error.

    Second, you have a mistaken conception of AI as some sort of rigid closed system. But real AI would be like fluid human intelligence since it is always in learning mode.

    ReplyDelete
  51. @Green, @rank sophist

    "Any faux-rational intelligence would be subject to the quus paradox (among other problems), and a sufficiently knowledgeable subject could therefore find holes in the exterior traits of that intelligence."

    I doubt that there is any reliable way of determining that one is suffering from the quus paradox. Which is to say that there is no algorithm which would make that determination and be certain to halt.

    The reason being that if there were, then the machine could be programmed to evade the interrogation using the same rules the expert was using to interrogate.

    So even if @rank sophist is right he is never going to be able to convince @Green in this way because distinguishing genuine intelligence from faux intelligence is not a thing that can be explained with sufficient precision.

    So @Green is asking too much. @rank's argument hinges on the existence of an object which is fundamentally indescribable.

    So @rank sophist cannot convince @Green by construction, and @Green cannot convince @rank sophist by arguing that the non-existence of a description is equivalent to the non-existence of the thing (to do so would, it seems to me, be begging the question).

    Looks like a stalemate to me.

    ReplyDelete
  52. NiV,

    Could you give some more detail on exactly how to do that?

    And since all humans are fallible (apart possibly from the Pope ex cathedra), doesn't the same apply to humans? How do you distinguish a human blindspot from a machine blindspot?


    The blindspot in question isn't simply a lack of knowledge or a quirk. It would vary on a case by case basis, particularly depending on the AI's level of advancement. It is very easy to "break" a current AI simply by making a leap outside of its range of understanding. For example, look at this conversation I just had with Cleverbot:

    Me: Cheese.
    CB: What kind of cheese?
    Me: Smile.
    CB: *smiles*.
    Me: It wasn't a kind of cheese.
    CB: So stop being French.

    By its second response, it's already revealed itself incapable of understanding and responding to context in the way that humans do. By its third response, it's gone totally off the rails. More advanced AI would possibly (and only possibly) be able to follow my train of thought, but, even then, it could be broken.

    Step2,

    Second, you have a mistaken conception of AI as some sort of rigid closed system. But real AI would be like fluid human intelligence since it is always in learning mode.

    Certainly it is in learning mode. No one claimed otherwise. But all AIs are rigid, closed systems in the sense of being wholly dependent on prior data entry. They "learn" only by compiling more data. Humans learn by unraveling the virtual implications of synderesis, via both data collection and rational deduction. Among other things, they are capable of reaching "spontaneous", unpredictable conclusions that a machine can only copy after the fact.

    ReplyDelete
  53. George R.: Moreover, we see that when matter is not disposed to receive a certain form, it does not usually happen in nature that it fails to receive the form at all, but it rather receives it in a deformed manner. Therefore, such a substantial form would only produce monstrosities.

    Thanks, those are good answers. Since I'm trying to push this as far as possible, let me try one more response about monsters: what if we accept that all different kinds of animals would have to be monstrosities on this view, but explain it along the lines of Aquinas's explanation of females. That is, given how the biology of his day considered females to result from defective development of a male, Thomas points out that there's no reason God cannot take advantage of defects in a purely biological respect as part of His deliberate plan. So likewise, could not God have planned for the necessary mutations to result in different animals?

    (This has an interesting evolutionary aspect: certain biologists try to brush off all the amazing "coincidences" that keep cropping up in evolution, such as sight "independently re-evolving" many times over, or genetic capabilities appearing in primitive animals that are not fully utilised until much later down the proposed evolutionary chain... of course, they are desperate not to acknowledge anything more than coincidence in case it sounds like ID!)

    ReplyDelete
  54. Rank Sophist: Then you seem to be proposing the possibility of an animate being composed entirely of inanimate matter--indeed, lacking any animate features whatsoever, outside of its essence.

    Maybe I should try to track down the original example. We seem to be confused over two different scenarios.


    Any faux-rational intelligence would be subject to the quus paradox (among other problems), and a sufficiently knowledgeable subject could therefore find holes in the exterior traits of that intelligence.

    Having holes and knowing them are two different things. I'm subject to the quus-paradox from your point of view, because although I can know that I have a specific concept of addition in mind, you cannot. You can observe only my external actions, so epistemologically, you can never know for sure whether I'm thinking "plus" or "quus" (or whether I am not thinking at all, but acting in a purely mechanical way... perhaps by some rote instinct for doing [qu]addition).

    It would not be programmable in principle, again, because there are scenarios to which a rule-following "intelligence" cannot react believably.

    No, that contradicts the setup. Remember, I started by making a list of all the things you could ever say to the machine, and then I simply recorded my own responses. The machine is (re)producing my responses to whatever you say, so by definition, they are "believable" reactions.

    (Well, assuming that my responses are believable as coming from an intelligence... of course I shouldn't be too presumptuous in that respect! But as Step2 points out, saying "I don't know" or "Oops, let's try that again" are legitimate human reactions. That's another reason I keep insisting on "good enough in practice". It may be possible to detect impostors once we're in heaven — where neither you nor any person you're talking to can make mistakes — but here and now, there are lots of rough edges that we can take advantage of to "cheat" a little.)

    Among other things, they are capable of reaching "spontaneous", unpredictable conclusions that a machine can only copy after the fact.

    True (up to a point?), but also subject to practical limitations. After all, that's why this scenario was defined in terms of "copying after the fact". The only way around that would be to come up with a question/challenge that nobody had ever thought of before. And if it's so clever that I wasn't able to predict it when programming the computer in the first place, then maybe it's just too clever for me to answer at all! For example, if you ask the machine to spontaneously write a brand new Mozart symphony, you will have stumped it. But you will also have stumped me, because there's no way I could ever compose a Mozart symphony either... thus your test is unreliable, being subject to false negatives.

    ReplyDelete
  55. NiV: But it would, in my view, have to be a *genuine* intelligence, with all the powers required of an intellect.

    Well, in a Thomistic context, the point is whether something has an intellect, i.e. is capable of receiving a form intentionally (as opposed to having a form materially). The relevant power is "holding a form intentionally", and something has to be a substance to have an intellect. So "machine intelligence" is literally a self-contradiction, which is why the best we could have is a machine that fakes certain possibly-intelligent effects. If instead we are interested in "problem-solving" in a very general sense, then yes, machines can do that. (If your problem is "move some heavy objects" or "help me perform some arithmetical calculations", then machines do that sort of thing all the time.)


    Reighley: So @rank sophist cannot convince @Green by construction, and @Green cannot convince @rank sophist by arguing that the non-existence of a description is equivalent to the non-existence of the thing

    That is true — but I'm not relying only on the lack of a construction. I'm claiming to provide a construction that demonstrates the metaphysical (though not physical) possibility (by means of pre-recording answers to every possible question). If RS can provide a counter-construction, that might help show a place where mine goes wrong. (Of course, even if he can't, mine still might be wrong! But I'm confident the example holds up.)

    ReplyDelete
  56. rank sophist,

    "For example, look at this conversation I just had with Cleverbot:"

    So far as I can see, Cleverbot hasn't gone off the rails. Its first response is perfectly reasonable for a human. The second is interpreted as an instruction, which seems to me like a perfectly reasonable interpretation, although not the only one. And the third makes no sense (even being aware of the culturally-specific connection between the first two words), so it has assumed you're being humorous and has responded with a similarly surreal reply. I suspect *you* might know what you're talking about, but if anyone started talking like that to *me* I'd think they were off their head.

    In any case, I wouldn't expect a present-day AI to be able to match a human - today's computers have the processing power of a cockroach, and the life experiences of an infant. I agree that it isn't that hard to fool them, but that isn't a demonstration that this must always be so.

    Mr Green,

    "Well, in a Thomistic context, the point is whether something has an intellect, i.e. is capable of receiving a form intentionally (as opposed to having a form materially). The relevant power is "holding a form intentionally", and something has to be a substance to have an intellect."

    I don't see any reason why a computer can't hold a form intentionally.

    Substance is that which cannot be predicated of anything or be said to be in anything. What is a computer a computer of? What is a computer in?

    A particular computer is a primary substance. Computers generally are a secondary substance. At least, so far as I understand it.

    ReplyDelete
  57. Among other things, they are capable of reaching "spontaneous", unpredictable conclusions that a machine can only copy after the fact.

    We have different aims for what AI is supposed to be. I'm only interested in weak AI. If you are interested in strong AI you may have to wait until quantum computing becomes a practical reality. This isn't because I think humans are quantum computers but because our wetware has more similarities to that kind of computing than it does to digital systems.

    Following up on what NiV said, I would also say your conversation with Cleverbot was somewhat free of context, and what context there was seemed strange. Basically, if you had written Cheese! with an exclamation point, it might have provided a clearer context for it to latch onto your gist. Although I'm not sure how the French got involved, it is such a hilarious response that I'm tempted to start using it myself. So stop being French, George R.!

    More advanced AI would possibly (and only possibly) be able to follow my train of thought, but, even then, it could be broken.

    People can be broken too, sometimes because our cognitive zombie takes over and we "remember" things incorrectly with fatal consequences.

    ReplyDelete
    I think he means that the AI can lose the context or train of thought, hence it can be broken, not that it has a mental/circuitry lapse.

    ReplyDelete
    You two... NiV and Step2 have to be careful not to confuse your interpretation of what the machine is doing with what it is really doing.

    For instance, the internet genie just looks for correlations. So if everybody lies to it, even if the lie is totally stupid (like describing Chuck Norris and saying the answer is Pewee Hermann, or whatever his name is), the genie will simply correlate. It doesn't know people for real, so whatever description it gets is fine as far as it's concerned.

    ReplyDelete
  60. Regarding the quus paradox, apparently the use of counterfactuals to determine functions is circular/question begging. Can anyone spell the circularity out for me?

    ReplyDelete
  61. Mr. Green,

    I kind of lost interest in this combox after the economics argument started, but I feel like I should respond to you one more time out of common courtesy.

    Having holes and knowing them are two different things. I'm subject to the quus-paradox from your point of view, because although I can know that I have a specific concept of addition in mind, you cannot. You can observe only my external actions, so epistemologically, you can never know for sure whether I'm thinking "plus" or "quus" (or whether I am not thinking at all, but acting in a purely mechanical way... perhaps by some rote instinct for doing [qu]addition).

    The thing is that you are a human with a rational intellect, and so you can't be subject to the quus paradox, and so your behavior cannot actually mirror quus-like behavior. An entity truly subject to the quus paradox could not, like you, check its beliefs against reason. It could only accept certain commands and override others. If you tried to convince it that something other than its commands was the case, then it could only respond in two ways: one, to override its previous command with a new one; two, to continuously reject the new command. But the criteria for accepting or rejecting a new command would also be based on commands, and so on. This is why AI can't ever be convincing for long. Unless an AI could determine its own commands--and do so rationally rather than randomly--it couldn't appear to be human. It is impossible for a command-based entity to emulate the spontaneous-yet-rational behavior of people in reaction to the chaotic input of an everyday scenario. It could only react randomly or in a pre-determined fashion, which wouldn't keep up appearances for long.

    Remember, I started by making a list of all the things you could ever say to the machine, and then I simply recorded my own responses.

    This wouldn't work. It would fail simply because, as a command-based entity, it could only guess in response to indeterminate input. Consider the following scenario. The machine and I are strolling along when he spots a man holding a camera outside of a cheese shop. I look between the two and say, "Oh, cheese." How does the machine react? Does it smile or does it agree with me that, yes, there is cheese in the window? There is a potentially infinite list of indeterminate inputs that it could respond to in a closed scenario (indeed, it has commands designed for them) but could not comprehend in an open, chaotic scenario.

    For example, if you ask the machine to spontaneously write a brand new Mozart symphony, you will have stumped it. But you will also have stumped me, because there's no way I could ever compose a Mozart symphony either... thus your test is unreliable, being subject to false negatives.

    Spontaneous new conclusions aren't something so complex. Take the cheese shop example. Why would I say, "Oh, cheese"? Simply because I felt like it. It was a spontaneous decision based on the circumstances. A machine could certainly say, "Oh, cheese"--but applying this in a life-like way would be impossible. If it said it every time it passed a cheese shop or cameraman, it would be bizarre. If it always said it in the same way every time, it would be bizarre. If it only said it at random times, it would be bizarre. The precise contextuality and character of human spontaneity is wholly outside of what a machine can access.

    ReplyDelete
  62. "I kind of lost interest in this combox after the economics argument started, but I feel like I should respond to you one more time out of common courtesy."

    I can respect that, and I realise your comment wasn't addressed to me. Likewise, I'm only responding because I still find the conversation interesting. I wouldn't find it at all discourteous if you didn't want to continue the discussion.

    "The thing is that you are a human with a rational intellect, and so you can't be subject to the quus paradox"

    Everyone is subject to the quus paradox, including humans. That was its original point. It was an argument about the impossibility of pure induction as a means for humans to gain knowledge.

    "The machine and I are strolling along when he spots a man holding a camera outside of a cheese shop. I look between the two and say, "Oh, cheese." How does the machine react?"

    That's a scenario of 34 English words from a typical human vocabulary of about 10,000-20,000 words. It can be no further down our cognitive zombie's list than entry number 20,000 raised to the power of 34, and given that sensible sentences are much rarer, it is probably much nearer the top of the list than that. That's a long, long way from infinity!
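
    (As a quick, entirely back-of-the-envelope check in Python of just how large, but finite, that figure is:)

    # Number of 34-word strings drawn from a 20,000-word vocabulary:
    # finite, though astronomically large.
    import math

    vocabulary = 20_000
    words = 34
    strings = vocabulary ** words
    print(f"about 10^{round(math.log10(strings))} possible strings")  # about 10^146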

    But in any case, this isn't actually all that difficult a problem. An AI would be scanning the environment, identifying the objects it can see. A cheese shop would be identifiable from the prominent "Fromagerie" sign. A camera would be easily recognisable, and human cultural sayings and conventions about cameras would be in the database. In fact, Google the words "camera cheese" and the Wikipedia entry on the relationship is the fourth one on the list.

    It would therefore reply "Ouistiti!" which of course is French for 'marmoset' - the reasons being of course perfectly obvious to an intelligent being like you.

    I routinely use simple AI programs that *do* determine their own commands. They're learnt from observing the environment. They can abstract useful intermediate concepts, recognise patterns, construct chains of reasoning, and weigh up competing lines of evidence.

    There are other AIs that do so using logic and reason. In Prolog, you just type in the logical specification for the problem you want to solve, and it constructs the algorithm for doing so itself. And these are only very simple AIs, of a few tens of thousands of lines of code. The human brain has more like a few hundred trillion, and decades of input and the time to process it. It's not a fair competition.
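
    (As a very rough illustration of that declarative style, in Python rather than Prolog and with made-up facts: you state what holds, and a small loop derives the consequences, rather than your spelling out the procedure yourself.)

    # Minimal forward-chaining sketch: state facts and one rule, then derive
    # consequences until nothing new appears. In Prolog this would be two
    # parent/2 facts, one grandparent/2 rule, and the query grandparent(tom, ann).
    facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

    def derive(known):
        """Rule: parent(A, B) and parent(B, C) => grandparent(A, C)."""
        parents = [f for f in known if f[0] == "parent"]
        return {("grandparent", a, c)
                for (_, a, b) in parents
                for (_, b2, c) in parents
                if b == b2}

    while True:
        new = derive(facts) - facts
        if not new:
            break
        facts |= new

    print(("grandparent", "tom", "ann") in facts)  # True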

    You keep on judging the ultimate possibilities of AI by the limitations of AI today, run on today's hardware. You need to think more in terms of philosophical thought experiments. You say that machines cannot be spontaneous, but you don't say what this really means (other than that it is neither random nor repetitive), or give any details of why they can't. What fundamentally stops them?

    ReplyDelete