Thursday, December 19, 2013

Zombies: A Shopper’s Guide


A “zombie,” in the philosophical sense of the term, is a creature physically and behaviorally identical to a human being but devoid of any sort of mental life.  That’s somewhat imprecise, in part because the notion of a zombie could also cover creatures physically and behaviorally identical to some non-human type of animal but devoid of whatever mental properties that non-human animal has.  But we’ll mostly stick to human beings for purposes of this post.  Another way in which the characterization given is imprecise is that there are several aspects of the mind philosophers have traditionally regarded as especially problematic.  Jerry Fodor identifies three: consciousness, intentionality, and rationality.  And the distinction between them entails a distinction between different types of zombie.

The kind of zombie usually discussed in recent philosophy of mind is a creature physically and behaviorally identical to a human being but devoid of the qualia characteristic of everyday conscious experience.  Call this a qualia zombie.  If you kick a qualia zombie in the shins he will scream as if in pain; if a flashbulb goes off in his face he will complain about the resulting afterimage; if you ask him whether he is conscious or instead just a zombie he will answer that of course he is not a zombie and that the whole idea is absurd.  In every way he will act just like a normal human being would and if you examined his body you would find that both externally and internally it is identical to that of a normal human being.  But if you had the metaphysical equipment with which to peer inside his conscious experience, you would find there is none at all -- no actual feeling of pain associated with his scream, no actual afterimage associated with his complaint, no conscious awareness of any sort associated even with his vigorous protest against the suggestion that he is a zombie.

The point of this sort of thought experiment is to show that there is a gap between the material facts about our nature, on the one hand, and the facts about conscious experience on the other.  This is philosophically significant both because it suggests that materialism is false and because it suggests that there is a problem about knowing other minds.  If the material facts about us could be the same but without the presence of consciousness, then consciousness (so the argument goes) must not be material.  If we could know all the material and behavioral facts about a person but still not know from that alone whether or not he is a zombie, then there is a problem of explaining how we know that he really is conscious and not a zombie. 

This sort of zombie argument is the flip side of Descartes’ argument to the effect that you could exist even if your body did not.  Descartes imagines a scenario (his “evil genius” thought experiment) in which your conscious experiences are as they are now, but there is no material world at all, and thus in which you have no brain or body at all.  Conscious thought could exist in the absence of matter; therefore (Descartes concludes) conscious thought is not material.  The zombie scenario is one in which matter exists in just the way it does in the actual world but without any consciousness.  Once again we seem to have the result that consciousness and matter are distinct.  The divorce between them is absolute.  (Given the provenance of zombies in the everyday sense, Steely Dan fans might call it a “Haitian divorce.”)

Are qualia zombies really possible?  I don’t think so, but before we get to that let’s consider the other sorts.  A second kind of zombie would be one devoid of intentionality, i.e. the directedness or “aboutness” of at least some mental states.  Your belief that it is sunny outside is about or directed at the state of affairs of its being sunny; your perception of a dog in front of you is about or directed at the dog; and so forth.  The intentionality of mental states is like the meaning that written or spoken words have, except that where their meaning is derivative -- there is nothing in the physical properties of ink marks or sound waves that gives them any meaning, so that meaning must be conventionally assigned to the marks or sounds by language users -- the intentionality of mental states is somehow “built in.”

Consider a creature physically and behaviorally identical to a normal human being or non-human animal but devoid of intentionality.  Call this an intentional zombie.  An intentional zombie might utter and write the same sounds and shapes a normal human being would, might make what seemed to be the same gestures, and so forth, but none of this would actually involve the expression of any meanings, since the zombie would entirely lack any meaning, intentionality, or aboutness of any sort.  It would react to objects in its surrounding environment in just the way a normal human being or other animal would, but there would be nothing in it that counted as a representation of these objects.  For instance, an intentional zombie “dog” might start wagging its tail in the presence of a bowl of Alpo and run over to the bowl and start eating, but not because there was anything in it that counted as a perceptual state that was about or directed at the dog food.  (Donald Davidson’s “swampman” example is essentially concerned with the question of whether an intentional zombie is possible.)

A third sort of zombie is one we might call a cognitive zombie.  Rationality essentially involves the ability to form concepts, to put them together into propositions, and to reason from one proposition to another in a logical way.  Obviously this involves intentionality, but it goes beyond that.  Anything with rationality has intentionality, but not everything with intentionality has rationality.  A dog has intentionality insofar as its perceptual experience of the dog food points to or is directed at the dog food.  But it does not have a concept of dog food, or any other concept for that matter.  Rationality -- which, again, essentially involves the use of concepts -- goes hand in hand with language.

Now a cognitive zombie would be a creature physically and behaviorally identical with a human being, but devoid of rationality -- that is to say, devoid of concepts and thus devoid of anything like the grasp of propositions or the ability to reason from one proposition to another.  It would speak and write in such a way that it seemed to be expressing thoughts and arguments, but there would not be any true cognition underlying this behavior.  It would be mere mimicry. 

If zombies of any sort are possible at all, then presumably something could be a cognitive zombie without being an intentional zombie or a qualia zombie.  It might lack true concepts (even if it acts like it has them) but still possess intentionality of the sort non-human animals possess, as well as qualia.  In the same way, something could arguably be an intentional zombie without being a qualia zombie (unless we go along with the view that all consciousness is intentional in at least a thin sense).  An intentional zombie would ipso facto be a cognitive zombie, though.  Less clear is whether a qualia zombie need be an intentional zombie.  Plants, after all, are not conscious but do have something comparable to intentionality insofar as they are “directed at” certain ends (sinking roots, growing toward the light, etc.).  So perhaps a qualia zombie could still have at least rudimentary intentionality corresponding to such low level activities (though it would be incapable of any sort of intentionality essentially associated with qualitative conscious states).  Also unclear is whether a qualia zombie would be a cognitive zombie.  Angels, qua disembodied intellects, would certainly lack the qualia associated with corporeality.  However, there is clearly a sense in which they would be conscious.  So, if a qualia zombie is something devoid merely of the qualia we associate with corporeality, then being a qualia zombie would not entail being a cognitive zombie.  But if being a qualia zombie entails being devoid of any sort of consciousness whatsoever, then being a qualia zombie would presumably entail being a cognitive zombie.

We might call a creature that exhibits the entire package of zombie options -- something physically and behaviorally identical to us but devoid of qualia, intentionality, cognition, and indeed mentality of any sort whatsoever -- The Compleat Zombie.  Now available as a stocking stuffer for that special someone.

Or it would be if zombies were really possible.  But I think they are not.  Start with qualia zombies.  The very idea is, as I have noted before, an artifact of the modern post-Galileo, post-Cartesian “mathematicized” conception of matter taken for granted by Cartesians and materialists alike.  If you define matter so that color, sound, heat, cold, etc. as common sense understands them are not really in matter at all but only in our conscious experience of it -- so that color as an objective feature of the world is redefined in terms of surface reflectance properties, sound in terms of compression waves, and so forth -- then naturally color, sound, heat, cold, etc. as common sense understands them are not going to be identifiable with or explicable in terms of “material” features of the world.  This is the reason why materialism will always be afflicted by objections of the sort raised by Jackson, Nagel, Chalmers, et al., and given the conception of matter the materialist takes for granted these objections are unanswerable.

However, we Aristotelians would reject this conception of matter.  To speak of matter merely in terms of those among its properties which can be described in the language of mathematical physics is to speak of an abstraction.  What physics tells us about matter really is there in matter, but it is only part of what is there.  Now for the Aristotelian, what matter is essentially is the potency for the reception of form, and there are as many kinds of material substance as there are kinds of substantial form.  Given the nature of water or stone, the sort of material substance that results from a composition of prime matter and the substantial form of water or stone is naturally going to be devoid of qualia.  But given the nature of a dog (say) it is metaphysically impossible for prime matter to be informed by the substantial form of a dog while lacking qualia.  Physics gives you, in effect, a lowest common denominator description of material substances.  But material substances are simply not all reducible to this lowest common denominator, nor is the description physics gives us of the “micro-level” in any way metaphysically privileged.  A dog is not less real than the particles that make it up.  Indeed, it is more real, since given Aristotelian hylemorphism, the particles exist only virtually rather than actually in the dog.  The same thing can be said of fish and birds, gold and lead, you name it.  None of these things is any less real or less fundamental to physical reality than the micro-level is.

Obviously this is all very sketchy and very controversial.  See Oderberg’s Real Essentialism or my forthcoming Scholastic Metaphysics for the long story.  The point here is not to defend or even give much in the way of an exposition of the Aristotelian account of material substance, but merely to note that given its radical anti-reductionism, the very notion of a qualia zombie cannot get off the ground.  The same thing is true of the notion of an intentional zombie.  For the Aristotelian account of material substances includes the notion of irreducible intrinsic teleology or finality.  It is of the nature of the phosphorus in the head of a match to point to or be directed at the generation of flame and heat; it is of the nature of an acorn to point to or be directed at growing into an oak; and so forth.  In the same way, given what a dog is, it is necessarily going to have states which point to or are directed at things like food, mating opportunities, predators, etc.  Hence there is, given the Aristotelian conception of material substance, no such thing as a creature materially and behaviorally identical to a dog yet lacking any “directedness” of any sort.

It is no accident that the Aristotelian tradition regards sensation and imagination as entirely corporeal and in no way supportive of dualism.  What contemporary philosophers call qualia and intentionality (or at least a rudimentary sort of intentionality that involves mere directedness without conceptual content) are, for the Aristotelian, simply ordinary corporeal features of certain kinds of ordinary material substances.  This only sounds odd if you assume that a material substance is “really” “nothing but” something going on at the micro-level -- particles in motion, say.  For naturally (the Aristotelian would agree) it is hard to see how the feel of pain, the way red looks, the way heat feels, etc. can be reduced to or explained in terms of particles in motion, the firing of neurons, or the like.  But that is a perverse way of thinking of material substances.  To describe a dog in terms of the particles that make up its body or the firing of certain neurons in its nervous system is like describing a painting at the level of the splotches of color scattered about on a canvas.  It is to abstract out from a whole certain parts which are metaphysically less fundamental than the whole is.

More interesting is the question of whether cognitive zombies are possible on an Aristotelian view.  For the Aristotelian does regard rationality as at least partially non-corporeal.  Following James Ross, I have defended the claim that formal thinking is immaterial because it has a determinacy of content that purely material systems cannot have.  (I had reason to defend this argument recently against objections raised by Robert Oerter.)  Does this sort of argument entail that cognitive zombies are possible?  In particular, does it entail that a creature could be physically and behaviorally identical to a normal human being but (since cognitive activity has an immaterial aspect) nevertheless devoid of any concepts or the rational activity that presupposes concepts?  (Compare Oerter’s “Hilda” example.)

I think it does not.  Ross’s argument holds, and need hold, only that no set of material facts entails any determinate content; the argument does not and need not hold that the material facts could be just as they are without any content at all.  Consider the following analogy: You might know that Δ is a symbol without knowing exactly what it is a symbol of.  A particular triangle?  Triangles in general?  A dunce cap or slice of pizza?  The material facts about Δ alone won’t tell you, even if you know on independent grounds that it is a symbol of something or other.  Similarly, Ross’s argument does not require that we cannot know from the physical and behavioral facts alone whether a person has thoughts with some conceptual content or other.  It requires only that the physical and behavioral facts alone do not metaphysically determine what, specifically, that content is.

In that case, though, Ross’s argument would establish that there is an immaterial aspect of thought -- and it is significant that he speaks in his original paper specifically of “immaterial aspects of thought” -- without thereby entailing that a creature could be physically and behaviorally identical to a human being without having any thought content whatsoever.  It would refute materialism without entailing the possibility of cognitive zombies or opening up a problem of other minds.  (Whether Ross’s argument somehow indirectly entails the possibility of a cognitive zombie, or whether any other Scholastic argument for the immateriality of the intellect does so, are questions I will leave for another time.)

68 comments:

  1. So would a sufficiently advanced artificial intelligence not be a qualia zombie? Intentionality zombie? Cognitive zombie? Or to put it more directly, could a sufficiently advanced AI have qualia, intentionality (independent of its makers), or cognition?

  2. An artificial intelligence would be an artifact rather than a true substance; more precisely, it would have a merely accidental form rather than a substantial form. Hence, lacking animality, it would not have distinctively animal properties like qualia, much less rationality. Nor would it even rise to the level of a zombie, because it would not be physically identical to an animal. (The reason not being because it has parts made of plastic, steel, etc. -- that is not a problem per se -- but rather because these parts would not be related the way the parts of a true Aristotelian substance are, but instead would constitute a mere aggregate.)

  3. Also, I should put "intelligence" in scare quotes when speaking of an artificial "intelligence."

  4. Are you planning a response to Oerter's latest rejoinders?

  5. Dr. Feser,

    Thomas Cochran has just responded to your response to his original piece on "Nietzsche and Neo-Scholasticism":

    http://anamnesisjournal.com/2013/12/turning-wine-water-reply-edward-feser/

    Not sure if you were already aware of this. If you were, my bad. If not, go show him what you've got!

  6. "Rabbah created a man and sent him to Rabbi Zera. Rabbi Zera spoke to him, but received no answer. Thereupon he said unto him: "Thou art a creature of the magicians. Return to thy dust."

    Babylonian Talmud, Sanhedrin 65

  7. Great post.

    It makes me wonder how the current scientific endeavor would change if an Aristotelian metaphysics became the default way of thinking about the physical and mental world. Perhaps not the science itself, but at least the assumptions implicit in certain scientific descriptions.

    Certainly the tendency to reductionism would be reduced, or at least tempered, by a respect for the reality of whole substances. The ideas of essence and form seem already implicit in biological accounts. Perhaps biologists would stop having so much physics envy. :)

    That said, scientists would still be faced with difficulties in explaining qualia, intentionality, and certainly cognition. How could I explain these ideas to my MD cousin who has had materialism beaten into her throughout med school?

    I realize this goes beyond the intentions of your post, but any insight would be appreciated.

    Cheers,
    Dan

  8. Hmm...how, then, does animal sensation "work", if it is purely material? How does a dog, say, perceive the blueness of a chew toy? Certainly not by grasping its form, since it has no immaterial intellect with which to do so. But if it does not do so by grasping its form, then how does the dog perceive it?

  9. Prof Feser (and loyal Feser'ites):

    A dog is not less real than the particles that make it up. Indeed, it is more real, since given Aristotelian hylemorphism, the particles exist only virtually rather than actually in the dog.

    I think I'm close here, but something is eluding me. When the AT'er says "the particles exist only virtually, rather than actually in the dog," he can't literally mean that if you look at the dog's ear with a super electron microscope (or whatever), you won't see atoms, etc., right? So what does "virtual" mean in this context? Thanks in advance.

  10. I'd love a whole post on AI, as it's a complicated issue.

    Why can't humans learn to create substantial forms? Biology and evolution learned to turn disparate parts into ever greater wholes; why can't we become conscious of this process and harness it? By analyzing what the necessary and sufficient conditions for something to be a substantial form are, we should be able to control the power of turning aggregates into wholes.

    For example, as some consciousness theories (like the Integrated Information Theory) show, we can see how parts become wholes by analyzing the causal structure of a system. If the system behaves as a single causal unit, it becomes a whole that can't be reduced to its parts...its behavior is irreducible to anything outside itself. Sounds like a substantial form to me.

    If an object is causally efficacious in and of itself, how is it an aggregate? Unitary causal power is what makes a whole a whole in the first place. (And, in a way, what makes a conscious state a conscious state.) Why can't we program a machine (I didn't say "computer" for a reason) to behave as a single causal unit, thereby no longer being reducible to its aggregate parts? (Check out Giulio Tononi's book Phi for a full treatment of the topic.)

  11. Why can't humans learn to create substantial forms? Biology and evolution learned to turn disparate parts into ever greater wholes

    Here's my take, given my meagre knowledge of Aristotelian principles: Evolution didn't "create" new substantial forms. It is merely the process whereby certain substantial forms showed up for the first time. Those forms were always "possible", given the structure of reality.

    In the same way, if humans "created" a new animal, say, by tinkering with a genome and stringing together a sequence of DNA that hadn't been seen (yet) in the history of earth, they would not have "created" a new substantial form at all. A better term might be that they "discovered" one that we didn't know about before.

  12. Aquohn,

    Hmm...how, then, does animal sensation "work", if it is purely material? How does a dog, say, perceive the blueness of a chew toy? Certainly not by grasping its form, since it has no immaterial intellect with which to do so. But if it does not do so by grasping its form, then how does the dog perceive it?

    It's important to understand what definition of "material" is at play, here. In Thomism, it means something so distant from the modern philosophical understanding that it might as well be a different word. Most of the things Prof. Feser calls "material" would be considered immaterial by an analytic philosopher, simply because his definition of matter is different.

    In the case of a dog, his "material" operations are far more sophisticated than what most materialists would attribute even to humans. The dog's perceptive faculties take on the sensible species of, say, dog food, and his apperceptive faculty (i.e. consciousness) unites these species into a complete phantasm. This is identical to the process by which humans form phantasms. However, the dog lacks the ability to convert his phantasms into concepts, because he lacks an active and a possible intellect. He lives in a sort of representationalist world, similar to Wittgenstein's account of Hume's imagism, where he perceives things but cannot understand what they mean. Nonetheless he can be trained to respond to patterns and memorize images and so forth, even if he does not comprehend any of it.

    Basically, the dog grasps the sensible form of blueness but he cannot extract from it the intelligible form of blueness, and so he can perceive but not understand. Humans go through the same process, except that they have an active intellect capable of finding the intelligible (conceptual, rational) content of blueness within the mental representations.

  13. Tom Carroll,
    It's important to understand that only substantial being is being in a true, unqualified sense. All other categories of being are genera of accidental being, which is being in only a certain qualified sense. Moreover, all of the being that is able to be perceived by the senses is accidental. Substance, per se, on the other hand, cannot be perceived at all; it can only be known by reason. That is why a natural element that goes from being a substance to being merely a constituent part of another substance will not change in any perceptible way.

  14. "The same thing can be said of fish and birds, gold and led, you name it."

    It's true. I've always thought Led Zeppelin was more real than Plant, Page, Bonham, and Jones considered alone.

  15. Alypius - That's a great analysis. So you would say AI is possible as we would just be "discovering" the substantial form of an intelligent machine?

    Conversely, if AI can't be a substantial form, as Feser suggests, does that come with the prediction that it couldn't "seem" like it was intelligent as well? Or could we be in the troubling situation of confusing a complicated aggregate with the simulacrum of a substantial form, because it may be behaviorally indistinguishable? (Imagine a computer that consistently and persuasively passes the Turing Test, for instance.) The subtle implication of an Aristotelian rejection of AI is that not only is true AI impossible, but that AI could never even APPEAR to be intelligent. My guess is that Feser would argue that AI will never pass the Turing Test. I'm not so confident about any of these propositions.

  16. @rank sophist: I see. But what exactly are these phantasms, from a hylomorphic standpoint? Where do they exist?

  17. Aquohn,

    Phantasms are mental representations of sensations (feelings, sights, sounds, tastes, etc.) that remain even after our sensory organs have finished taking on sensible species. They are stored within the brain. But, again, keep in mind that the Thomistic brain is not the brain that we moderns typically think of. It is not a collection of "neuronal firings" and disparate "regions" that "compute" specific functions. It is more like, to quote Gregory of Nyssa on classical physiology, "a foundation for the senses" and "the principle of the motions of the nerves" (On the Making of Man XII.1-3, XXX.9). It was common for the brain to be seen as a holistic and living organ, rather than a computer, in which the senses were unified. This is why it made perfect sense to Aquinas and his contemporaries to say that even the sensus communis (or sensual apperceptive faculty) was located in a ventricle of the brain. As far as I know, phantasms were similarly contained within a ventricle.

    All of this sounds strange and a bit magical to people schooled in neuroscience, which has been badly infected by modern philosophical premises. However, even if we have advanced beyond talk of ventricles and vapors, it remains true that the ancients saw no metaphysical problem with mental images being contained in the wholly "material" brain. I see no reason why we should reject this view. As for where and how they are stored in the brain, your guess is as good as mine. Perhaps neuroscience will one day find the answer, after it's given up its philosophical biases.

  18. If qualia such as pleasure and pain are not required for the complete behavioral functioning of an animal or person (so they provide no 'selective advantage' in the Darwinian sense), then why did they evolve in the first place?

  19. What exactly does the Aristotelian mean by "matter"?

    Jeffrey Brower argues that "matter" is a functional concept - the matter of a change is that which accounts for the potential for that change. But if this is what matter is, then (i) it is a bit difficult to see what the Aristotelian means in saying that the intellect is immaterial, (ii) this seems to make immateriality extensionally equivalent to immutability, (iii) the claim that material processes cannot be determinate suddenly seems to have no connection to Kripke's, Goodman's, and Quine's arguments, and (iv) this definition of matter allows us to say that angels have matter, but Aquinas denies that angels are material.

    Is Aristotelian matter = mechanistic matter minus mechanistic assumptions? But this is circular, since we have "matter" on both sides of the definition. We are still left in the dark as to what matter is.

    Perhaps we are to define matter in opposition to the mental, but in that case, it is unclear what makes this conception of matter an ARISTOTELIAN conception of matter.

    So I am quite unsure about what the Aristotelian means by "matter". Any help on this question will be appreciated. Thank you!

  20. That said, scientists would still be faced with difficulties in explaining qualia, intentionality, and certainly cognition. How could I explain these ideas to my MD cousin who has had materialism beaten into her throughout med school?

    Just some further thought on my original post. I've just finished reading your Philosophy of Mind chapter on Zombies (chapter 10). You summarize the idea there as "physical reality does not on its own add up to mental reality".

    Having taken a closer look at this blog post, I understand now that your critique of this statement is that the problem only exists if you start off with the reductionistic view of matter presented by the physicist, who inherits his or her discipline from a Cartesian world view in which matter can be mathematized - in which, of Aristotle's ten categories, the only one focused on is quantity. You are basically saying that if this is all you focus on of the ten categories, then of course you are going to get the problem of the qualia zombie, because you've truncated matter to only that which can be reduced to measurement and ignored all the other categories.

    Is this a good summary?

    Cheers,
    Dan

  21. Your argument also applies to chapter 11 "Knowledge of physical reality does not on its own add up to knowledge of mental reality." The Mary argument and Nagel's Bat example.

    I suppose then, the debate is whether everything can be reduced to a mathematization. If yes, then eventually qualia and the subjective aspects of experience should succumb to neuroscience. If no, then we have to open up science to seeing reality from different perspectives - for example, returning to an Aristotelian framework where other categories of reality have equal value to quantity....

    I'm no philosopher, so the above is a gross generalization I'm sure, but that seems to be where you are going with this....

    Cheers,
    Dan

  22. I wish you had written Philosophy of Mind with each chapter including Aristotelian counterexamples. Is there a revision in the works? :)

    Cheers,
    Dan

  23. Matt Sigl: I'd love a whole post on AI, as it's a complicated issue.

    Yes and no... metaphysically speaking, making a fake mind is not unlike making a fake body. Living human bodies have shape and colour; so do marble and paint, so the idea of a sculpture or painting that looks just like a real person is metaphysically not too exciting; the interesting part lies in the talent of the artist to manipulate his materials skilfully enough to make the result look like a real person. Similarly, imitating a rational cause for something like speech can be done with a tape-recorder; even simulating a conversation isn't very special, as anyone who has started replying to an answering-machine can attest! The difficulty lies in coming up with tricks and techniques to maintain that illusion for longer than a few seconds, but that all lies on the technical side, not in the metaphysics.

    Why can't humans learn to create substantial forms?

    As Alypius said, nobody creates new substantial forms; however, we can create new substances: organisms do that every time they reproduce. You can take existing substances (like hydrogen and oxygen) and combine them just right to make a new substance (like water). As for wholes that don't reduce to their parts, such claims generally turn on some equivocation of the word "part". A bicycle is "just" the sum of its parts, but nobody thinks you can take a ride on a bunch of bicycle-parts strewn on the floor — they have to be assembled into the form of a bicycle first. But of course in the Aristotelian view, the "form" is a part too. So we can turn parts into wholes; we do it all the time. The catch is that applying a form externally does not a substance make. It makes a machine, which has a certain kind of unity, but only extrinsically imposed. A substance, on the other hand, has unity intrinsically: it has its own single form, as opposed to a machine or artifact, which has a bunch of different substances (parts) aggregated together derivatively.

    So putting parts together can have either of two effects: it can result in an artifact (substances that work together because we put them together); or it can result in a new substance (if the laws of nature work that way, e.g. if a couple of H's and an O are combined in just the right way, the separate substances cease to exist and we get a new molecule of water).


    if AI can't be a substantial form, as Feser suggests, does that come with the prediction that it couldn't "seem" like it

    Now we can see where to go with this: there's no such thing as "the substantial form of an intelligent machine" because if it had a substantial form, it wouldn't be a machine, and vice versa. (Could nature be such that some combination of computer parts just happens to result in a new non-machine thinking substance? Hypothetically, sure. There have been past threads discussing this idea. Of course, we have no reason to think that anything that looks like a computer would actually do that. We do have reason to think that you can cause a new rational substance to come into being in other ways, i.e. by having a kid.)

    Anyway, since artificial intelligence can't be a substance, it can't be real intelligence (since on A-T, an intellect just is the substantial form of some substance). But there is no reason — again, hypothetically — that a machine couldn't simulate intelligence well enough to fool us. (You just need a really really fancy answering machine!) Whether we can actually build such a machine comes down, then, to practical technological concerns. (And other mundane concerns, such as whether the world comes to an end in the next ten years, or whether civilisation falls apart owing to economic collapse, or a mysterious disease that wipes out most of the population, etc., etc., etc. If technology continues to progress for a few more centuries, my own guess is that we'll see some really impressive results.)

  24. I wonder what hylemorphists have to say about connectionism. It is a growing school of thought in the neuroscience field that treats the brain as a non-symbolic computer using non-symbolic algorithms.
    To clarify those terms, non-symbolic algorithms are effective procedures that use non-symbolic representations (for instance, a picture is a non-symbolic representation).
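    A minimal sketch of the idea in C++ (a toy perceptron, illustrative rather than any particular connectionist model): after training, the system's "knowledge" of the AND function consists of nothing but three numbers, with no symbolic rule anywhere in it.

        #include <iostream>

        // Toy perceptron learning AND. The learned "representation" is just
        // three numeric weights (non-symbolic), not a symbolic rule such as
        // "output 1 iff both inputs are 1".
        int main() {
            double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
            double target[4] = {0,0,0,1};              // AND truth table
            double w0 = 0, w1 = 0, bias = 0, rate = 0.1;

            for (int epoch = 0; epoch < 100; ++epoch)
                for (int i = 0; i < 4; ++i) {
                    double out = (w0*x[i][0] + w1*x[i][1] + bias) > 0 ? 1.0 : 0.0;
                    double err = target[i] - out;      // perceptron learning rule
                    w0 += rate * err * x[i][0];
                    w1 += rate * err * x[i][1];
                    bias += rate * err;
                }

            std::cout << "w0=" << w0 << " w1=" << w1 << " bias=" << bias << "\n";
        }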

  25. Hi Ed,

    Question: do you think we'll be able to make artificial "intelligence" that perfectly mimics human intelligence? I.e., will we be able to make machines that pass the Turing Test, or that even seem smarter than any human being ever?

  26. Bobcat,

    Question: do you think we'll be able to make artificial "intelligence" that perfectly mimics human intelligence? I.e., will we be able to make machines that pass the Turing Test, or that even seem smarter than any human being ever?

    You didn't ask me, but I'll weigh in anyway.

    I think one of the problems with the Turing test is that people seem to overlook how problematic it is. It's an irreducibly subjective test where people gauge whether or not a machine is conscious based on the responses it gives. You can argue it's the best test we have for the issue in question (and still maintain, I think, that it's a bad test), but 'passing the Turing test' amounts to 'a specially programmed/engineered computer convinces people it's another person'. From what I recall, some computers have been passing the Turing Test since the days of Eliza/Dr. Sbaitso.

  27. And Parry; don't forget Parry. According to some, it was the first program to pass the test.

  28. I think I remember Raymond Tallis saying, 'any computer can pass the Turing test if the person giving it is stupid enough.'

  29. Hi Crude,

    Fair enough -- I don't like the Turing Test either. It was more my way of asking: do you think there are certain capabilities that AI will be incapable of? And by "capabilities", I don't mean intentionality, or the ability to enjoy qualia. I mean certain third-personally accessible abilities, like the ability to learn a language without having known it before, the ability to come up with a new, good joke, the ability to write a deep and enjoyable novel, the ability to come up with a novel philosophical argument, etc.

  30. @Bobcat:

    Of course I won't presume to answer for Ed, but to me it seems that we could design AI to do/simulate lots of things given that we know how to do them ourselves. I see no reason in principle that we couldn't impart to a computer a derived "ability" to do anything for which we can give formal rules. The real question, I think, is which things can be described by such rules; I suspect that if we knew the answer to that question, we'd know the answer to yours.

  31. Bobcat,

    I mean certain third-personally accessible abilities, like the ability to learn a language without having known it before, the ability to come up with a new, good joke, the ability to write a deep and enjoyable novel, the ability to come up with a novel philosophical argument, etc.

    Well, I think some of those are problematic. A good joke? Back to subjectivity. Same with deep and enjoyable novel. A novel philosophical argument? I'd want to know what that means.

    Learning a language without having known it? That one seems more tractable, but it also seems conceptually easy. I'd assume you mean learn it as in having it be taught to them in one form or another - repetition and obvious association. And since we're setting aside the question of 'actually having knowledge/understanding' in favor of 'being able to spit out the required associations semi-reliably after having been trained to do that', that doesn't even strike me as difficult in principle.

  32. @rank sophist: I see. Thank you for answering my question.

    If you don't mind, I have another, somewhat off-topic query: where do emotions fall into this whole scheme of things?

  33. Hi guys,

    Thanks for your responses. I think Scott's answer is helpful: it depends on whether those tasks can be reduced to a set of formal rules that one can then apply to data. I think it's possible that this can be true of at least some jokes. For instance, the most popular explanation of why things are funny is the incongruity theory, according to which what makes us laugh is if our normal expectations are thwarted in a pleasing way (this is a rough characterization, but it will do for now). So, if you hear a knock on your door, you open it up, and it's the mailman, this will not be funny. But if it's a gorilla eating ice cream, this may be funny. (Well, it wouldn't be funny if it happened in real life, but it may be funny just reading about it.)

    Let's assume, though, that you can give a set of formal rules to activities we typically think of as creative and non-formalizable -- original joke-writing (and telling), novel philosophical argumentation, writing a novel that probes the human condition in a revealing way, etc. If that were true, why should we continue to think that there's anything non-material about our intelligence? Why not think that we're just carrying out a set of rules?

    Note:
    (1) I'm not asking this question because I believe that we are just fancy meat-computers. I hope we're not, and, speaking as someone who has read almost no Thomistic or Aristotelian philosophy, I think that conclusion is far from established.
    (2) I'm not asking this question because I have a well-worked out view of the matter. This is not some trick I'm pulling to elicit answers from you so I can come crashing down upon them with my own answer. I genuinely don't know.
    (3) Although I've been commenting, sporadically, on this blog for years, I read only a few of the posts. I suspect that the answer has something to do with Ed's defense of James Ross's argument, but I've not read Ed's defense, so I can't be sure.
    (4) Finally, if there have been posts on this blog addressing my question, I'm more than happy just to read them and not trouble you with a rehash of an argument you've already gone over countless times -- unless you'd prefer to do that!
    (5) So I guess I'm just asking for links to blog posts, if that's not too much of a pain.

  34. For instance, the most popular explanation of why things are funny is the incongruity theory...

    I think the Benign Violation Theory works better since it directly explains physical aspects like tickling. It is also in strict contrast to what can be called the reverent sense and that may be why Voltaire said God is a comedian playing to an audience too afraid to laugh.

    Speaking of which, on the topic of zombies you have to have the right theme music.

  35. @Bobcat:

    "If that were true, why should we continue to think that there's anything non-material about our intelligence? Why not think that we're just carrying out a set of rules?"

    That something takes place in accordance with rules doesn't imply that it's reducible to those rules or that it consists of rule-following behavior. Supposing, for example, that we could discover a set of rules that reliably generated funny jokes, that wouldn't show that when we told jokes we were "just" following those rules—or even that those rules sufficed to generate all jokes.

    It would just mean that we could program a computer to generate things that sound funny to us, just as we can program one to perform "calculations" that are meaningful to us. In neither case would that imply that the computer even understood what it was doing.

    You're quite right about the relevance of Ross. The most recent posts on the subject are here, here, here, and here.

  36. On the subject of AI, here is a look under the hood of IBM’s Watson.

    Best paragraph: The system we have built and are continuing to develop, called DeepQA, is a massively parallel probabilistic evidence-based architecture. For the Jeopardy Challenge, we use more than 100 different techniques for analyzing natural language, identifying sources, finding and generating hypotheses, finding and scoring evidence, and merging and ranking hypotheses. What is far more important than any particular technique we use is how we combine them in DeepQA such that overlapping approaches can bring their strengths to bear and contribute to improvements in accuracy, confidence, or speed.

  37. Let's assume, though, that you can give a set of formal rules to activities we typically think of as creative and non-formalizable -- original joke-writing (and telling), novel philosophical argumentation, writing a novel that probes the human condition in a revealing way, etc. If that were true, why should we continue to think that there's anything non-material about our intelligence? Why not think that we're just carrying out a set of rules?

    It's also worth noting that it's not the behaviors in particular that lead us to conclude that there is something immaterial about man's intelligence. "Intelligent" behaviors (like doing sums and solving problems) are defeasible indications of intelligence. But the reason why we attribute intelligence to them is because we are humans who are aware of our own thoughts. We judge that other human substances are also intelligent, but the inference is based on their similarity in kind, not paradigmatically "intelligent" behaviors per se.

    Oderberg's article on hylemorphic dualism is helpful.

  38. There is a determinate "phenomenology" associated with judgment, which is distinct from simple algorithmic processing that a computer does.

  39. Aquohn,

    I'm glad I could help. As for emotions, this is the modern word for what the ancients called "passions". Passions are feelings that we experience in reaction to our perceptions or imaginings. Many are bodily (again, not in the modern materialist sense), but they were generally not associated with the brain. The heart or the stomach were, to my knowledge, typically considered to be their seats. It should be mentioned that passion is generally a wider category than emotion. It includes pain, for instance. And, unlike emotions, several passions (joy and love are good examples) exist, in part, in the immaterial section of the soul.

    If you're interested in reading more, Aquinas wrote a pretty beefy section on the passions in ST IIa.

  40. @rank sophist: Thanks a lot for the info. I'll read that bit of the Summa when I've the time.

    Once again, thanks.

  41. Mr. Green - Very good points. I'll just add a few comments.

    1. Why should we assume H2O to be a substance in and of itself? We can explain how H2O works in a totally reducible way from the characteristics of its chemical structure. Just as there is no such thing as a digital camera in and of itself, only a collection of individual photodiodes that we "perceive" to be a unitary object, why believe there is anything such as water in and of itself? Why is water a substance and not a collection of aggregates? Water may be no more "real" as a substance than a bicycle is; it's just much simpler.

    2. I totally agree that you can define a machine as a "whole" only insofar as it's something we perceive as a "unit" or "thing", but it is not, in itself, a true substance. Your bicycle example works fine here. I think we get to a weirder place with complex causal networks in an AI whose behavior as a whole cannot be reduced to parts IN PRINCIPLE. I am talking about a human creation made of silicon with an INTRINSIC causal unity. Further, I believe that the conceptual breakthrough over what kind of networks these would be has been made and proven mathematically. Essentially then, what this machine would be in A-T parlance is a substance, and not a "machine" at all. It doesn't change the fact that, given the common usage in culture of what "machine" means, calling it an intelligent or conscious machine would make sense to most of the philosophically unsophisticated who don't care to split such hairs. A conscious computer will still be a kind of a "machine" to most people; it's just that the boundaries of what that word means to them will radically change.

    3. You're quite right; there is no such thing as "artificial intelligence," only REAL intelligence. The question really is whether the laws of nature allow for silicon to be organized in such a way that it can become a true substance and acquire real intelligence. I quite dread the social and existential confusion that could result from the creation of a perfect simulation of intelligence but there being "nothing under the hood" as it were. A zombie computer.

  42. The notion of qualia makes no sense at all. It is (the claim goes) a phenomenon whose existence cannot be verified except in my own case. Yet it makes absolutely no difference to anything that I do. And yet it is (we are assured) the fundamental difference between conscious and non-conscious entities.

    But since the presence of qualia cannot be determined by any behavioural or structural analysis, what gives us the right to say that stones lack it, let alone computers?

  43. "original joke-writing (and telling), novel philosophical argumentation, writing a novel that probes the human condition in a revealing way"

    An AI would be genuinely "intelligent" if it could find the original joke funny, understand novel philosophical argumentation (both its novelty and its philosophical and argumentative quality), and not write a novel, but read one, recognizing that it probes the human condition in a revealing way.

    As Scott put it, just because we can create a joke-writing machine that can make us laugh, doesn't mean we've created a joke writer.

  44. Matt,

    why would the human silicon creation in question have intrinsic causal unity any more than the bicycle? Also, as I understand it, what separates the water from the bicycle is that none of the bicycle parts has any tendency to end up bicycle-wise. Water, on the other hand, just naturally gets itself together. I think that this also goes for the AI: no matter how 'intrinsically' it functions as a whole 'in principle', it's still just an aggregate. My car has intrinsic causal unity insofar as there is an irreducible relation between the parts at a certain point in order for it to be a car and not, say, a fancy piece of furniture.

  45. The notion of qualia makes no sense at all. It is (the claim goes) a phenomenon whose existence cannot be verified except in my own case. Yet it makes absolutely no difference to anything that I do. And yet it is (we are assured) the fundamental difference between conscious and non-conscious entities.

    But since the presence of qualia cannot be determined by any behavioural or structural analysis, what gives us the right to say that stones lack it, let alone computers?


    It does not seem correct to say that qualia "make absolutely no difference to anything that I do." Take the sensation of pain; it obviously affects the way I behave. As does the taste of a good meal.

    The resulting behaviors (my avoiding painful stimuli or opting for a tasty meal) might be realizable without qualia (say, in a robot that responds to pressure that would be painful to a human), but it doesn't follow that there is no difference between the robot and me.

    One is free to say that stones and computers might have qualia. All I will say is that it seems very implausible.

  46. It also doesn't seem correct to say that "[t]he notion of qualia makes no sense at all." Nobody else has to be able to "verify" my sensation of (say) this precise shade of red in order for the "notion" that I'm experiencing it to "make sense." Even if it didn't make any difference in anything I did (which of course it does), it would still "make sense."

  47. @Matt Sheean:

    "why would the human silicon creation in question have intrinsic causal unity any more than the bicycle?"

    I don't think it would, but I think the main point is that this is an empirical question, not a metaphysical one. If it did turn out (very much to my surprise) that arranging silicon in a certain way did naturally generate an intellectual substance in somewhat the same way as human conception does so, nothing in A-T metaphysics would have to change in order to account for that.

  48. Scott,

    I agree, and I don't think I should have asked that question at all, now that you point it out. I was more concerned with the difference between water and bicycles, per Matt's first point. If we just focus on the fact that water and bicycles both have parts, then we can say, "they both have parts, so they're both aggregates." Water, though, is something toward which its parts naturally tend, whereas the parts of bikes are assembled for the sake of something that humans have in mind, such as getting from one place to another faster, or exercising, or something else. We can't talk about bikes qua bikes as chains or handlebars, or water qua water as H's and O's, but that is not where the A-T thinker is drawing the distinction between them. So, as you say, the A-T position hasn't been touched yet.

  49. I should say that Matt's point about water is what pushes me in the direction of the A-T view, as my intuition is, "well of course there is something different between myself and a bicycle beyond just the complexity of parts and functions." The realism of the A-T view with respect to forms and such provides the sort of framework one needs to distinguish between water and bicycles.

  50. An analogy that might help explain the difference between true substances and mere aggregates is one I draw from programming.

    In C++, at least, there are certain data types that are “built into the language” so to speak (examples of these would be int, float, char, etc…). These data types are such that practically every program will use them at some point and they are recognized globally.

    Members of these data types are more like what the Aristotelian means by substance; they are “out there” to be discovered, and don’t get defined onto something by us.

    Then there are other data types which are defined by the programmer for the purposes of a specific program they create (examples of these would be struct, union, etc…). These are made up of a number of other data types, even other structures, and usually are useful only for the specific program they are created for. They also look kind of “unnatural” in the sense that they are really just a roundup of the “built-in” data types to simplify the code (for instance, you might make a struct data type, call it EmployeeInfo, that stores the name, salary, and job title of an employee for a company’s database).

    Members of these data types are more like what the Aristotelian means by aggregate; the programmer makes them up to serve some specific function for their own purposes in a specific program.
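    A minimal sketch of the contrast in C++ (the EmployeeInfo fields follow the example above; the sample values are made up):

        #include <string>

        // Programmer-defined aggregate: a roundup of built-in types, meaningful
        // only relative to the purposes of one specific program.
        struct EmployeeInfo {
            std::string name;
            double salary;
            std::string jobTitle;
        };

        int main() {
            int count = 42;                             // built-in type, recognized globally
            EmployeeInfo e{"Alice", 50000.0, "Clerk"};  // an aggregate we made up ourselves
            return 0;
        }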

    Off course, this is not a perfect analogy, but I think it might help illustrate the basic idea (at least to all you programmers out there).

  51. And of course, I meant to say "Of course", not "Off course", in my last post...

  52. Off course, this is not a perfect analogy, but I think it might help illustrate the basic idea (at least to all you programmers out there).

    And to go further off course (not that you did; :-)), in a segmented memory architecture, that word whose LSB is at address 05F5:1E43 is what it is -- regardless of whether the lexical pointer is, say, God_Is_Being_Itself, GodIsABeing, No_Idea_How_Best_To_Name_This_Word, FileHandle01, total_walks, zigamoo, etc.
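    The same point in a minimal C++ sketch (borrowing two of the arbitrary names above):

        int main() {
            int total_walks = 100000000;  // the word in memory is what it is...
            int& zigamoo = total_walks;   // ...whichever identifier happens to name it
            return (zigamoo == total_walks) ? 0 : 1;  // two names, one and the same word
        }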

  53. Following up on what Scott has said, in some sense nothing prevents humans from creating rational animals artificially in a lab. Of course, given the AT's other commitments about what is entailed in being rational, God would have to infuse this substance with its form, and thus with its rationality. But nothing stops scientists who are clever enough from coming up with a process of manipulating various natural substances in such a way that the result ultimately becomes a rational animal.**

    In some sense, we do this all the time; every time a couple has a baby in fact.

    Whether or not this can be done with silicon is an entirely different matter, and as Scott has said, a largely empirical one.

    Silicon has always been the subject of much idle sci-fi speculation about alien life due to its ability to easily form complicated bonds, an ability it shares with carbon. This might lend support to the thought that silicon might be able to be formed into life. OTOH silicon might not be so well suited for becoming a living substance, so this is a matter that can only be settled by further investigation.

    ** (As a sub-point, it might be wondered if there could be a rational substance that was not an animal but yet still existed physically. Given what is entailed by what the AT means by “animal”, it seems doubtful that this could be the case; it seems to violate the scholastic “Principle of Finality” among other things. But this is a much larger topic than can be dealt with here)

  54. And Glenn...

    There's a reason why I've only taken one programming course... :-)

  55. Timotheos,

    As far as I'm concerned, your analogy succeeds at hinting at if not also conveying the general idea. I was just having some good-natured fun with, let us say, 255 having been accidentally written in place of 15, and sought to employ the mention of programming constructs as a segue to something else on another thread.

  56. Glenn

    And I was jabbing back at your jest by hinting at the fact that you programming types can have a strange sense of humor. (I should have used the phrase “programming class” instead of “programming course”; I think I might have set you off course…)

  57. you programming types can have a strange sense of humor

    Indubitably.

    But think of the valuable service that we programming types perform: we provide non-programming types with extra occasions to experience a sense of gratitude. ("But for the grace of God, there go I...")

  58. Two things.

    Dr. Feser, you wrote,

    The intentionality of mental states is like the meaning that written or spoken words have, except that where their meaning is derivative -- there is nothing in the physical properties of ink marks or sound waves that gives them any meaning, so that meaning must be conventionally assigned to the marks or sounds by language users -- the intentionality of mental states is somehow “built in.”

    Different Thoughts = Different “Mental States”?

    Is thinking about or contemplating some thing really a “mental state”, in the sense that thinking about X then thinking about 7 is to go from one “mental state” to a different one?

    Is the Physical World “Meaningless”?

    Furthermore, is it true that “there is nothing in the physical properties of ink marks or sound waves that gives them any meaning” and that the only “meaning” these things possess is “conventionally assigned” to them by us? Would this not result, necessarily, in the physical world being intrinsically meaningless? That seems rather harsh.

    Moreover, if that is true, and all physical things are meaningless, then what makes some physical things more useful or appropriate than others for conventionally assigning meaning to (e.g., written words (markings) or manipulated sounds)? Is the physical world qua physical all equally and necessarily meaningless? That would seem to threaten the Christian belief in the physical world’s intrinsic goodness and reasonableness.

    In the sense that physical things have explanations (causes) they also have, although perhaps in a sense derivatively, meaning.

    For example: Consider the adage that “where there’s smoke, there’s fire.” Smoke ordinarily indicates – in a sense, “means” – fire; likewise, random scratch marks on a tree might indicate clawing by a large, possibly carnivorous, animal (say perhaps a bear).

    We can extend this logic further. A healthy tree somewhere normally indicates an environment congenial to life, or at least to producing or maintaining the tree (no doubt, though, a tree could be artificially placed or planted somewhere that doesn’t provide the nutrition necessary to sustain it). Thunder normally indicates lightning; and when we see lightning, we suspect that there was thunder, even if perhaps we fail to hear it. But by now the point should be clear, as should what I mean by the “reasonableness” of the world being written intrinsically in the world or in nature. It would seem that at some point the world or nature must stop “deriving” an externally derived and imputed meaning or reason and just possess it; and this not merely in a purely subjective or arbitrary sense, but objectively and of itself.

    That being said, I found your article, as ever, interesting and deeply informative. I hope in the next few weeks to pre-order your upcoming work, Scholastic Metaphysics, and look forward to it with pleasure.

    Best regards and a merry Christmas to you and yours,

    William M. Dunkirk

  59. @William Dunkirk:

    "In the sense that physical things have explanations (causes) they also have, although perhaps in a sense derivatively, meaning."

    The word "derivatively" here concedes the very point you're arguing against. Certainly physical things have "meaning" in a derivative sense; the point is that it is derivative, and dependent on the existence of intellectual substances who can "mean" things in a nonderivative sense.

    To borrow one of your examples, when we say that claw marks on a tree "mean" a bear, we mean not just that the marks were caused by the bear but also that someone could reasonably infer the one from the other. The imputation of meaning to the physical marks is derived from and dependent on the existence of beings who can mean.

    "It would seem that at some point the world or nature must stop 'deriving' an externally derived and imputed meaning or reason and just possess it; and this not merely in a purely subjective or arbitrary sense, but objectively and of itself."

    And so it does—in intellectual substances. When a human being "means" something, we literally have the thing's form in our intellect; nothing else is required for "meaning," as that's just what "meaning" is in its fundamental, nonderivative sense.

  60. Once we produce machines whose appearance and behavior are indistinguishable from those of humans, there will be no justification for denying that machines are conscious and persons, because such appearance and behavior are the only evidence we have for saying that a human has consciousness and is a person.

  61. @ Scott,

    You wrote,

    "To borrow one of your examples, when we say that claw marks on a tree "mean" a bear, we mean not just that the marks were caused by the bear but also that someone could reasonably infer the one from the other. The imputation of meaning to the physical marks is derived from and dependent on the existence of beings who can mean."

    This misses my point. It's not an arbitrary assignment on my part that reads certain scratch marks on a tree as quite possibly meaning "a bear was here". I certainly don't determine that. Surely I can in a sense be said to impute it to the markings as a meaning; my point is that there is always an absolute meaning to or for any physical thing/event or phenomenon.

  62. @William Maximilien Dunkirk:

    "This misses my point."

    On the contrary, I think your reply misses mine. I certainly agree that the causal connection between the claw marks and the bear is objective and non-arbitrary; my point is that this causal relationship is not alone sufficient for a non-derivative sense of meaning. Claw marks that have been caused by a bear don't for that reason mean a bear except to minds capable of drawing the relevant inference (and of "meaning" in a non-derivative sense).

  63. @ Scott

    But if a bear caused the scratch marks, that means a bear was there.

    Now the tree can't know that, and certainly a rock couldn't, and presumably no other animal besides a man could correctly draw that conclusion -- but that's the benefit of being a man. I would say that physical phenomena qua physical have more "meaning" than something like the English alphabetical letter 'H'. Now that physical meaning is of course limited and finite, but it's always there notwithstanding. I think only in a world after the thinking of someone like Hume can you imagine that physical phenomena don't have any meaning qua physical and only have meaning by our attributing meaning to them.

    Claw marks caused by a bear do mean a bear or, at least, that a bear was here. We aren't imputing meaning here we are discovering it.

    Science is in large part the effort to correctly interpret the meaning of physical phenomena and identify and determine their causes as exactly and accurately as possible.

    Science is about discovering in part what caused or causes physical phenomena. If the answer itself requires an explanation or cause, then we proceed onward. If not, then we are satisfied.

  64. Hello everyone. The post is very informative and interesting as usual, but I have some difficulty with this:

    "It is to abstract out from a whole certain parts which are metaphysically less fundamental than the whole is."

    What bothers me is the following. In the discussion with Dale Tuggy regarding God's simplicity, an argument was given to the effect that "composites are metaphysically less fundamental than whatever principle accounts for their composition." I understood this to mean that parts are necessarily more metaphysically fundamental than the whole, and it made perfect sense to me then as an argument for God's simplicity. But now it seems Ed claims otherwise in the case of material substances.

    Can somebody explain to me this seeming contradiction? Thanks!

  65. Hello t,

    A composite points to a composer; anything composite requires one. In that sense a composer is required as the principle for the composition, or for the composite being.

    Hence the composite is less fundamental than the composer (the principle) necessary for its actual composition: if there's no composer, then no composite is possible.

    Hope that helps.

  66. @William Dunkirk:

    "But if a bear caused the scratch marks, that means a bear was there."

    Only in the derived sense that I've already acknowledged.

    Again, you don't need to persuade me that there's a real, non-arbitrary causal relationship between the marks and the bear that made them. Of course there is. My point is that to call this relationship meaning is to use the word in what you've already acknowledged is a derivative sense.[*]

    The basic, underived sense of "meaning" involves the intent of an intellect. To say the marks mean that a bear was there is to rely on this logically prior sense of "mean." Apart from at least the possibility of minds that can grasp causal relations, effects may imply or entail their causes, but they don't "mean" them.

    (This point has nothing whatsoever to do with "imputing." I'm very obviously not saying the causal relation is somehow real only when we "impute" something or other to this or that. I'm simply pointing out that when we say claw marks "mean" a bear, we're actually saying that they "mean" this to a mind.)

    ----

    [*] "In the sense that physical things have explanations (causes) they also have, although perhaps in a sense derivatively, meaning."

  67. William, thanks!

    Sure, anything composite requires a composer. But I would say it also requires the parts it is composed of. Isn't that so? I mean, in order to have a whole, you first must have two categories of things: (i) the parts themselves, and (ii) a principle that brings and keeps them together -- a composer. Or, in other words: you can have parts without having a whole (imagine just a heap of unordered parts), but you cannot have a whole without having parts. In that sense it seems obvious to me that parts are metaphysically more fundamental than the composite.

    But then, how can it be true that "parts which are metaphysically less fundamental than the whole is", as Ed says? Perhaps the key is in the fact that Ed speaks of parts abstracted from the whole, while my analysis is concerned with real, non-abstract parts. Just guessing... But I still don't see clearly what difference that makes.

  68. Matt Sheean -
    You ask: "why would the human silicon creation in question have intrinsic causal unity any more than the bicycle?" It has to do with the irreducibility of the system's behavior and its causal autonomy. The behavior of a bicycle is totally reducible to the function of its parts. It has no causal autonomy. (A bicycle doesn't ride itself.) It exists only in our minds as a "whole object" we've created; it doesn't have an independent metaphysical existence in and of itself. My claim is that you can organize a system with logic gates in a particular way such that the behavior of the system is irreducible to any behavior of its parts. To see how the system would respond to a stimulus you'd have to actually run the experiment and see what it "chooses" to do. To know how the system would respond to any possible situation you'd have to perturb the system in all possible ways. The system would be a unity and "free." It would also be (so the theory goes) conscious. This is all from the analysis posited by the Integrated Information Theory of Giulio Tononi, but I think it's deeply philosophically astute.
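
    Not Tononi's actual formalism, of course, but here's a toy C sketch (the gate wiring is invented just for illustration) of the kind of recurrent gate network I mean -- each gate's next state depends on the joint state of all the gates, so the only way to see what the system "does" is to run it:

        #include <stdio.h>

        /* Three gates wired in a feedback loop: each gate's next state
           depends on the current states of the others, so the trajectory
           belongs to the system as a whole, not to any one gate. */
        typedef struct { int a, b, c; } State;

        static State step(State s) {
            State next;
            next.a = s.b & s.c;     /* AND gate fed by b and c  */
            next.b = s.a | s.c;     /* OR gate fed by a and c   */
            next.c = !(s.a & s.b);  /* NAND gate fed by a and b */
            return next;
        }

        int main(void) {
            State s = { 1, 0, 1 };  /* one arbitrary initial "stimulus" */
            for (int t = 0; t < 8; t++) {
                printf("t=%d: %d%d%d\n", t, s.a, s.b, s.c);
                s = step(s);        /* to see what it does, you must run it */
            }
            return 0;
        }

    This little loop obviously isn't conscious and computes nothing like Tononi's phi; it just illustrates the "perturb it and see what it does" point.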
