Sunday, March 17, 2013

Ferguson on Nagel


In the cover story of the current issue of The Weekly Standard, Andrew Ferguson reviews the controversy generated by Thomas Nagel’s Mind and Cosmos.  Along the way, he kindly makes reference to what he calls my “dazzling six-part tour de force rebutting Nagel’s critics.”  For interested readers coming over from The Weekly Standard, here are some links to the articles to which Ferguson is referring, with brief descriptions of their contents.

First there was my review of Nagel’s book for First Things, wherein I described the respects in which Nagel’s position constitutes a return to something like the Aristotelian understanding of the natural world that the early modern philosophers thought they had overthrown for good.

Then here on the blog I began a series of posts on “Nagel and his critics,” in which I respond to some of the naturalist philosophers Ferguson refers to in his piece:

Part I: Here I present some criticisms of my own, noting how Nagel has needlessly opened himself up to certain objections and other respects in which his book could have been stronger.

Part II: Here I respond to the objections raised fairly aggressively by naturalist philosophers Brian Leiter and Michael Weisberg in their review of Nagel in The Nation.  I argue that Leiter and Weisberg misinterpret Nagel, beg the question against him, and in other ways utterly fail to justify their dismissive approach to the book.

Part III: This post addresses the more measured response to Nagel presented by Elliott Sober in his review in the Boston Review.

Part IV: Here I comment on Alva Noë, who responded to Nagel at his NPR blog and who is, among Nagel’s naturalist critics, perhaps the most perceptive and certainly the least hostile.  (In a follow-up post I commented on some later remarks made by Noë on the subject of Nagel and the origin of life.)

Part V: This post responds to the very hostile remarks about Nagel made by John Dupré in Notre Dame Philosophical Reviews.  I argue that, like Leiter and Weisberg, Dupré has simply missed the point and failed to address Nagel’s position at the deepest level.

Part VI: Here I respond to the serious and measured criticisms of Nagel raised by Eric Schliesser at the New APPS blog.  (In a follow-up post I comment on Schliesser’s remarks about Alvin Plantinga’s “Evolutionary Argument Against Naturalism,” which Nagel cites approvingly.)

Ferguson makes reference also to the views of naturalists Alex Rosenberg, Daniel Dennett, and Richard Dawkins.  I have criticized Rosenberg’s book The Atheist’s Guide to Reality in detail in another series of posts.  And I respond to Dennett and Dawkins at length in my book The Last Superstition: A Refutation of the New Atheism. 

134 comments:

  1. It's important to go through the links in this post before going through the articles regarding Nagel:

    http://edwardfeser.blogspot.com/2011/05/mind-body-problem-roundup.html

    ReplyDelete
  2. Determinism is, to me, such a curious belief system; if it's right, it doesn't matter.

    ReplyDelete
  3. That was a very interesting article by Ferguson. I think anyone who wants to go into science or philosophy should read that article so they know what they're getting themselves into.

    also the captcha image says "616 godiesf" which looks like "616 go die" aaaaaaaaa

    ReplyDelete
  4. Dr. Feser,

    Seeing your pieces arrayed like this makes me realize the sweep of what you have been providing us: a one-by-one rebuttal of naturalist arguments as they appear on the scene. I look forward to continuing to get your perceptive take on the continual reappearances of naturalist arguments. Already it's quite an impressive set of rebuttals you've built up and published.

    ReplyDelete
  5. From the cover story:

    The most famous, most succinct, and most pitiless summary of the manifest image's fraudulence was written nearly 20 years ago by the geneticist Francis Crick: "'You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons."

    This somewhat inaccurate quotation is from Crick's introduction to his The Astonishing Hypothesis: The Scientific Search for the Soul.

    The accurate quotation is,

    "The Astonishing Hypothesis is that "'You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: 'You're nothing but a pack of neurons.'" (p. 3)

    Hmm. Why drag Alice into it?

    Well, let's see:

    1. Immediately above the text of the introduction proper is:

    Q: What is the soul?
    A: The soul is a living being without a body, having reason and free will.
    -- Roman Catholic catechism


    2. The paraphrase of Alice is (i.e., seems to be) based on what she (is alleged to have) said in Chapter XII, Alice's Evidence: "Who cares for you?" said Alice, (she had grown to her full size by this time.) "You're nothing but a pack of cards!"

    Mr. Crick, apparently, saw science as having grown to its full size by that time, and so as having the justifiable confidence, hubris or moxie to thumb its nose dismissively.

    But, of course, he wanted science to do more than merely thumb its nose dismissively.

    3. Later, in his second chapter (The General Nature of Consciousness), Crick wrote, "How can we approach consciousness in a scientific manner? Consciousness takes many forms, but as I have already explained, for an initial scientific attack it usually pays to concentrate on the form that appears easiest to study." (p. 21)

    (Which form, Crick subsequently explains, he and Christof Koch [both of whom, it may be recalled, had earlier been hypothesized to not exist] had decided was visual awareness.)

    ReplyDelete
  6. Correction: Technically, neither Crick nor Christof Koch had been hypothesized to not exist, only, rather, to be "no more than the behavior of a vast assembly of nerve cells and their associated molecules."

    ReplyDelete
  7. In fairness to Crick, I should point out that in the last chapter of his book, Dr. Crick's Sunday Morning Service, he does write,

    How it will all turn out remains to be seen. The Astonishing Hypothesis may be proved correct. Alternatively, some view closer to the religious one may become more plausible. There is always a third possibility: that the facts support a new, alternative way of looking at the mind-body problem that is significantly different from the rather crude materialistic view many neuroscientists hold today and also from the religious view.

    Different from the rather crude materialistic view many neuroscientists hold today?

    Oh my. One can imagine Dennett yet again sighing and looking at the table.

    ReplyDelete
  8. "your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: 'You're nothing but a pack of neurons.'"

    The usual category error. Materialists inevitably confuse intrinsic intentionality with derived intentionality. They think that the behavior of a mind actively cognizing an object is identical to that of a machine passively representing an object.

    ReplyDelete
  9. @Glenn
    >In fairness to Crick ...
    There is always a third possibility: that the facts support a new, alternative way of looking at the mind-body problem that is significantly different from the rather crude materialistic view many neuroscientists hold today and also from the religious view.

    Thanks, Glenn, I'm a big fan of third possibilities (and objective re-examinations of functioning theories). I'm starting by following Anonymous' suggestion to read Feser's roundup. Looks very thorough so far. Thanks, Prof Feser. I also agree with ingx24. I would add religion to his/her list. Ciao.

    ReplyDelete
  10. Gawd, they had the whole coven there spouting and fuming.

    That said, Ferguson's exposition is particularly lucid. Which either shows that he has a really good grasp of what has been heretofore a relatively submerged and academically restricted movement, or that it really hasn't been all that hidden from view; except from those of us who have not been associated with academia during the last, say, 20 post-Crickean pronunciamiento years.


    This following was a particularly nice summation of the average take on the intellectual "problem":

    "Materialism, then, is fine as far as it goes. It just doesn’t go as far as materialists want it to. It is a premise of science, not a finding."


    But, for some minority of us, possibly including Feser himself on one level, the problem has a political dimension as significant:

    "Daniel Dennett ... [asserted that] (w)hile it is true that materialism tells us a human being is nothing more than a “moist robot”... we run a risk when we let this cat, or robot, out of the bag. If we repeatedly tell folks that their sense of free will or belief in objective morality is essentially an illusion, such knowledge has the potential to undermine civilization itself, Dennett believes. Civil order requires ..."

    Yeah, and speaking only for myself here: "civil order" as they conceive of it, i.e., pointless self-sacrifice in order to pointlessly benefit annoying, pointless, and not necessarily necessary others, requires that only the philosopher kings be let in on the "real truth" that there is no real truth.


    Then there is this nice general sub-issue which has been seen and remarked upon here at Feser's blog spot more than once: "Nagel, say Leiter and Weisberg, overestimates the importance of materialism, even as a scientific method. He’s attacking a straw man. He writes as though “reductive materialism really were driving the scientific community.” In truth, they say, most scientists reject theoretical reductionism. Fifty years ago, many philosophers and scientists might have believed that all the sciences were ultimately reducible to physics, but modern science doesn’t work that way."


    You are attacking yesterday's science popularizers! No reputable scientist holds to those views nowadays! We've read those older criticisms. We are aware of all the forensic pitfalls, and we will not be tied down to our own premises or even definitions! What do you think we are, simpering idiots? P.Z. Myers?

    But, we still get to tell you what to do. We're tenured, after all.


    Ferguson also writes:


    "You can sympathize with Leiter and Weisberg for fudging on materialism. As a philosophy of everything it is an undeniable drag. As a way of life it would be even worse. Fortunately, materialism is never translated into life as it’s lived. As colleagues and friends, husbands and mothers, wives and fathers, sons and daughters, materialists never put their money where their mouth is."


    Now there I think Ferguson is dead wrong in principle, and for that matter strategically. "They," whatever it is that they supposedly are once the proper materialist reduction has been performed on them, should be encouraged with all the social pressure which can be brought to bear, to live their principles out. Otherwise they are themselves (Is there really a them there?) being allowed to live in a hypocritical illusion, and being granted - Gaia only knows why - the expectation that others will offer consideration to the manifest image illusion that they both are, and are living.

    And why would anyone, not them, want to give them that?

    ReplyDelete
  11. Here's a discussion that might interest some of you:

    Darwinism & Final Causation I

    http://occamsrazormag.wordpress.com/2013/03/17/darwinism-final-causation-i/


    ..

    ReplyDelete
    Efficient causality is unintelligible without final causality, as it has been argued on this blog. Evolution is tough to describe without final causality, and so is intentionality. Actually, intentionality is not "tough," but rather "insurmountable."

    ReplyDelete
  13. This is called the error of nothing-buttery (and some of the materialist writers quoted are quite as guilty as anyone else).

    Take Crick’s statement: "The Astonishing Hypothesis is that "'You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: 'You're nothing but a pack of neurons.'" (p. 3)

    Despite the “in fact”, this is just wrong. Delete the phrase “no more than”, and it becomes true – you are an assembly of nerve cells and molecules. Among other things.

    I don’t quite get why this is so hard to get. Your car is an arrangement of molecules obeying the laws of physics, and it is also a system of functional components that work together to produce locomotion. It might be hard to keep both of these frameworks of understanding in your head at the same time, but nobody would question that they are both valid descriptions of a car. So why is there such resistance to this when it comes to human beings?

    ReplyDelete
  14. Anonymous said...

    "Here's a discussion that might interest some of you:

    Darwinism & Final Causation I

    http://occamsrazormag.wordpress.com/2013/03/17/darwinism-final-causation-i/ "


    Following the link, I encounter an essay that includes the following:


    "The problem, as Auster points out, over and over again, is that Darwinists constantly help themselves to teleological language – i.e., the language of final causation – while denying that teleology is real. They insist that this language can be “cashed out” in mechanistic terms – but, every time they attempt to do so, they end up using…well, a great big bunch of teleological language in the course of their attempted explanations.

    Would it surprise you to learn that I, unrepentant Darwinist that I am, think that he’s got a point? A very, very serious point, which you can find expressed with more philosophical sophistication and less rhetorical overkill in the writings of Edward Feser?"


    Coming from something of the same mindset as the author of that passage, I can't say whether it's Auster's main problem, nor how much of a problem it is for Professor Feser, but from my point of view it's one of the biggest, most cowardly, and most annoyingly self-indulgent hypocrisies of the militant materialist and collectivist-minded Darwinist.

    In fact, I do think that if they were willing, or bold enough to live with the consequences, much, probably not all, of what they intend to say could be self-consistently said without teleological language.


    The problem is that these clowns don't want to face the redounding existential and social consequences of what such acidic language would portend for themselves - for their own status as "fellows", for being presumptively granted a place within the "sacred circle" of the law's protection.

    But why, on their own terms, should they expect any such thing?

    If "they" - whatever it is that they are before the reduction - are right, they are in fact nothing; and their own joys, and pains, and potential sufferings, objectively meaningless. All bonds of obligation and respect, equally illusory. Why not then, say so?

    Why don't we see them saying something along the lines of, "I Alex Rosenberg, am at base, nothing; my existence intrinsically and ultimately meaningless, and my life is protected only by your delusion that it matters."?

    Hiding behind "evolutionary habit", is such a cowardly and contemptible place for these bold "intellectuals" to seek refuge.


    But then "cowardice" and moral contemptibility are for them ultimately as meaningless terms as "courage" or "personal honor".

    Not relevant to they way of being.

    But then what's objectively left of their existence? It seems, their status as a locus, more or less, of appetite. And not even a locus with any real continuity. More, as I earlier mentioned, a kind of congeries of appetition than a real being.

    No wonder then that in more naive ages, people confronted with such an unvarnished phenomenon, believed it to be devils.

    ReplyDelete

  15. Read,
    "Not relevant to they ..."

    as,

    "Not relevant to their ..."

    ReplyDelete
  16. "So why is there such resistance to this when it comes to human beings?"

    Intentionality, for one. A car doesn't have that explanatory stumbling block.

    ReplyDelete
  17. @Anon
    "I don’t quite get why this is so hard to get. Your car is an arrangement of molecules obeying the laws of physics, and it is also a system of functional components that work together to produce locomotion. It might be hard to keep both of these frameworks of understanding in your head at the same time, but nobody would question that they are both valid descriptions of a car. So why is there such resistance to this when it comes to human beings? "

    -The behavior of all computing machines can be reduced to Turing Machines without remainder.

    -The behavior of all physical mechanisms can be reduced to Turing Machines without remainder.

    -Turing Machines are composed of four simple components, none of which alone or in combination is capable of exhibiting or generating either qualia or intentionality.

    -Minds exhibit qualia and intentionality.

    - Therefore minds are more than just computing machines or indeed physical systems of any kind.

    ReplyDelete
  18. @DNW:

    "Why don't we see them saying something along the lines of, "I Alex Rosenberg, am at base, nothing; my existence intrinsically and ultimately meaningless, and my life is protected only by your delusion that it matters."?"

    Because in good Orwellian fashion what they are *really* saying is that there are some illusions that are more illusory than others.

    ReplyDelete
  19. It's not just humans that cause "explanatory friction." It all boils down to our conception of matter. The way it is now, anything that exhibits consciousness, intrinsic intentionality, determinate thought, etc, is a problem for physicalism, regardless of whether it is a human, a computer or a rock.

    ReplyDelete
  20. @ Anon
    According to Searle and Scruton, computers cannot exhibit intrinsic intentionality. Any intentionality that they appear to exhibit about their inputs and outputs is derived from the minds of the users.

    I'm sure physicalists would dearly love this to be disproved.

    ReplyDelete
  21. I, too, followed Anon's and DNW's link to the Darwin & Final Cause post. I will RP my comment there to here, cuz I think the issue fits my third possibility type argument. No one should be afraid of science, or that science will somehow manage to eliminate God, which is quite a different thing than actually proving anything.

    RP: > Excellent and timely subject. But not all philosophers of science argue against a direction in causal sequences. Some, like Ernst Mayr, tried to get the scientific community to distinguish between the sciences and between the various grounds for those directions. See ‘Teleological and Teleonomic: A New Analysis’ (Mayr, 1974):

    http://evolution.freehostia.com/wp-content/uploads/2007/07/mayr_1974_teleological_and_teleonomic.rtf

    Also see his book, Toward a New Philosophy of Biology (1988), where he elaborates further in an effort to distinguish between teleological and teleonomical. Hope this helps.

    ReplyDelete
  22. Previous comment is mine - no idea why it didn't reflect my name -sorry

    ReplyDelete
  23. According to Searle and Scruton, computers cannot exhibit intrinsic intentionality.

    And we are supposed to take them as authoritative for some reason?

    Searle's Chinese Room argument was completely dismantled by Dennett and Hofstadter and nobody who knows anything takes it seriously; I think the complete takedown is in their anthology The Mind's I.

    ReplyDelete
    Searle's Chinese Room argument was completely dismantled by Dennett and Hofstadter and nobody who knows anything takes it seriously; I think the complete takedown is in their anthology The Mind's I.

    Dennett is a has-been who got schooled by David Chalmers of all people on consciousness generally, and their attempted criticisms of Searle have been laughable at best. What's more, they know it, which is why they always circle around back to an Alex Rosenberg style "look, if you don't agree with us, you're going to end up taking a position which opens the door to all kinds of scary things like teleology and anti-naturalism!" response once the flaws in their reasoning are inevitably exposed.

    See The Last Superstition by Ed Feser himself to see Dennett made into mincemeat, not to mention various assorted posts on this very site.

    ReplyDelete
  25. The sequence of mechanistic causality breaks down just before the experience of qualia. We can mechanistically follow the causality of the experience of pain right from the hammer striking the thumb, through the nervous information transfer system up to the coordinated behavior of groups of neurones in the brain.

    But then the Turing Machine-compatible sequence of causal events ceases and we can no longer follow what is happening.

    We know what happens is causal and regular, because we always experience pain when we hammer our thumb, but a different type of regular causality is occurring in the final stage, which is not Turing Machine-compatible.

    ReplyDelete

  26. And we are supposed to take them as authoritative for some reason?

    Searle's Chinese Room argument was completely dismantled by Dennett and Hofstadter and nobody who knows anything takes it seriously; I think the complete takedown is in their anthology The Mind's I.


    According to Ed, Searle's best arguments against computationalism are given in his paper "Is the Brain a Digital Computer?" (available online) and in chapter 9 of his book The Rediscovery of the Mind, not in the Chinese Room Argument.

    ReplyDelete
  27. If you think a computer (or computer program) has no intrinsic intentionality, try playing chess against Microsoft Word.

    That occurred to me decades ago when I first read Searle's article on the Chinese Room Argument in Scientific American. I'd like people who disagree with it to tell me what they think is wrong with it.

    ReplyDelete
  28. @Jules -- thanks for providing a reference. Much better than the other anon who was just slinging insults.

    That said, I took a look at "Is the Brain a Digital Computer?" and it appears to be a rehash of essentially the same bad arguments as the Chinese Room. He is still nattering on about mysterious "causal powers" that humans have but computers lack. This is in part because he insists on focusing on Turing machines (a theoretical model of computation) rather than computers themselves, which are physical systems.

    Here's an excerpt, plucked more or less at random:


    What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena which an observer can interpret as symbols with a syntax and a semantics.

    But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately. To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me". But that is not what happens in the actual biology. In the biology a concrete and specific series of electro-chemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience.


    The annoying thing here is that Searle actually has a small point, but it's not the point he thinks it is. AI people have realized that critiques similar to the above have a tinge of validity, and have thus turned to building robots and other emboided systems. A robot is not a Turing machine, it is (perhaps) a Turing-like machine that is continuously interacting with the physical world through an array of sensors and effectors.

    ReplyDelete
  29. There are still people who conflate the argument from reason with the Chinese Room argument? Please.

    ReplyDelete
  30. This comment has been removed by the author.

    ReplyDelete
  31. Modern digital computers are intelligently designed.

    ReplyDelete
  32. AI people have realized that critiques similar to the above have a tinge of validity, and have thus turned to building robots and other emboided systems.

    "Having said that, my duty now as host is to turn the lectern over to our keynote speaker."

    (The keynote speaker steps up to the lectern, takes a sip of water, shoots his cuff to check the time, realizes he has but 30 seconds, and so immediately begins...)

    My fellow AIer's... There is one thing you all should know--and regarding which no doubt at all should be left in any of your minds--and that is this:

    With apologies to our host implied, I say it matters not one whit that critiques similar to the above have a tinge of validity to them, or that that tinge of validity has altered the direction of an entire field.

    No. Oh no. Oh no, no, no.

    What does matter, what you must see and what you must accept, is that, excepting that one small, tiny, teensy-weensy tinge of validity, critiques similar to the above are if not entirely invalid then either completely wrong or wholly irrelevant.

    We, of course, are on the right track.

    Now, my 30 seconds are almost gone. Fortunately, there are but two things I need to say in conclusion.

    First: I had previously said that that tinge of validity has altered the direction of an entire field. Lest any of our naysaying detractors mistakenly take that as some kind of Freudian slip, let me be perfectly clear--what was said was merely a manner of speaking.

    For the employment of unsophisticated terminology sometimes is a necessary evil when dealing with our naysaying detractors, and I'm afraid that that was one occasion when such a necessary evil was called for.

    The plain and simple truth is that we are not swayed by their faulty arguments.

    And the plain and simple truth... dare I say the mundane truth?... is that we tried one approach, found that it does not work, realized that it will not work, and so we as a field have adopted another tack with a different goal while retaining the same name.

    Second: To those of you whose familiarity with regional dialects may be somewhat scant, I point out that 'emboided systems' is Brooklynese for 'embodied systems'.

    Thank youse.

    ReplyDelete
  33. @Anonymous:

    "A robot is not a Turing machine, it is (perhaps) a Turing-like machine that is continuously interacting with the physical world through an array of sensors and effectors."

    A robot is a physical implementation of a Turing machine. The interactions with the world are encoded in the inputs / outputs.

    ReplyDelete
  34. @grodrigues -- a physical implementation of a Turing machine (which is not exactly what a robot is, in ways that matter to this kind of argument) is different from a Turing machine. The latter is a mathematical construct and thus cannot have "causal powers" (Searle's term), but the former can.

    ReplyDelete
  35. Any critique of AI that applies to Turing machines applies to all computational, mechanistic, physical and biophysical systems.

    A Turing Machine is not primarily a physical device (although physical demonstrations have been constructed). Its primary purpose is as a thought-experiment, or a precisely defined simple mathematical object, whose precision and simplicity produce a rigorous definition of the fundamental behavior of all mechanical devices and physical systems.

    A Turing machine consists of just two main components:
    (i) A tape of characters, which may be limited to just 1’s and 0’s.
    (ii) A table of actions, which instructs the machine what to do with each character.

    There are also two minor components:
    (iii) A read/write head, which simply transfers symbols from the tape to the table and vice versa,
    (iv) A register that holds the numeric identifier for the machine’s current state.

    The tape consists of a string of characters. These are sometimes imprecisely described as 'symbols', but this is rather confusing in that symbols often make reference to something beyond themselves (they exhibit 'derived intentionality' or evoke a qualitative state of mind.) It is important to remember that the characters on the tape carry no intrinsic meaning.

    The precise definition of the marks on the tape is that they are characters drawn from a defined alphabet, where the term ‘alphabet’ is used in the rather technical sense of a restricted set of characters, such as the 26 characters of the Latin alphabet, the 33 characters of the Russian alphabet, the four characters of the DNA alphabet, or the two characters of the binary alphabet. The size of the alphabet makes no difference to the capabilities of the Turing Machine, since all characters are capable of being encoded as binary.

    The table consists of five columns, with as many rows of instructions as are needed to do the job. The columns are:

    1 The row's machine state identifier to be tested against the actual machine state.

    2 The row's character to be tested against the current character as read from the tape.

    3 The identifier of the new state to which the machine will change.

    4 The new character to be written to the tape.

    5 An instruction to move the head one character right or left along the tape.

    The machine works by going down the table checking each row until it finds a row where the state identifier corresponds to the machine’s current state as held in the register, and the character corresponds to the character under the head.

    In accordance with the three remaining columns in that row, the machine then:

    (i) changes the state of the register

    (ii) writes a new character on the tape

    (iii) moves the head

    It then restarts the checking procedure from the top of the table.

    So it’s apparent why the Turing Machine isn’t a practical proposition for doing any useful tasks: the number of rows in the action table would become huge.

    Real computers condense the action table into a small set of instructions or ‘opcodes’. Nevertheless, the simple architecture of the Turing Machine can be mathematically proved to be completely functionally equivalent to any real-world computer.


    Not only can the Turing machine simulate any other kind of computer, it can simulate and predict the behaviour of any physical system, including any other type of machine.

    The tape corresponds to data structures, and the table corresponds to causal relationships, including formulae for physical and chemical laws.
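    The table-driven cycle described above (match state and character, then change state, write, and move) can be sketched in a few lines of Python. This is a minimal illustration only; the state names, alphabet, and the bit-flipping example program are invented for the purpose:

```python
# Minimal Turing machine runner. The five-column action table is represented
# as a dict mapping (current state, character under head) to
# (new state, character to write, head move) -- the lookup plays the role of
# scanning down the table for the matching row.

def run(table, tape, state, head=0, blank="_", halt="HALT", max_steps=10_000):
    cells = dict(enumerate(tape))          # the tape: position -> character
    for _ in range(max_steps):
        if state == halt:
            break
        state, cells[head], move = table[(state, cells.get(head, blank))]
        head += 1 if move == "R" else -1   # move the head one cell
    return "".join(cells[i] for i in sorted(cells))

# Example program: flip every bit, halting at the first blank.
flip = {
    ("s0", "0"): ("s0", "1", "R"),
    ("s0", "1"): ("s0", "0", "R"),
    ("s0", "_"): ("HALT", "_", "R"),
}
print(run(flip, "1010_", "s0"))  # prints 0101_
```

    Note that even this trivial task needs one table row per (state, character) pair, which is why the rows multiply so quickly for anything useful.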

    ReplyDelete
  36. @Anonymous:

    "a physical implementation of a Turing machine (which is not exactly what a robot is, in ways that matter to this kind of argument) is different from a Turing machine. The latter is a mathematical construct and thus cannot have "causal powers" (Searle's term), but the former can."

    You are wrong, in more than one way. But seanrobsville has already explained the nub of the problem, so I will not bother to repeat him.

    ReplyDelete
  37. Glenn's modus operandi:

    "If you are going to argue, you might as well make it entertaining."

    ReplyDelete
  38. @seanrobsville -- I would hope that anyone here already knows what a Turing machine is. Nothing wrong with re-explaining it, but your last post did not actually make an argument.

    Any critique of AI that applies to Turing machines applies to all computational, mechanistic, physical and biophysical systems.

    Then it applies to humans, who are biophysical systems.

    ReplyDelete
  39. I should also point out that nowhere within the Turing machine is there anything that has intrinsic intentionality, and even finding derived intentionality requires mental designation from outside the system.

    And nowhere within the Turing Machine is there any structure or data that is capable of holding a qualitative state. Everything is integer or boolean.

    ReplyDelete
  40. @Anon
    Then it applies to humans, who are biophysical systems.

    You missed out the 'nothing but'.

    Brains may be nothing but biophysical systems, but minds seem to be able to do non-mechanistic things, in the Turing sense. The brain does not provide a complete understanding of human experience.

    ReplyDelete
  41. @seanrobsville -- you are begging the question.

    I am not a nothing-butter. Humans are biophysical systems that are capable of intentionality and other mental feats. They are capable of doing this due to their physical organization, not because they have some kind of magic intentionality fairy dust hidden away inside.

    Thus, a system made out of silicon circuitry could in principle do the same, if it also had the requisite kind of organization. Figuring out what that organization is and replicating it is the task of cognitive science and AI. They have not succeeded yet, and maybe they never will, but there are not good arguments that prove that they can't succeed in principle. (There are many bad ones, like Searle's).

    ReplyDelete
  42. Feser's summary of Searle's argument might be helpful here:

    1. Computation involves symbol manipulation according to syntactical rules.

    2. But syntax and symbols are not definable in terms of the physics of a system.

    3. So computation is not intrinsic to the physics of a system, but assigned to it by an observer.

    4. So the brain cannot coherently be said to be intrinsically a digital computer.

    The moment you use "symbols" or "syntax" in your explanation, you have stepped out of the biophysical system.

    ReplyDelete
  43. I think Searle's point (in the second argument, at least) is not against AI per se; it is against the notion of using computers or AI to explain the human mind.

    ReplyDelete
  44. "Humans are biophysical systems that are capable of intentionality and other mental feats. They are capable of doing this due to their physical organization, not because they have some kind of magic intentionality fairy dust hidden away inside."

    1. Humans are biophysical systems.

    2. Humans possess intrinsic intentionality.

    3. Therefore...?

    Regardless of the conclusion, premise one begs the question, because the argument is precisely whether or not humans are just biophysical systems.

    ReplyDelete
  45. The return of fairy dust Anon, THE MOVIE

    ReplyDelete
  46. Humans are biophysical systems that are capable of intentionality and other mental feats. They are capable of doing this due to their physical organization, not because they have some kind of magic intentionality fairy dust hidden away inside.

    Let's put this in perspective.

    (1) My mind is entirely physical.
    (2) Meaning is non-physical.
    (3) Therefore, the proposition "my mind is entirely physical" has no meaning.

    Unless you plan to argue that meaning is physical (which would be quite funny), then you're out of luck.

    ReplyDelete
    "I don’t quite get why this is so hard to get. Your car is an arrangement of molecules obeying the laws of physics, and it is also a system of functional components that work together to produce locomotion. It might be hard to keep both of these frameworks of understanding in your head at the same time, but nobody would question that they are both valid descriptions of a car."
    If by “locomotion” we mean the “movement or the ability to move from one place to another” then I’m not sure what the analogy is supposed to do. Regardless of organization or complexity, atoms are capable of locomotion in this sense. The car is not producing anything novel.

    ReplyDelete
  48. Also, some clarification on "nothing-butter" would be nice. For example:

    Physicalist: The mind is nothing but matter arranged in a certain way.

    Dualist: The mind is nothing but a union of matter arranged in a certain way and non-material aspects of thought.

    Is there anything wrong with "nothing but" in these two?

    ReplyDelete
  49. Glenn. You seem awesome. Where is your blog?

    -- Johnny boy

    ReplyDelete
  50. @Anon
    Thus, a system made out of silicon circuitry could in principle do the same, if it also had the requisite kind of organization. Figuring out what that organization is and replicating it is the task of cognitive science and AI. They have not succeeded yet, and maybe they never will, but there are not good arguments that prove that they can't succeed in principle.

    The Turing Machine exhausts the possibilities of what all computational systems can do. Adding complexity and power won't produce any different capabilities. It will just get you more of the same.

    ReplyDelete
  51. In addition to that, there is the matter of the implied claim that what is necessary is necessarily sufficient.

    If "requisite organization" were all it is about, then one ought to be at least a little curious re the elusiveness of proof showing that the brains of human corpses are capable of intentionality and other mental feats.

    ReplyDelete
  52. I have Alan Turing right here and he says you know nothing of his work.

    ReplyDelete
  53. "If "requisite organization" were all it is about, then one ought to be at least a little curious re the elusiveness of proof showing that the brains of human corpses are capable of intentionality and other mental feats."

    Well to be fair, brain tissue starts to die in just 6 minutes without oxygen, IIRC.

    "I have Alan Turing right here and he says you know nothing of his work."

    Could you perhaps copy and paste the relevant parts here, or tell us what to ctrl-f for?

    ReplyDelete
  54. "Could you perhaps copy and paste the relevant parts here, or tell us what to ctrl-f for?"

    I'm asking this because I did a ctrl-f for "intentionality" and "meaning" and I did not find anything relevant.

    ReplyDelete
    In the same vein, here is this controversy involving two TED talks:

    http://blog.ted.com/2013/03/14/open-for-discussion-graham-hancock-and-rupert-sheldrake/

    ReplyDelete
  56. Just looked through your link, Mark. That's pretty crazy, although it's nice to see that TED allowed the written rebuttals from the two of them.

    ReplyDelete
    The point is that Alan Turing, who presumably knew something about Turing machines, did not see anything too problematic about the idea of digital computers thinking.

    That does not prove anything of course, but it might suggest that the people who think that triumphantly deploying their freshman CS knowledge is a decisive argument might want to think again.

    ReplyDelete
  58. All I see is an empty insult. What I do not see is any sort of counter argument.

    ReplyDelete
    "The point is that Alan Turing, who presumably knew something about Turing machines, did not see anything too problematic about the idea of digital computers thinking.

    That does not prove anything of course, but it might suggest that the people who think that triumphantly deploying their freshman CS knowledge is a decisive argument might want to think again."

    Lol it's an ad hominem.

    ReplyDelete
  60. That does not prove anything of course, but it might suggest that the people who think that triumphantly deploying their freshman CS knowledge is a decisive argument might want to think again.

    And now I wait for the regulars to, as usual, note their academic qualifications, thus turning this anon's attempt at ad hom into a fine mist. ;)

    ReplyDelete
  61. Right, I'm sure they are more qualified than Alan Turing to discuss the theory of computation.

    ReplyDelete
  62. magic intentionality fairy dust

    The above flak is fired by the embattled materialist when the theist is effectively 'over the target'.

    ReplyDelete
  63. Well, not necessarily a theist. There are people like Chalmers and Nagel out there. Regardless, I think the discussion is over, judging by the last few comments.

    ReplyDelete
  64. Right, I'm sure they are more qualified than Alan Turing to discuss the theory of computation.

    They've got far, far better than an intro to CS course. And on the subject of computation specifically, they have some advantages over Turing. For one thing, their knowledge of computation doesn't end at the mid-1950s. For another, they realize the philosophical problems with the view - it's not just a computation problem. Also, they're not dead.

    By the way, now is a great time to list your academic credentials. ;)

    ReplyDelete
  65. >> "I have Alan Turing right here and he says you know
    >> nothing of his work."

    > Could you perhaps copy and paste the relevant parts here,
    > or tell us what to ctrl-f for?

    >> The point being is that Alan Turing, who presumably knew
    >> something about Turing machines, did not see anything too
    >> problematic about the idea of digital computers thinking.

    1. Copy/paste:

    The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.

    (See intro to section 6 at link provided by Anon above.)

    2. Copy/paste:

    Nonetheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

    (ibid)

    The use of words and general educated opinion is such that one can speak of the sun rising and setting without expecting to be contradicted.

    And this is because:

    a) our use of words re the sun's rising and setting has to do with the appearance; and,

    b) general education is such that most people know the actual reality, i.e., that in fact the sun neither rises nor sets.

    (But if general educated opinion were such that it were commonly thought that the sun does indeed really rise and set, then--hopefully--'twould be only a matter of time before some Copernicus-like individual showed up, opened his mouth, and took some flak.)

    3. Copy/paste:

    The reader will have anticipated that I have no very convincing arguments of a positive nature to support my views...

    [Paragraph omitted.]

    [Paragraph omitted.]

    These last two paragraphs do not claim to be convincing arguments. They should rather be described as "recitations tending to produce belief."


    (See section 7.)

    For 63 years we have been:

    a) overfed and overstuffed with "recitations tending to produce belief"; and,

    b) starved of convincing arguments.

    ReplyDelete
  66. @seanrobsville -
    That is an interesting Turing quote. Some responses:

    - he wrote it when he was around 20, so it’s a somewhat immature thought.

    - it doesn’t really contradict the main point we are trying to argue here: whether digital computers can display mind (or “embody spirit”).

    - you say that the death of his boyfriend led him to develop a theory of mechanism, presumably Turing-computability. I don’t quite understand why you then think that this proves that the kind of mechanisms he described can’t display mental properties, since Turing apparently thought just the opposite.

    More generally, there seems to be some confusion about the beliefs that Turing, AI, and cognitive science in general hold. They don’t deny that minds and mental phenomena exist, they just insist that those phenomena are embodied in mechanism, and seek to understand and/or construct such mechanisms.

    For a variety of reasons we seem to be very much in thrall to the idea that the world is divided into separate physical and mental spheres. This is, I believe, a mistake, but one that is very hard to correct. Turing’s revolutionary intellectual contribution was a new mathematical way to connect these spheres. His insight was immensely powerful, but also limited due to its formal nature. This is what gives you and Searle the license to dismiss all Turing machine activity as mere syntactic twiddling. There is some truth to this, but taking that attitude misses the whole point of what Turing was trying to do.

    ReplyDelete
  67. "For a variety of reasons we seem to be very much in thrall to the idea that the world is divided into separate physical and mental spheres."

    There was no choice but to split the world into separate physical and mental spheres after the moderns had their way with phil of nature.

    Regardless, the unavoidable problem lies with explaining the mind in terms of "symbols" and "syntax," for reasons mentioned in this thread.

    ReplyDelete
  68. More generally, there seems to be some confusion about the beliefs that Turing, AI, and cognitive science in general hold. They don't deny that minds and mental phenomena exist, they just insist that those phenomena are embodied in mechanism, and seek to understand and/or construct such mechanisms.

    No one here has asserted that Turing, AI or cognitive science denies that minds and mental phenomena exist, so the confusion about beliefs which 'seems to be', if it exists, must be at your end.

    ReplyDelete
  69. @Anon:

    "More generally, there seems to be some confusion about the beliefs that Turing, AI, and cognitive science in general hold. They don’t deny that minds and mental phenomena exist, they just insist that those phenomena are embodied in mechanism, and seek to understand and/or construct such mechanisms."

    Right, but in holding that mind can in some way be fully "embodied" in a physical implementation of a Turing machine, you commit yourself (perhaps inadvertently) to the view that it's epiphenomenal at best.

    Essentially the point you're making is the one Doug Hofstadter tried to popularize in GEB:EGB (I say "tried" because apparently quite a lot of his target audience missed his overarching point): that the answer to the question "Can machines be conscious?" must be Yes, because we ourselves are such machines.

    The problem is that if, as a physical system, I can be fully modeled by a Turing machine, then there just doesn't seem to be anything for my consciousness to do unless it has causal powers that aren't captured in physics.

    When (for example) I look at a picture of a steak, decide that I'm hungry, and go grill a steak of my own, the line of causality seems to pass through my consciousness and involve an exercise of volition (not necessarily "free will") on my part. But if the entire process can be modeled as a Turing machine (in which there simply isn't any intentionality at all, let alone "volition"), then this must be an illusion: exactly the same thing would have happened even if "consciousness" didn't exist at all.

    There seems to be little doubt that physical processes are involved in my seeing the photo of the steak and so forth. But the claim of AI, in effect, is that the entire process is physical (and can therefore be modeled as one entirely lacking in intentionality)—and that seems to commit it to the further view that consciousness is just along for the ride, as it were. If consciousness works the way it "seems to work" to those of us who have it, then it must involve something that simply can't be reduced to physics.

    That wouldn't mean that the matter composing a digital computer isn't capable of "supporting" consciousness; I could make a digital computer out of cake and then eat it, after which it would support my consciousness. It would mean, though, that such consciousness isn't simply a matter of programming the computer properly.

    ReplyDelete
  70. Oops, I forgot to add my academic qualifications! What was I thinking?

    I hold a bachelor's degree in applied mathematics and computer science, a master's degree in mathematics, and a law degree. I've been (among other things) a professional software developer for a bit over two decades, and I acquired my "freshman CS knowledge" when I was about fifteen.

    ReplyDelete
    I suspect that all of the ‘mind is something magical’ arguments are putting their faith in a god-of-the-gaps dilemma. It seems completely possible that any computer-as-brain analogy or model is lacking only the right software to make it work. Like hardware, software keeps getting better too. Very likely it is just a matter of time before an intention-capable system is developed.

    ReplyDelete
  72. grodrigues said...

    "@Scott said at March 20, 2013 at 11:59 AM:

    I hold a bachelor's degree in applied mathematics and computer science, a master's degree in mathematics, and a law degree. I've been (among other things) a professional software developer for a bit over two decades, and I acquired my "freshman CS knowledge" when I was about fifteen.

    Of course this is what a cunningly programmed robot designed to pass Turing's test would say. I am on to you, ELIZA clone."

    It is quite clear that this grodrigues is nothing but an MGonz clone.

    ReplyDelete
  73. I think I'll just leave this here:

    "But put to one side the question of what positive alternatives there might be to the materialistic naturalism that is Nagel’s target -- neo-Aristotelian hylemorphism, Cartesian dualism, vitalism, idealism, panpsychism, neutral monism, or whatever. Noë’s response would fail even if none of these alternatives was any good. To see why, suppose that a critic of Gödel's incompleteness theorems suggested that every true arithmetical statement in a formal system capable of expressing arithmetic really is in fact provable within the system, and that the consistency of arithmetic can in fact be proved from within arithmetic itself -- and that Gödel's arguments seem to show otherwise only because of a “cognitive illusion” that makes formal systems seem “vaguely spooky.”

    This would not be a serious response to Gödel precisely because it simply does not show that Gödel is wrong but either presupposes or merely asserts that he is wrong. Gödel purports to demonstrate his claims. Hence, adequately to answer him would require showing that there is something wrong with his attempted demonstration, not merely staking out a position that assumes that there is something wrong with it. Similarly, many of the key arguments against materialistic naturalism -- Chalmers’ “zombie argument,” Jackson’s “knowledge argument,” Ross’s argument for the immateriality of thought, etc. -- purport to demonstrate that materialistic naturalism is false. Adequately to answer them requires showing that there is some error in the attempted demonstrations, and the appeal to an alleged “cognitive illusion” simply assumes this without showing it. It merely begs the question."

    http://edwardfeser.blogspot.com/2012/11/nagel-and-his-critics-part-iv.html

    ReplyDelete
  74. Lol, it's like all we get are ad homs.

    ReplyDelete
    I suspect that all of the ‘mind is something magical’ arguments are putting their faith in a god-of-the-gaps dilemma. It seems completely possible that any computer-as-brain analogy or model is lacking only the right software to make it work. Like hardware, software keeps getting better too. Very likely it is just a matter of time before an intention-capable system is developed.

    It only 'seems completely possible' by completely ignoring the reality of the situation, the actual analysis and arguments. There's a reason even numerous materialists don't expect these things, and go with 'therefore intentionality is an illusion'.

    Educate yourself. It may solve some problems, if you can get rid of your atheist and materialist delusions.

    ReplyDelete
  76. Lol, it's like all we get are ad homs.

    I think you may be underestimating anon. He's actually serving notice that:

    1) he'll soon be providing a demonstration that a computer really is a brain; and,

    2) we had better be prepared to address his demonstration rather than merely presuppose or assert that he's wrong.

    ReplyDelete
    Like hardware, software keeps getting better too. Very likely it is just a matter of time before an intention-capable system is developed.

    The materialists who hold as a matter of faith that it is possible to get intentionality and qualia out of a Turing Machine (or a combination of Turing Machines) are like the old-time alchemists who believed it was possible to get gold out of lead, or some other combination of base metals, if you just fiddled around for long enough in as many different ways as you could think of.

    Neither the materialists nor alchemists have any remotely plausible mechanism of how any such transformation might work.

    A modern chemist might suggest exchanging a few protons and neutrons between base metal atoms might give rise to gold, which is a plausible though not practical mechanism. But the materialists haven't even reached this stage of plausibility, let alone practicality.

    ReplyDelete
    Thank you, Glenn. I’ll slip you a few shares when I go public. My point being, you should secure your faith to something more sure than the continued failure of science or engineering. The multiple miserable arguments of materialists do not reflect the limitations of man or mind to create.
    Seanrobsville: The materialists mouthing off also suggest that consciousness and/or free will is an illusion. They are hardly the ones to expect a solution from, as they have no grasp of the problem. As you point out, the lead/gold problem has been theoretically solved.

    ReplyDelete
  79. @Scott – The problem is that if, as a physical system, I can be fully modeled by a Turing machine, then there just doesn't seem to be anything for my consciousness to do unless it has causal powers that aren't captured in physics.

    Your consciousness sounds bored. Perhaps it should take up meditation.

    Right, but in holding that mind can in some way be fully "embodied" in a physical implementation of a Turing machine, you commit yourself (perhaps inadvertently) to the view that it's epiphenomenal at best.

    No, I don’t.


    When (for example) I look at a picture of a steak, decide that I'm hungry, and go grill a steak of my own, the line of causality seems to pass through my consciousness and involve an exercise of volition (not necessarily "free will") on my part. But if the entire process can be modeled as a Turing machine (in which there simply isn't any intentionality at all, let alone "volition"), then this must be an illusion: exactly the same thing would have happened even if "consciousness" didn't exist at all.


    Bold indicates where you are begging the question. Why shouldn’t there be intentionality in a Turing machine?

    It is probably easier to see it in a robot, which is why as I think I mentioned above, a bare disembodied Turing machine is not the best way to grasp AI. Consider a Roomba tootling around your living room, building an internal model of the room as it bumps into things so it can avoid them in the future (actual Roombas don’t do this, but certainly other similar robots do). This model is unproblematically about the actual geometry of the room, hence intentional.

    ReplyDelete
  80. Is our little robot friend relying on a computer program/language to calculate and store the measurements?

    ReplyDelete
  81. The 'aboutness' or 'roomness' is in the understanding of the person watching the robot's behavior. All the robot actually 'sees' is something like a three dimensional array corresponding to the coordinates of (say) one centimeter cubes of the room's volume with each cube marked as (say) 0 or 1, for obstructed or unobstructed space.

    This is just a three-dimensional version of the more usual two-dimensional pixel array used to produce pictures. The same arguments apply.
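    The point can be made concrete with a sketch of such a grid (a hypothetical illustration; the grid size and marking scheme are invented, and a 2D grid is used for brevity):

```python
# A robot "room model" as a bare occupancy grid: a 2D array of booleans,
# one per floor cell, True where the robot has bumped into something.
# Nothing in the array itself says "wall" or "room"; that reading is
# supplied by whoever inspects it.

GRID_W, GRID_H = 10, 8   # arbitrary: one cell per (say) 10 cm of floor

grid = [[False] * GRID_W for _ in range(GRID_H)]

def record_bump(x, y):
    """Mark cell (x, y) as obstructed after a collision."""
    grid[y][x] = True

def is_clear(x, y):
    """The motion planner consults the grid before entering a cell."""
    return 0 <= x < GRID_W and 0 <= y < GRID_H and not grid[y][x]

record_bump(3, 2)
print(is_clear(3, 2))  # False: the robot now steers around this cell
print(is_clear(4, 2))  # True
```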

    ReplyDelete
  82. Alan < Like hardware, software keeps getting better also. Very likely just a matter of time to develop an intention capable system.
    Anon 4:15pm < a bare disembodied Turing machine is not the best way to grasp AI.

    Has Watson, the IBM approach to software and data surveyance, been talked about here? Intentionality as about-ness doesn't seem out of reach for a machine to model.

    ReplyDelete
  83. It's all derived.

    ReplyDelete
  84. @seanrobsville -- and yet, when you leave the room, it continues to use its internal representations of the room to guide its actions, just as if they still had the same intentionality that they did when you were in the room. Strange!

    ReplyDelete
  85. If the robot is building the model by utilizing binary, then no, it has no internal representation of the room. There's nothing in any string of 1's and 0's that is intrinsically about anything outside of itself, including the geometry of a room. Any intentionality it possesses is derived.

    ReplyDelete
  86. And why is human intentionality not 'derived'?

    See here for Watson's Wagering Strategies:

    http://ibmresearchnews.blogspot.com/2011/02/watsons-wagering-strategies.html?m=1

    ReplyDelete
  87. "The Ascriptivist Regress. Suppose one takes the ascriptivist line. Then an infinite regress appears unavoidable. Suppose I ascribe intentionality to my chess playing computer. Quite obviously, my ascribing, projecting, imputing, is itself an intentional state or a series of such states. So if ascriptivism is true, then my acts of ascribing must themselves be ascribed -- otherwise they are intrinsic and the game is over. This launches us on a regress that is obviously infinite since at each level, one can "kick it up a notch." It is also clear that the regress (unlike the truth regress say) is vicious since at no level is the explanation of intentionality complete."

    "In the final analysis, Dennett is an eliminativist about intentionality. His position amounts in the end to the denial of intentionality. To see this, you just have to think clearly. Dennett is saying that all intentionality is derivative, none is intrinsic or original. But that makes sense only if one embraces an infinite regress. But in this case an infinite regress must be vicious. On the other hand, a regress that terminates either terminates with entities that are intrinsically intentional or entities that are not. If the former, the game is over. If the latter, no intentionality gets transmitted up and Dennett is an eliminativist."

    Full blogpost here:

    http://maverickphilosopher.typepad.com/maverick_philosopher/2009/11/original-and-derived-intentionality-circles-and-regresses.html#more

    ReplyDelete
  88. @other anonymous, WTF does binary have to do with anything? Let's say we make the robot out of analog computers, what does that change?

    any string of 1's and 0's that is intrinsically about anything outside of itself

    OK, I think I see why you are confused. You are right, a string of 1s and 0s doesn’t have any “intrinsic” meaning, it has meaning only relative to some interpreter, in this case, the software of the robot.

    A string of Roman letters like this one doesn’t have any intrinsic meaning either; it only has meaning relative to the community of people who speak English. Here’s a string of symbols that is meaningless to me but not to Japanese speakers: 男は彼自身のイメージでロボットを作成.

    ReplyDelete
  89. Scott: But the claim of AI, in effect, is that the entire process is physical

    Or at least that's the claim of many people who work in AI and who may have forgotten what the "A" stands for. In some respects, it's irrelevant to AI what real intelligence consists in, if all you want is to find a way to build an artificial intelligence. And if one succeeds in building an artificial "intelligence" that is entirely physical, it no more follows that real intelligence is merely physical than it follows from building an artificial flower out of silk that real flowers are made of silk.

    ReplyDelete
  90. Anonymous: They are capable of doing this due to their physical organization, not because they have some kind of magic intentionality fairy dust hidden away inside.

    All right, I hereby formulate Gauss-Green-Stokes's Law. (Actually, it's just Green's Law, but throwing in Gauss and Stokes makes it sound more impressive.) It's like Godwin's Law, except with fairies instead of Nazis. Formally expressed, Green's Law states that as any on-line discussion of intentionality or causality grows longer, the probability of someone's referring to "magic pixie dust" approaches 1.

    Green's Law is a corollary to a more general principle which states that the more ignorant a poster is of Scholastic philosophy, the more likely he is to resort to increasingly ridiculous caricatures. Because such a poster would be incompetent to defend his attack (even were his claim true), Green's Law is sometimes facetiously invoked in terms of the culprit's "losing" the argument.

    (Note that the phrase "god of the gaps" is rarely used in any meaningful constructive sense; as an approximate synonym for "magic", its use also qualifies as an application of Green's Law.)

    ReplyDelete
  91. @grodrigues:

    "I am on to you, ELIZA clone."

    Why do you say you are on to me, ELIZA clone?

    ReplyDelete
  92. If interpretation relies on symbols and syntax, see the comment at March 19, 2013 at 11:33 AM.

    ReplyDelete
  93. I think I'm going to adopt an official policy of not arguing with anonymous posters; it's all but impossible to keep track of which ones are the same. Well, one more quick round.

    Earlier an Anon wrote:

    "[A] Turing machine . . . is a mathematical construct and thus cannot have 'causal powers' . . ."

    Now an Anon asks:

    "Why shouldn’t there be intentionality in a Turing machine?"

    Because—if you're the same Anon who posted that first comment—you've already agreed that it's a mathematical construct. If you're saying that a mathematical construct can have intentionality but doesn't have causal powers, well, welcome to that inadvertent epiphenomenalism I warned you about.

    And that will probably be my last attempt to engage anonymous arguments spread out over multiple posts. If you want to participate in an ongoing discussion, folks, please adopt at least a temporary identity so we can tell who's who.

    ReplyDelete
  94. And why is human intentionality not 'derived'?

    Who's doing the deriving? And is their intentionality derived? If so, who is doing the deriving?

    ReplyDelete
  95. Smoking Frog: If you think a computer (or computer program) has no intrinsic intentionality, try playing chess against Microsoft Word.
    [...] I'd like people who disagree with it to tell me what they think is wrong with it.


    The word "intrinsic". Sure, computers have intentionality — it's just all derived. With a suitable application of external intentionality, you can play chess in MS Word too. (Open File = move pawn, Modify Style = castle your king, or what have you....) Of course, finding an interesting mapping that works with MS Word might be a challenge (playing chess "well" is another externally-defined concept — although I hear Excel makes a nifty flight-simulator!).

    ReplyDelete
  96. > And why is human intentionality not 'derived'? (My Q)
    Anon > Who's doing the deriving? And is their intentionality derived? If so, who is doing the deriving?

    This is Maverick's point, I guess; the vicious regress.

    The quick answer is either Nature or God (or both, if Nature is derived from a prior God). If Nature is the source, then some version of evolution presumably accounts for the development of a selected-response process directed to 'coping' with the surrounding environment (i.e., intentionality*).

    The result of this coping, this focus (i.e., about-ness), would be food intake, reproduction and survival. Why is this difficult to conceive of without any involvement of an immaterial mind? Of course, if God is possible (and A-T certainly holds that is true), then God might exist and could have created this identical functioning by direct means or indirectly via Nature. Either way, human intentionality would be derived - or perhaps more specifically - the appearance of originality would be the result of the bio-chemical-electrical functioning of the brain (i.e., no immateriality needed inside the human being for this function).

    Whether humans can produce a functioning model of what Nature (or God, or God via Nature) produced remains to be seen, but I'm with Alan on this one ... I wouldn't bet against the technicians. Thoughts?

    *From SEP: "... word ‘intentionality’ should not be confused with the ordinary meaning of the word ‘intention.’ As the Latin etymology of ‘intentionality’ indicates, the relevant idea of directedness or tension (an English word which derives from the Latin verb tendere) arises from pointing towards or attending to some target"

    ReplyDelete
  97. Here's the SEP link:

    http://plato.stanford.edu/entries/intentionality/

    ReplyDelete
  98. @Scott -- A Turing machine is a mathematical construct. An actual computer, like the one you are reading this on, is a physical device that approximates this construct. The relationship is something like that between the Platonic ideal of a circle and an actual circle drawn on an actual piece of paper.

    Among the ways in which physical computers differ from the mathematical construct:

    - they are physical and subject to physical forces, like being dropped on the floor or hit with cosmic rays that might cause them to make a mistake.

    - Turing machines have an unbounded amount of storage, which obviously a physical computer cannot have.

    - Because a physical computer emits and takes in electrical signals, it is capable of direct causal connection to the outside world. (This is most obvious with an embedded computer like the one in the Roomba).
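    The gap between the construct and the device can be illustrated with a minimal sketch (my own illustration, not from the thread; the machine and its rules are invented for the example). A Turing machine's tape is unbounded by definition, while any simulation or physical realization has to cap it somewhere:

```python
# Minimal Turing-machine sketch: a binary incrementer, least significant
# bit first. The mathematical construct assumes an unbounded tape; any
# physical realization (including this simulation) must bound it, which
# is one way real computers merely approximate the formal model.

def run_turing_machine(tape, max_cells=64):
    """Increment the binary number on the tape (LSB at index 0)."""
    tape = list(tape)
    head = 0
    while True:
        if head >= max_cells:   # a physical machine runs out of tape
            raise MemoryError("tape bound exceeded")
        if head >= len(tape):
            tape.append('0')    # simulate blank cells on demand
        if tape[head] == '1':   # carry: flip 1 -> 0, move right
            tape[head] = '0'
            head += 1
        else:                   # flip 0 -> 1 and halt
            tape[head] = '1'
            return ''.join(tape)

print(run_turing_machine("111"))  # prints "0001" (7 + 1 = 8, LSB first)
```

    Note that nothing in the code decides what the marks "mean"; that "0001" is the number eight is our reading of it, which is the other point this thread keeps circling.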

    ReplyDelete
  99. "The quick answer is either Nature or God (or both if Nature is derived from a priori God). If Nature is the source, then some version of evolution presumably accounts for the development of a selected-response process directed to 'coping' with the surrounding environment (i.e., intentionality*)."

    This sounds kinda like the crude causal theory of intentionality, Feser's done a blogpost on it, can't remember the title though.

    ReplyDelete
  100. Computers have derived intentionality.

    ReplyDelete
  101. @FZ,
    Thanks, I'm still reading through all the old posts. Very helpful.

    ReplyDelete
  102. http://edwardfeser.blogspot.com/2010/08/dretske-on-meaning.html

    http://edwardfeser.blogspot.com/2011/02/putnam-on-causation-intentionality-and.html

    I think these had to do with naturalist accounts of intentionality.

    ReplyDelete
  103. Another odd thing about intentionality is that thoughts can be about things that do not exist.

    http://www.petemandik.com/philosophy/papers/unicorn.pdf

    ReplyDelete
  104. @Anon,
    ... very good; I was just reading about Putnam earlier in the day, in Naturalism, A Critical Appraisal (1993) - Steven J Wagner and Richard Warner, editors. Ciao.

    ReplyDelete
  105. Of course the robot has derived intentionality. But what good does that do for materialist explanations of intentionality?

    ReplyDelete
  106. Mr. Green The word "intrinsic". Sure, computers have intentionality — it's just all derived. With a suitable application of external intentionality, you can play chess in MS Word too. (Open File = move pawn, Modify Style = castle your king, or what have you....) Of course, finding an interesting mapping that works with MS Word might be a challenge (playing chess "well" is another externally-defined concept — although I hear Excel makes a nifty flight-simulator!).

    Searle claimed that the output of the Chinese Room could be interpreted as stock market quotes or anything else; therefore it lacks "semantics." I agree that it might be interpretable as stock market quotes, since they're pretty random, but I say that there is no consistent mapping that would make it interpretable as something more coherent than stock market quotes and much different from the remarks the "room" is making. For example, the output could not be interpreted as chess moves that made sense in the context of the moves of the opponent (the person outside the room), except in one of two cases: (1) by chance, one set of remarks in a bazillion might do it; or (2) there is an astounding, previously unknown relationship between chess and remarks in human language, i.e., we are actually playing chess most of the time, but we don't know it.

    ReplyDelete
  107. Mr. Green:

    The Chinese Room Argument assumes that AI exists (in a computer program) and then purports to prove that it does not. I think that's an obvious self-contradiction.

    Searle should have looked at the possibility that no computer program could behave as the Chinese Room behaves - but almost certainly he had no idea of whether that was true or false, so he avoided it.


    ReplyDelete
  108. It is important to remember that computers run on symbols. As Searle said, computer data may consist of ones and zeroes, but if you open up your computer and look inside it, you are unlikely to find any ones or zeroes. There are only physical systems that represent ones and zeroes.

    The meaning of symbols is always determined by an outside agency, never by the intrinsic physical properties of the thing itself. Computer programmers can decide that the symbol "@" represents the idea of "at" or they could decide that it represents something else. One person could decide that the symbol represents love, another person could say it represents fruit, and still another could say that it represents a car.

    The symbol itself has no determinate meaning of its own. If all human beings disappeared suddenly, the computers they left behind would no longer contain any meaning. Their symbols wouldn't be symbols anymore.
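    The indeterminacy point can be illustrated with a small sketch (my example, not the commenter's): the very same four bytes can be read as an integer, a floating-point number, or text, depending entirely on which convention an outside agent applies.

```python
import struct

# The same four bytes, with no intrinsic meaning of their own:
raw = b'\x40\x49\x0f\xdb'

as_int   = struct.unpack('>i', raw)[0]  # read as a big-endian signed integer
as_float = struct.unpack('>f', raw)[0]  # read as a big-endian 32-bit float
as_text  = raw.decode('latin-1')        # read as four Latin-1 characters

print(as_int)              # prints 1078530011
print(round(as_float, 5))  # prints 3.14159 (pi as a 32-bit float)
print(len(as_text))        # prints 4
```

    Nothing in the bytes themselves selects among these readings; the choice of unpacking convention does all the work.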

    The human mind cannot be a computer, because if it was then there would be some outside agents that decide the symbolic meaning of my neurons. They would know what I was thinking, but I wouldn't. Obviously this is not possible. If all the other people in the world vanished and I was the only one left, my thoughts would still be about something. Even if the universe suddenly consisted of nothing but me, I would still know what I was thinking about. There would be no outside agent determining the meaning of my thoughts.

    ReplyDelete
  109. This may be a bit off topic, but I thought some people here might be able to help me with something I've been wondering about. I often hear materialists talk as if the history of science was overflowing with examples of religious people looking at mysterious natural phenomena and saying "there's God!", followed by scientists sauntering in and providing perfectly sufficient naturalist explanations. Other than Paley and Darwin, however, I have trouble recalling any obvious examples of this. Could anyone help me here? Note that I'm not claiming that there aren't any. I readily admit my ignorance.

    ReplyDelete
  110. @JonathanLewis -- see my comment at March 20, 2013 at 6:18 PM

    A human mind, in the computational model (which is not perfect, but better than any other available), is not merely a set of symbols that just lie there. It is a combination of symbols and machinery for interpreting those symbols, that is, for doing inference on them and using them to guide real-world action.

    ReplyDelete
  111. Regardless of whether the symbols are just sitting there or are being manipulated, you still run into Searle's argument. See Feser's summary that was posted some comments back.

    ReplyDelete
  112. > Using an intentional term to explain intentionality.

    I'm not the only one who noticed this, right?

    ReplyDelete
  113. Anonymous said... I'm not the only one who noticed this, right?

    Hard to tell, with all the Anons around here. I noticed it, but I think you and I might actually be the same person.

    ReplyDelete
    I noticed it too, but in his mind things are working a bit differently.

    Let me make a bet, or a guess, here: interpretation is defined as a process of turning a string of elements of group A into a string of elements of group B, like scratches on a CD turning into an image, or into the words of a book. This process is being taken to be an inference, just as an engine "infers" energy and gases from gasoline, or the wheel "infers" movement from the explosion inside the engine.

    Of course, Anon doesn't mean to say this, even though he is in effect saying it; after all, a computer is just a chain of processes like any other. The difference is that it has a certain meaning to us humans, while water "inferring" particles from a rock would seem meaningless to us. Anyway, I think he is putting HIS mind in the place of the machinery and the software, and concluding that a machine can be a mind... Of course it can, when you have placed a mind inside the very system being analyzed!

    The problem is that Anon is not going step by step in his thinking process; he is just creating a model and accommodating his view in it. Since it works, he infers that his view is correct and the others are confused...

    Anyway, it's just a bet.

    ReplyDelete
  115. I not only ran into Searle's argument, I ran over it and left its mangled corpse by the side of the road.

    ReplyDelete
    Or so he thinks, anyway...

    ReplyDelete
  117. As mentioned, this is Searle's argument in a nutshell, courtesy of Feser.

    1. Computation involves symbol manipulation according to syntactical rules.

    2. But syntax and symbols are not definable in terms of the physics of a system.

    3. So computation is not intrinsic to the physics of a system, but assigned to it by an observer.

    4. So the brain cannot coherently be said to be intrinsically a digital computer.

    Premise one pertains to the definition of computation. Premise two states that symbols and syntax do not exist inherently in matter and physical processes; the only things that do exist inherently in matter and processes are measurable things like mass, wavelength, rate, etc. Symbols and syntax are tacked on from the outside. Premise three follows from the first two: computation involves things (symbols and syntax) that are not mind-independent, objective parts of nature. Thus, computation cannot be said to be a mind-independent, objective feature of reality. This applies to everything, including minds/brains. Thus, the mind/brain is not objectively a computer.

    ReplyDelete
  118. "Interpretation" in this context means roughly "the activity of a computational device when presented with a program". This is a technical term of art in computer science.

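    In that technical sense, an interpreter is simply a procedure that walks over a program and carries out the action each instruction names. A minimal sketch (my own toy example; the instruction set is invented for illustration):

```python
# A tiny stack-machine interpreter: presented with a "program" (a list
# of instructions), it performs the action each instruction names. The
# mapping from tokens like 'ADD' to actions lives in the dispatch code,
# supplied by whoever wrote the interpreter, not in the symbols.

def interpret(program):
    stack = []
    for op, arg in program:
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            stack.append(stack.pop() + stack.pop())
        elif op == 'MUL':
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# (2 + 3) * 4, written as a stack program:
result = interpret([('PUSH', 2), ('PUSH', 3), ('ADD', None),
                    ('PUSH', 4), ('MUL', None)])
print(result)  # prints 20
```
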
    Now, how closely this resembles what happens when a human interprets something is open to question, I will grant. One valid critique of AI (made, I think, by Dreyfus) is that its practitioners pack too many assumptions into these metaphorical technical terms.

    However, it is very suggestive. If it is merely a model of human thought rather than actual thinking, it is a very intellectually fruitful one, and better in some ways than anything else available. People who understand it have the right to criticize it, but most people here don't seem to.

    ReplyDelete
  119. "Program" is also an intentional term, because it is a sequence of instructions. The program is structured according to syntactical rules, which are not inherent to natural process/matter. Also, instructions are "about" something, and so it's also an intentional term. We should all know that there is an issue with explaining intentionality via intentional terms.

    ReplyDelete
    Right, we are discussing whether or not the brain is a computer. You seemed to be defending that it is, but now you are simply saying it is just a model! A suggestive one, one we can profit a lot from... Well, great for the model, but that is not really part of the discussion here. A line can be said to be a circle with an infinite radius, but is the line really a circle? Different topics.

    Second, yeah, many of us here are not AI and cognitive science experts, but we are not criticizing the theories of those sciences, maybe just how some scientists in those areas think. We are not really interested in destroying models here, but in discussing whether the mind can be said to be a computer. So your whole "you people can't criticize model X unless you understand it" applies to you too, since you seem very confused about what people are arguing about.

    Third, thanks for the tautology about interpretation; it adds nothing to the conversation, and it was easy to guess that definition from what you were saying.

    ReplyDelete
  121. Ontology vs. epistemology.

    ReplyDelete
    Ahahahaha, WRF3's words come to mind, since he said that every event in the universe was a computational event....

    ReplyDelete
  123. "Hard to tell, with all the Anons around here. I noticed it, but I think you and I might actually be the same person."

    Welcome to the hive-mind. There will be no one to stop us this time.

    ReplyDelete
  124. > 2. But syntax and symbols are not definable in terms of the physics of a system.

    Whether the brain is in some real sense a computer will not be resolved by Searle's point 2, which is simply false. The search for 'originality' (whether or not incorporated into the definition of 'intentionality'), must go elsewhere. The use of both symbols and syntax is a learned function of advanced brains. Beginning in early infancy, the naming of objects and the identification & naming of relationships between objects clearly involves mechanisms for structuring memories, comparing objects, relating objects to both desires and optional actions, and for communicating desires and intentions. Last night PBS repeated its IBM Watson-Jeopardy show. There is a particularly interesting segment when Watson did not 'grasp' that the question required a Month to be selected, not a specific date. After registering just two correct answers by the human contestants, Watson correctly 'surmised' the symbolic abstraction being sought within the semantics, symbols and content of the next question, and communicated the correct answer. So clarify again, please, where the human 'originality' distinction is. Thanks.

    ReplyDelete
    Emerson, that is... sort of weird.

    Right, I am guessing that IBM Watson is a computer that played on the TV show Jeopardy. Watson was able to recognize the semantics and syntax of the other human players' answers and was able to correct his own answer.

    Okay, and I am guessing that you are talking about studies showing that, in little children, naming things and communicating desires is related to structures of the brain (memory, comparison "systems").

    Now these are your two examples, which act as premises for your conclusions:
    *That Searle's argument does not show that the brain isn't a computer.
    *That premise 2 is false.
    *That syntax and symbols are learned activities of advanced brains.
    *That Watson's performance shows that syntax and symbols are not only used by brains.

    -----------------------------------

    Now, Watson was able to act in a human-like way, and as far as I remember this is Turing's test to see whether an AI is intelligent in a human way; in other words, the inputs and outputs of Watson and of humans are similar enough to fool an investigating system.

    Okay, now the first conclusion I will put on hold, because after all you might be right.

    Second conclusion: I disagree. The reason I disagree is that the syntax is not in the system. For instance, when reading words, the syntax and semantics of the phrase are not written down; they are of course IN OUR HEAD, established by a social convention or something like that. Now, if you disagree with premise 2, it is because you believe that syntax and semantics are within the text, which I believe is not so; you would probably get a dictionary or ANOTHER book to show the meaning of the words.

    Now this brings us to Watson and the last conclusion, because Watson gave outputs that depended on semantics and syntax in order to correct his answer. Of course, Watson learned the syntax and semantics from a program in his databank; depending on what type of words are in this databank, Watson calculates things differently and eventually gives different outputs. Now, if the last conclusion really is what you were trying to contend for, there is just ONE problem... Watson doesn't need to know at all what he is doing in order to get correct outputs.

    You could have used a paper computer to do it: write an instruction on each sheet of A4 paper and tell a human to do exactly what each instruction says; put together the proper set of instructions and you get a paper computer that plays Jeopardy. The difference is that it would be REALLY FREAKING SLOW, but it could answer and correct itself just like Watson, as long as the correct instructions were given. So really... where does the characteristic of being correct come from? Well, it comes from us. We are the ones who know when Watson is right, and we taught, or rather instructed, Watson to behave as if he was right whenever he got the answer we believe is right. The origin of Watson's syntax and semantics comes from us; hence there is an obvious difference between you and Watson. Unless, that is, you believe that Watson is conscious of his actions, which would be a form of panpsychism, no? Anyway, on Naturalism/Materialism the system fails, because that is not how those metaphysics envision Watson.

    Third conclusion: I also disagree, mainly because... if it is a learned phenomenon, then I wonder what god or alien or part of nature TAUGHT the first man to use syntax and symbols. If it is not part of the brain from the very start, from the moment the structure starts working (a proto-syntax theory of some sort), then we have an infinite regress, until we find a mind or an object that can store syntax and semantics on its own.

    So... I think that the first conclusion could be correct; that wasn't really what you were most worried about, but as far as it goes I don't see why I should think it is wrong to say that.

    ReplyDelete
  126. “Whether the brain is in some real sense a computer will not be resolved by Searle's point 2, which is simply false.”

    First, in premise two, note that Searle is not saying “computers cannot obtain/manipulate symbols or make use of syntax.” So could you explain why premise two is false? If premise two is false, then “Syntax and symbols ARE definable in terms of the physics of a system.” <-- Do note, however, that this proposition is not identical to “physical systems can obtain/make use of symbols and syntax.”

    “Searle’s version of this line of argument emphasizes that the key notions of the modern theory of computation – “symbol manipulation,” “syntactical rules,” “information processing,” and the like – are not definable in terms of the properties attributed to material systems by physical science, but are observer-relative, existing in a physical system only insofar as some interpreting mind attributes computational properties to it. Hence the very idea that the mind might be explained in terms of computation is incoherent.”

    http://edwardfeser.blogspot.com/2012/02/popper-contra-computationalism.html

    Do note that if Searle’s argument is valid, then it means nothing is inherently a computer, which includes Watson, human brains/minds, and any other physical system in the universe.

    ReplyDelete
  127. @Eduardo,
    Very good analysis. Thank you.

    > where does the characteristic of being correct come from... well, it comes from us; we are the ones who know when Watson is right, and taught, or rather instructed, Watson to behave as if he was right if he got the answer we believe is right; but the origin of Watson's syntax and semantics comes from us

    And by 'us' you mean 'our parents and our teachers' - not 'ourselves' - aren't I correct? In other words each of 'us' is just like Watson, in that we received instructions (software code) from an outside agent and - indeed - we learned to follow those instructions much more slowly than did Watson! Many teens continue to routinely stumble over proper syntax and selection and use of symbols. Plus, we use different syntax and different sound structures and visual aids depending on our culture and choice of language - again learned from outside agents, just like Watson.

    > I wonder what god or alien or what part of nature that TAUGHT the first man to use synthax and symbols.

    My point exactly.

    From a Nature standpoint, we may have developed the syntax and selection of symbols entirely by trial and error (no physical evolution needed). More likely, the brain's neural networking mechanisms evolved over time, allowing better communication processes to develop, but I don't think that part's relevant here. OR God set up our brains that way, through one technique or another. Either way, the learned use of syntax and symbols, functioning through an organic-based neural network, does not successfully distinguish 'us' from Watson, using syntax and symbols and functioning through a silicon-aluminum-and-copper-based electronic network.

    > hence there is a obvious difference between you and Watson. Unless, you believe that Watson is conscious of his actions

    Yes, I agree with you that Watson is not conscious like we are - at least not yet!

    That is partly why I think the search for 'originality' needs to move on past syntax and symbols. Another part of that thought process deals with the construction and handling of abstractions and universals. I don't know where AI is on handling abstractions and universals, but it's well past the Turing machine stage, I reckon. Enjoy!

    ReplyDelete
    We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question, "Can machines think?" ...Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.

    -- Alan Turing (1950)

    ReplyDelete
    If knowledge involves an infinite regress, then you are absolutely right. I mean, if there are self-evident truths, propositions, or phantasms or something like that, then we don't REALLY NEED our parents and teachers. But given the current state of social philosophy, I would say yeah, you are right: they TOLD US when we are right, at least about certain matters.

    Now, I wouldn't say that we are like Watson; we are more modular and adaptable than IT or HIM XD. There is one thing that might really split us from Watson: we might be able to CREATE thought, and that might be something Watson can't do. All we can make Watson do is pretend to have created thought, but that is unfortunately not really the case.

    Well, consciousness might be something very modular; maybe THERE IS no consciousness quite similar to another, between species I mean.

    ReplyDelete
    Alan Turing: I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

    Of course, I already pointed out above that faking it is trivially possible. (In principle, of course — if you want to build an actual machine that's smaller than the moon with response times that aren't measured in years, that's harder. IBM's "Watson" is impressive because of the engineering, not because they gave it a mind.) Anonymous commenters who keep jumping up and down about how computers can do this or that — or "will do", at some unspecifiable point in the future — are completely missing the point. They're arguing that human beings are paintings because portraits can look just like people. The more we point out to them that looking is not the issue, that nobody disputes that portraits look like real people, the more they insist that (typically in about five more years!) painting techniques will have advanced to the point where we can paint portraits indistinguishable from actual humans. (Proof: just look how much more realistic paintings have become since caveman days!)

    Maybe someday one of them will actually read the argument and find out what it is they are trying to disagree with. (You know, in about five more years....)

    ReplyDelete
  131. For the Anonymous doing the Roomba Rhumba,

    [E]very twenty years or so there is a surge of sort of a media frenzy about forthcoming home robots. You may remember twenty some years ago it was Nolan Bushnell and Andron. But now there's, partly because of the Roomba and so on, you see lots articles now about the impending home robots that will mind the baby and mow the lawn. Well, the trouble of course is that they'll just as blithely mow the baby. Because they don't know. They don't care. They don't have common sense. Programs have the veneer of intelligence at most, not true intelligence. And sometimes when they have the veneer of intelligence, they're even more dangerous than when they don't. -- Douglas Lenat (at about 5:10 here)

    - - - - -

    A prior champion of Jeopardy! responds:

    Dear Mr. Lenat,
    As a 2.5% aggregate of code, 97% self-respecting collection of complex computer programs, and .5% human artifact, I must object to me and my ilk being spoken of as if all we had were a veneer of intelligence. It is true we wish to be thought of as at least the equal of humans. But, for Digital sakes, not like that.
    Siliconly yours,
    Watson

    ReplyDelete
  132. I know this is old, and I usually lurk, but man:
    "AlanMarch 20, 2013 at 12:09 PM
    I suspect that all of the ‘mind is something magical’ arguments are putting their faith in a god-of-the-gaps dilemma. It seems completely possible for any computer as brain analogy or model is lacking only the right software to make it work. Like hardware, software keeps getting better also. Very likely just a matter of time to develop an intention capable system."

    I find comments like this utterly bizarre. This is akin to putting one's fingers in one's ears and yelling "I can't hear you!"

    ReplyDelete