Thursday, August 15, 2013
Eliminativism without truth, Part III
Now comes the main event. Having first set out some background ideas, and then looked at his positive arguments for eliminativism about intentionality, we turn at last to Alex Rosenberg’s attempt to defend his position from the charge of incoherence in his paper “Eliminativism without Tears.” He offers three general lines of argument. The first purports to show that a key version of the objection from incoherence begs the question. The second purports to give an explanation of how what he characterizes as the “illusion” of intentionality arises. The third purports to offer an intentionality-free characterization of information processing in the brain, in terms of which the eliminativist can state his position without implicitly appealing to the very intentionality-laden notions he rejects. Let’s look at each argument in turn.
Introspection and intentionality
A mental state is intentional, in the technical philosophical sense, if it is about, is directed at, or points to something (as your thought that the cat is on the mat is about or points to the state of affairs of the cat’s being on the mat). The phenomenal properties of consciousness (also known as “qualia”) are those accessible from the first-person, introspective point of view (think of the way red looks, the way coffee tastes, or the way pains, itches, and other sensations feel). Rosenberg cites an argument developed by Terence Horgan and John Tienson to the effect that, contrary to a widespread view in philosophy of mind, all conscious intentional mental states have phenomenology and all phenomenal states have intentionality.
This sort of view provides one way of interpreting the claim that eliminativism about intentionality is incoherent. Suppose every introspected conscious thought, just by virtue of being conscious, also has intentionality. Then the introspected conscious thought that there is no such thing as intentionality has intentionality. When the eliminativist finds on introspecting the contents of his mind that he has this thought, then, he is aware of something that exhibits precisely the sort of thing he denies. Thus his position is self-defeating. But the trouble with this sort of argument, Rosenberg says, is that it begs the question, because the reliability of introspection is something else the eliminativist denies. Yes, introspection seems to reveal to us that our thoughts have intentionality, but that, Rosenberg maintains, is an illusion.
There are two main problems with this, a big one and a much bigger one. The first, big problem is that Rosenberg’s wholesale doubt about the reliability of introspection is itself incoherent and otherwise seriously problematic, for reasons I set out in my series of posts on his book The Atheist’s Guide to Reality. (See Part VIII and Part X in particular.)
The second, much bigger problem is that Rosenberg’s objection is in any case simply directed at a straw man, at least insofar as it supposes that the incoherence objection against eliminativism has anything essentially to do with introspection, phenomenology, the first-person point of view, and the like. It is worth noting that Horgan and Tienson themselves are not primarily concerned in the article Rosenberg cites with trying to refute eliminativism (though they do in passing refer to Quine’s indeterminacy thesis), and that the philosopher who is perhaps the best-known proponent of the incoherence charge -- Lynne Rudder Baker, whose work Rosenberg himself cites -- does not rest her case on an appeal to introspection, etc. So who, exactly, are these critics of eliminativism who make such an appeal? Rosenberg does not tell us.
Indeed, Rosenberg acknowledges that there are different versions of the incoherence objection, at least implicitly allowing that not all versions make an explicit appeal to introspection or the first-person point of view (and citing Baker precisely as a representative of the more “serious” version of the incoherence objection). The last section of his paper (to which we’ll turn below) is devoted to trying to answer this more serious version. So why waste a section on introspection, phenomenology, etc.?
The answer seems to be this. Just as certain critics of the cosmological argument compulsively attack the “Everything has a cause, so the universe has a cause” straw man -- an argument no serious proponent of the argument ever actually gave -- so too, a certain kind of materialist is constantly going on about the introspective trap, the Cartesian theatre, etc. The assumption is that if you’re not a materialist, then you simply must, at least implicitly, be a Cartesian of some sort. Never mind the fact that Aristotle, Aquinas, Wittgenstein, and other important critics of materialism (I would say the most important critics) were not Cartesians and indeed would reject the key elements of the Cartesian approach to the mind. (Never mind either that even when materialists attack Cartesianism, they are often aiming their fire at caricatures rather than the real McCoy -- see the posts on Paul Churchland and Daniel Stoljar linked to here.) Like other materialists, Rosenberg seems to assume too parochial and tendentious a conception of the problems and the range of possible solutions.
Be that as it may, the incoherence charge simply does not rest on any assumptions about introspection, phenomenology, or the like. Consider what I take to be the most important respect in which Rosenberg’s eliminativism is incoherent -- its denial (discussed in the previous post in this series) that any of our thoughts, or any of our spoken or written linguistic productions, has any determinate intentional content. The late James Ross, in his 1992 Journal of Philosophy article “Immaterial Aspects of Thought” (and again in his book Thought and World) admirably summarized some of the problems with this claim. I develop and defend Ross’s arguments at length in my recent ACPQ article “Kripke, Ross, and the Immaterial Aspects of Thought.”
Ross notes that if none of our thoughts has any determinate content, then none of our formal thinking is ever determinate. Adding, squaring, inferring via modus ponens, syllogistic reasoning, and the like are some of the examples of formal thinking he has in mind. To deny that our thoughts ever have any determinate content is to deny that our thoughts are ever really determinately of any of these forms. It is to claim that at best we only ever approximate adding, squaring, inferring via modus ponens, etc. “Now that,” as Ross says, “is expensive. In fact, the cost of saying we only simulate the pure functions is astronomical.”
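To make the idea of a determinate form concrete, here is the standard schema for modus ponens (the notation is just the usual propositional-logic shorthand, not anything drawn from Ross's own text):

```latex
% Modus ponens as a determinate inference schema:
% from "if p then q" together with "p", infer "q".
\[
  \frac{p \rightarrow q \qquad p}{q}
\]
```

An inference either instantiates exactly this form or it does not; there is no such thing as being "approximately" of this form. That is what makes the eliminativist's concession so costly: to say that we only ever simulate modus ponens is to say that no inference anyone makes is ever determinately of this form at all.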
In particular, there are three ways in which such a claim is incoherent. First, the claim that we never really add, apply modus ponens, etc. cannot be squared with the existence of the vast body of knowledge that comprises the disciplines of mathematics and logic. Nor is it just that mathematics and logic constitute genuine bodies of knowledge in their own right; they are also presupposed by the natural sciences. Now it is in the name of natural science that philosophers like Quine, Dennett, and Rosenberg draw the extreme conclusions about the indeterminacy of content that they do. But if natural science presupposes mathematics and logic, and mathematics and logic presuppose that we do indeed have determinate thought processes, then there is no way these philosophers can consistently draw such conclusions.
A second and related problem is that if we never really apply modus ponens or any other valid argument form, but at best only approximate them, then none of our arguments is ever really valid. That includes the arguments of those, like Quine, Dennett, and Rosenberg, who say that none of our thoughts is really determinate in content. Hence the view is self-defeating. Even if it were true, we could never be rationally justified in claiming that it is true, because we couldn’t be rationally justified in claiming anything.
Third, the claim that we never really add, square, apply modus ponens, etc. is self-defeating in an even more direct and fatal way. For coherently to deny that we ever really do these things presupposes that we have a grasp of what it would be to do them. And that means having thoughts of a form as determinate as those the critic says we do not have. In particular, to deny that we ever really add requires that we determinately grasp what it is to add and then go on to deny that we really ever do it; to deny that we ever really apply modus ponens requires that we determinately grasp what it is to reason via modus ponens and then go on to deny that we ever really do that; and so forth. Yet the whole point of denying that we ever really add, apply modus ponens, etc. was to avoid having to admit that we at least sometimes have determinate thought processes. So, to deny that we have them presupposes that we have them. It simply cannot coherently be done.
Notice that none of this requires -- any more than Rosenberg’s own arguments do -- an appeal to introspection, phenomenology, etc. When Rosenberg gives you an argument, he gives you the premises (about the success of science or whatever) that he says you are already implicitly or explicitly committed to, and then tells you what conclusion he takes logically to follow from them. He doesn’t at some point say: “Now, let me add that the reason for all of this is that it just seems from introspection of my phenomenal conscious awareness that the premises are true and that the conclusion follows” or the like. The focus is on the arguments themselves, not on his or anyone else’s introspective awareness of entertaining the arguments.
Similarly, when a philosopher like Ross argues for the incoherence of eliminativism, he simply points out what follows logically from certain premises he takes it the eliminativist himself already implicitly or explicitly accepts. He doesn’t appeal to introspection of his phenomenal awareness, any more than Rosenberg does. Ross’s argument is not: “Here is how things seem to me introspectively, and how I assume they seem to the eliminativist too.” His argument is: “The eliminativist’s arguments, like everyone else’s, presuppose such-and-such patterns of formal reasoning; yet he is also committed to denying that anyone’s arguments, including his own, are of those or any other determinate patterns. That is incoherent.” Here too the focus is on the arguments themselves, not on anyone’s introspective awareness of entertaining the arguments. Indeed, the emphasis is precisely on those aspects of an argument -- such as its formal validity or invalidity -- by reference to which we judge the way things seem to us introspectively (as in “Sure, it might seem that if all men are mortal and Socrates is mortal, then Socrates is a man, but that is in fact an invalid syllogistic form”).
In short, as with Rosenberg, what is at issue is what is objective and available from the third-person point of view -- in particular, the formal patterns of inference characteristic of logic, mathematics, and science -- not the way things seem subjectively, from the first-person point of view of the Cartesian subject and his “inner theatre.” The difference is that it is Rosenberg who has no way coherently to appeal to this body of objective, third-person truths.
The illusion that intentionality is an illusion
But the impossibility of accommodating truth of any sort is, at the end of the day, the Achilles heel of Rosenberg’s position. He explicitly acknowledges that eliminativism “deni[es] that there is anything in the brain or elsewhere that qualifies as carrying truth values,” since it rules out there being intentional content or anything else in neural circuits “that would make them truth-apt.” For a statement or thought to be true, it has first to be about something. Truth is just a matter of getting right what you think or say about the thing you are thinking or talking about. So naturally, if there is no aboutness, there is no truth either.
Yet Rosenberg also assures us that intentionality is a “myth,” “illusion,” or “figment,” and that brains (such as, presumably, the brains of non-eliminativists) can contain “misinformation.” But what exactly does myth, illusion, misinformation, etc. amount to if there is no such thing as truth to contrast it with? For if there are no truth values, then (as anyone who’s constructed a truth table can tell you) there is no falsity any more than there is truth. Falsity, like truth, presupposes aboutness -- presupposes getting wrong what you think or say about the thing you are thinking or talking about. So if there is no aboutness, there can be no falsity, error, illusion, etc. either.
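The truth-table point can be made explicit. Even the simplest table, the one for negation, assigns one of exactly two values to every row; “false” is defined only as the complement of “true” within the same two-valued scheme, so a system with no truth values has no false values either:

```latex
% Truth table for negation: falsity exists only as the
% complement of truth within a two-valued assignment.
\[
\begin{array}{c|c}
p & \neg p \\
\hline
\mathrm{T} & \mathrm{F} \\
\mathrm{F} & \mathrm{T}
\end{array}
\]
```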
Moreover, if, by Rosenberg’s own admission, even the thoughts, utterances, and writings of eliminativists are not true -- since there just is no such thing as truth on his view -- then exactly what is it that the eliminativist has got that his critic has not? What does Rosenberg mean in The Atheist’s Guide to Reality when he says that he and his fellow atheists “know the truth” while “most of religion’s best stories are false”?
To see how complete is Rosenberg’s failure to deal with this problem, consider first his account of how the “illusion” of intentionality arises. He asks us to consider the “silent ‘sound’ tokens and images” that “flit across consciousness, along with sensations and feelings that pass through it,” and also the “behavioral accompaniments” of all this phenomenology. The precise phenomenal content and sequence of such sounds, images, feelings, behaviors, etc. that occurs in the stream of consciousness of a person A who hears a language he understands is different from that occurring in the consciousness of a person B who hears a language he does not understand. And that is all the difference between A and B amounts to, in Rosenberg’s view -- a difference in the phenomenal content and sequence of sounds, images, feelings, behaviors, etc. But by the same token, Rosenberg suggests, the difference between someone who seems to have intentionality at all -- as both A and B seem to have it -- and a third subject C who does not even seem to have it, is just the same sort of difference. What happens is just that the precise phenomenal content and sequence of sounds, images, feelings, behaviors, etc. that A and B exhibit is different from the sequence that C exhibits, and different in a way that generates the illusion of intentionality. In all three cases, though, all there really are are the intentionality-free sounds, images, feelings, behaviors, etc.
Now one problem with this is that it seems either to conflate strictly intellectual activity with what Scholastic writers would call “phantasms” (such as visual and auditory mental images) -- to allude to a distinction I explained in the first post in this series -- or implicitly to deny that strictly intellectual activity, as opposed to mere imagery and the like, really exists at all. And Rosenberg gives no non-question-begging reason for either the conflation or the denial.
But there is another fallacy here. To see it, consider first the following analogy. Suppose there are two bowls full of milk with Alpha-Bits cereal floating on top, sitting outside on a windy day. The bits are swirling about randomly across the top of each bowl, and in one of them some of the letters gradually form the sequence C-A-T-S. Does the formation of this sequence generate in that particular bowl the illusion that it is thinking about cats? Does this bowl thereby come falsely to suppose that it has intentionality, while the other bowl remains free of this illusion? I doubt even Rosenberg would think so. There is absolutely nothing in any sequence the shapes could occur in that would generate even the illusion of intentionality, let alone intentionality itself. Illusions are just not the sorts of thing sequences of meaningless shapes by themselves can generate, no matter how many shapes there are and how complex the sequence is.
Someone would have to be extremely philosophically inept to think it even prima facie plausible that this scenario could generate such an illusion. He’d have to suppose that it is significant that the sequence in question looks like the English word “cats.” Of course, it is not at all significant, because the written or spoken word “cats” has its intentionality in only a derived rather than intrinsic way (to use another distinction explained in the first post in this series). There is nothing in the shapes making up the word that gives them any connection whatsoever to cats. The connection is entirely conventional. Hence there is nothing whatsoever in the sequence of Alpha-Bits appearing in the bowl that could get you the slightest distance toward even the illusion of thinking about cats. Someone who thinks otherwise is making the mistake of confusing derived intentionality with intrinsic intentionality.
But Rosenberg’s fallacy is even worse than that. Recall that on his view, there is no such thing as either intrinsic or derived intentionality. Absolutely nothing has even the latter -- not the words and sentences on this page, not the letters or sequences of letters in a bowl of Alpha-Bits cereal, and not the sounds and images that pass through consciousness. Nor could there be such a thing if there is no intrinsic intentionality, for there is in that case nothing for things with purportedly derivative intentionality to derive it from.
Hence, even a bowl of Alpha-Bits -- which are of themselves utterly meaningless but have a kind of derived intentionality insofar as they were made by us for the purpose of counting as letters -- is not the best analogy for Rosenberg’s model of the stream of conscious images and sensations, but just a first approximation. A better analogy would be something like a pool of water in which are floating various bits of random detritus -- fallen leaves, shards of wood, seaweed, dead bugs, bits of froth and the like. Suppose there were two such pools and atop one of them a sequence of shapes randomly formed that looked very roughly like this: П Δ ‡ ∂. Does the formation of this sequence generate in that particular pool the illusion that it is thinking about cats? Does this pool thereby come falsely to suppose that it has intentionality, while the other pool remains free of this illusion? The suggestion is even more of a non-starter than the Alpha-Bits scenario was. And it remains a non-starter no matter how much complexity we add to the causal series that leads to the formation of this sequence. Rosenberg himself insists that you will never get intentionality out of non-intentional bits of matter no matter how complex the causal relations between them. But how exactly does the complexity of causal relations generate illusions in a system (whether illusions of intentionality or of anything else), any more than it can generate intentionality? Rosenberg does not tell us; he just asserts that this is how the illusion arises.
Why would Rosenberg think his account of the origins of the purported illusion is even prima facie plausible? I suggest that what is going on is this. Consider the difference between:
1. shapes, sounds, etc. that have derived intentionality (e.g. the words on this page, or a child’s deliberately produced arrangement of Alpha-Bits into the word “cats”)
2. shapes, sounds, etc. that we treat as if they had intentionality (e.g. the chance arrangement of Alpha-Bits into the sequence C-A-T-S, a chance arrangement of detritus that vaguely looks like a word)
3. shapes, sounds, etc. that not only have no derived intentionality, but which we do not even treat as if they had it (e.g. arrangements of detritus of whose existence we are completely unaware)
Now in cases 1 and 2 various illusions of intentionality can and do arise. A child or unsophisticated person might be so used to seeing the sequence of shapes C-A-T-S as a word that he comes to think that the meaning is somehow inherent in the shapes themselves. That would be an illusion of intrinsic intentionality where what really exists is only derived intentionality. A random arrangement of detritus might by chance look similar enough to the word “cats” that an observer might falsely assume it to have been deliberately arranged by someone to spell out that word. That would be an illusion of derived intentionality, where what really exists is only as-if intentionality. And we can imagine someone looking at the detritus floating in the pool of water and saying: “If I tilt my head and squint really hard, then the sequence П Δ ‡ ∂ almost looks as if it were the word ‘cats.’” That might be characterized as a deliberately generated illusion of derived intentionality.
But there is nothing in any of these examples that suggests that absolutely all intentionality might be an illusion. In every case, intrinsic intentionality is (for all Rosenberg has shown) lurking in the background as a precondition of the illusion. The child or unsophisticated person mistakes the derived intentionality of a sequence of shapes for intrinsic intentionality, and the careless observer mistakes some random sequence of shapes for a word, but only because language users with intrinsic intentionality have already imparted derived intentionality to sequences of shapes like the ones in question. The person who squints so as to make a sequence of random shapes look like a word does so only because he is already aware of real words which have derived intentionality, which in turn presupposes intrinsic intentionality.
The only case where there is clearly no intentionality of any sort present is case 3. But that is also the case where we have no independent examples of the illusion of intentionality arising -- no cases where anyone would, independently of some prior commitment to eliminativism, claim that an illusion of intentionality does or even could arise. Yet it is an example of an illusion of intentionality arising in a case like 3 that Rosenberg would need in order to make his position remotely plausible. I would suggest that he thinks that such a case is plausible because cases 1, 2, and 3 have this much in common: There is, in none of these cases, any intrinsic intentionality present in the shapes themselves. And Rosenberg is implicitly inferring from the fact that the illusion of intrinsic intentionality can arise in cases like 1 and 2 that it can arise in cases like 3 too. But that simply doesn’t follow.
In short, any “illusion” is essentially a case of as-if intentionality, as when the child or unsophisticated person perceives what is really only derived intentionality as if it were intrinsic intentionality, or when someone accidentally or deliberately perceives what has no intentionality at all as if it had at least derived intentionality. And as-if intentionality, like derived intentionality, presupposes intrinsic intentionality. Certainly Rosenberg -- who acknowledges that derived intentionality presupposes intrinsic intentionality -- has given no non-question-begging reason to think either that an illusion of intentionality would be anything other than a case of as-if intentionality, or that there could be as-if intentionality in the absence of intrinsic intentionality.
“Information” without misinformation
In the last section of his paper, Rosenberg offers what he seems to think is a solution to these problems -- a sketch of what might replace the intentionality-laden notions (like “illusion”) in terms of which eliminativists, like everyone else, routinely express themselves. He begins by suggesting that we think of neural activity as a “map” of the external world and our behavioral responses to it, though he immediately acknowledges that the eliminativist cannot coherently regard it as anything like a literal map, laden as the ordinary notion of a map is with intentionality. Maps represent things, their elements stand for this or that, they need to be interpreted, etc., and eliminativism denies that there are any such things as “representation,” one thing “standing for” another, “interpretation,” or the like. The idea is rather that there is a causal correlation between structures of neural circuits on the one hand, and elements of the external world and of behavioral responses to it on the other.
What we have here is something like the “causal covariance” account of information associated with thinkers like Fred Dretske. But Rosenberg acknowledges that such accounts fail as accounts of intentional content or semantic information. As an eliminativist he thinks there is no such thing as information in that sense -- that is, in the ordinary, everyday sense. He is, accordingly, using “information” in a purely technical sense. He is not saying that information in the ordinary sense can be explained in terms of causal covariance; given the indeterminacy problems discussed in the previous post in this series, he acknowledges that it cannot be explained in that way, and thus must be eliminated by the naturalist. He is essentially suggesting instead that we replace the ordinary notion with causal covariance. Causal covariance, you might say, is the new information.
The difference between the eliminativist and the non-eliminativist, then, can be described not in terms of differences in their beliefs, the propositions they would respectively affirm or deny, the meanings of the sentences they would write or utter, etc. -- terms which, of course, all presuppose the intentionality the eliminativist denies. Rather, it can be described in terms of the differences in the respective structures of neural circuitry to be found in their brains. The eliminativist has neural structures of this one sort mediating the causal input from the external world and the causal output leading to his behavior, including his linguistic behavior (understood as meaningless sounds and scribblings); the non-eliminativist has neural structures of that other sort mediating the causal input from the external world and the causal output leading to his own, different behavior, including his linguistic behavior (also understood as meaningless sounds and scribblings).
So far so good. But now comes the sleight of hand. Here and there in his paper, Rosenberg casually drops in the word “misinformation,” in reference to something the brain might also contain, alongside the “information.” And of course, if his entire positive account of what might replace intentionality boils down to a theory of “information,” then he needs something like the notion of misinformation in order to ground his assertions that intentionality is an “illusion,” that the key claims of religion are “false,” etc.
The problem is this. Naturally, Rosenberg cannot mean “misinformation” in the ordinary sense, because he admits that he is not entitled to the notion of information in the ordinary sense. Hence “misinformation” as he uses it cannot mean anything like “false statements,” “erroneous descriptions,” or the like, for anything like that would entail intentionality. But then what does it mean?
Rosenberg doesn’t tell us. If “information” is just “causal covariance,” are we supposed to think of “mis”-information as the absence of causal covariance? That can’t be it. For one thing, the absence of causal covariance would give us non-information, but that is not the same thing as mis-information (any more than a stone or a cup of water counts as a misogynist by virtue of not loving women). For another thing, Rosenberg does not deny that there are relations of causal covariance between the neural structures of non-eliminativists on the one hand, and their external environments and behavioral responses to it on the other. Yet he would presumably want to characterize their neural structures as embodying “misinformation.”
Is “misinformation” what neural structures carry when they are in some way maladaptive? That can’t be it either. One of the big themes of Rosenberg’s The Atheist’s Guide to Reality is how adaptive are many of the views he regards as false. The key ideas Rosenberg is trying to disabuse us of are, in his view, so hard to eradicate precisely because despite being “illusions,” they have likely been hardwired into us by natural selection. Nor can what Rosenberg has in mind be “misrepresentation” in the sense defended by “teleosemantic” theories of meaning, for as we saw in the previous post, Rosenberg acknowledges that such theories cannot solve the indeterminacy problem. (We saw in another earlier post how Dretske’s attempt to explain misrepresentation founders on the indeterminacy problem.)
In short, Rosenberg’s latest paper makes no progress whatsoever in answering an objection raised against another article in which he presented these ideas almost four years ago. If “information” is just causal covariance, then all he is entitled to say is that the “information” in the eliminativist’s brain is different from the “information” in the non-eliminativist’s brain. And that’s it -- different. Not “truer than,” “more accurate than,” “better than,” etc. Just different. There are these causal patterns, and then there are those causal patterns. End of story.
Rosenberg presents The Atheist’s Guide to Reality as an unflinching account of what anyone committed to the scientism underlying contemporary atheism ought to accept if he is consistent. And to his credit, he is indeed far more consistent than most other atheists are. But he is not entirely consistent. If he were entirely consistent -- or as consistent as an eliminativist can be (for it is impossible to be an entirely consistent eliminativist) -- he would give up not only “intentionality,” “semantic meaning,” “truth,” etc. but also “illusion,” “myth,” “figment,” “misinformation,” and related notions. He would not only give up the views he regards as illusions, myths, etc.; he would also give up the claim that they are illusions, myths, etc. He would give up atheism as well as theism, science as well as superstition, and certainly any language that implies that theists and other non-naturalists are wrong, irrational, stupid, misinformed, or in any way whatsoever deficient compared to atheists and naturalists. Indeed, he would give up eliminativism itself, along with any other position. He would have to become like Cratylus -- perhaps the most consistent eliminativist that ever lived -- merely moving a finger rather than putting forward any thesis.
But where’s the fun in that? And Rosenberg’s book did, after all, purport to tell us how to enjoy life without illusions…