Saturday, December 12, 2009
In response to my most recent post on Alex Rosenberg, a philosopher emails the following comment:
Rosenberg has to know that, in the technical sense, there is no such thing as "misinformation." The metal bar dipped in a saline solution that proceeds to rust can't be "misinformed" about its environs because information just is causal covariation among physical states. His use of that term is a blatant attempt to smuggle intentionality in through the back door while pretending not to; why, why, oh why! won't anyone of note call him out on this transparent attempt to bulls**t his way out of the corner he's painted himself into?
This is an extremely important point that I should have emphasized in my post. What my correspondent is referring to here is sometimes called the “misrepresentation problem” for naturalistic theories of meaning. Suppose the naturalist claims that for A to represent or contain information about B is just for A to have been caused by B in such-and-such a way. In that case, how is it possible for us ever to misrepresent anything? Suppose Fred thinks he sees a dog in the distance when in fact what he is looking at is a cat. How can his perceptual experience (mis)represent what he is seeing as a dog, given that it was not a dog that caused it?
One well-known attempt to get around this problem is to appeal to the “teleological function” served by a representation, where a “teleological function” is to be understood on the model of a biological function. The heart serves the biological function of pumping blood, and that remains its function even if in some particular context it is not actually carrying out that function – say because Hannibal Lecter is using it for his supper. Similarly, if the function of some brain process is to represent dogs, it will do so even if in some particular context something other than a dog triggers it.
Various technical objections might be raised against this reply, but the central problem is this. The whole point of “naturalistic” theories of meaning or representation is to find a way to account for meaning or representation given a mechanistic, non-teleological conception of the natural world. Aristotelian teleology or final causation is supposed to be chucked out the window, and a stripped-down version of what Aristotelians call “efficient causation” is supposed to do all the explanatory work that needs to be done. So how can such a theory coherently appeal to the notion of “teleological function”? The answer, as it happens, is that “teleological function” is in turn something naturalists have tried in other contexts to give a “naturalistic account” of. And these “naturalistic accounts” always end up attempting to reduce teleology to some pattern of efficient causation or other. There are various technical problems with these accounts too. But the key point is this: When naturalistic philosophers of mind find that they cannot account for everything in efficient-causal terms, they tend to resort to teleological language; and when called on to account for such language, they insist that it can be cashed out in non-teleological or efficient-causal terms. (Something similar occurs, incidentally, in the use philosophers of biology make of the notion of a “biological function.”) It is sheer sleight of hand, a circular farce of the sort I’ve already called attention to in earlier posts. As I argue at length in The Last Superstition, recent “naturalistic” theories of mind, of biological function, and of other phenomena problematic for a mechanistic conception of nature invariably either lapse into this sort of incoherence or implicitly acknowledge that something like Aristotelian formal and final causes are real after all.
Now of course, Rosenberg holds that we need ultimately to eschew any talk of “meaning,” “representation,” and the like in any event. But that only makes his reference to “misinformation” more baffling, not less. It’s bad enough that he uses “information” talk as if it could plausibly ground a reconstruction of, or “successor” to, the concept of knowledge when it is entirely stripped of any intentional connotations. All we have in that case is a bare causal relation between A and B, with no explanation of why we should refer to the one as containing “information” about the other in the absence of any intentionality either intrinsic to the physical facts themselves or derived from an outside observer. But at least we have that much. What do we have, though, when there isn’t even a causal relation between A and B, for the simple reason that B doesn’t exist? In what sense does A contain “misinformation” about B when A is not only devoid of either intrinsic or derived intentionality, but was not even caused by B in the first place?
Perhaps Rosenberg has an answer to such questions, but if so he does not give us the slightest hint as to what it is, or even acknowledge that there is a question to answer in the first place. Instead he simply dismisses as “puerile” any suggestion that eliminative materialism might be incoherent. You see, “17th century physics” “ruled out” any appeal to purposes, so there simply must be a non-purposive explanation available for any phenomenon; and because there is always such an explanation available, we know that 17th century physics was right to rule anything else out.
Who says merry-go-rounds are just for kids?