Sunday, January 11, 2009

Computers, minds, and Aristotle

The recently published Philosophy of Computing and Information: 5 Questions, edited by Luciano Floridi, is a collection of quasi-interviews with prominent philosophers, cognitive scientists, and computer scientists. (The same five questions were sent to each of the contributors, who were asked to respond to them either question-by-question or in the form of an informal essay. Hence my label “quasi-interviews.”) Several of the contributions are particularly interesting from an Aristotelian point of view.

As readers of The Last Superstition know, I argue there that the “computationalist” view that the mind should be thought of as “software” run by the “hardware” of the brain is either incoherent or (if it is to be made coherent) implicitly committed to a broadly Aristotelian metaphysics. And in neither case can it vindicate a materialist conception of human nature. One reason for this is that the key concepts required to spell out this position – “software,” “program,” “information,” “algorithm,” and so forth (all of which are somehow to be understood as purely physical properties alongside mass, electric charge, and the like, if the materialist is going to make hay out of the view) – are suffused with intentionality, the “directedness” of a thing toward something beyond itself. Now on at least one common interpretation of the computationalist view, intentionality is among the features of the mind the view is supposed to explain – in which case it cannot coherently appeal to notions which presuppose the existence of intentionality. Even those versions of computationalism which do not claim to explain intentionality face the problem that nothing like intentionality is supposed to exist at the level of physics, at least given the mechanistic conception of nature materialists are implicitly committed to. As Jerry Fodor puts it in Psychosemantics:

“I suppose that sooner or later the physicists will complete the catalogue they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t; intentionality simply doesn’t go that deep.” (p. 97)

Hence the notions in question are simply not available to a consistent materialist. And if a materialist nevertheless digs in his heels and insists that “information,” “algorithms,” and the like really are somehow intrinsic to the physical world, then he will in effect have conceded that something like Aristotelian final causes exist after all, and thus abandoned materialism. For if purely physical processes embody genuine “information,” follow “algorithms,” etc., then that entails that of their nature (by virtue of their form, as Aristotelians would say) they point beyond themselves as toward a goal, after the manner of a final cause. (“Information” is information about something; an “algorithm” has an inherent end the rules it embodies are meant to lead to; and so on.) Materialists fail to see this because, like most modern philosophers, they have only the vaguest idea of what Aristotelian formal and final causes are, and labor under all sorts of crude misconceptions (e.g. that for a physical process to have a final cause is for it to seek a goal in something like a conscious way).

For the details, see The Last Superstition (especially pp. 235-47). It was interesting, though, to see that at least one contributor to Philosophy of Computing and Information seems to have come to something like the conclusion I defend in the book. Specifically, the neuroscientist Valentino Braitenberg says:

“The concept of information, properly understood, is fully sufficient to do away with popular dualistic schemes invoking spiritual substances distinct from anything in physics. This is Aristotle redivivus, the concept of matter and form united in every object of this world, body and soul, where the latter is nothing but the formal aspect of the former. The very term ‘information’ clearly demonstrates its Aristotelian origin in its linguistic root.” (Floridi, p. 16)

In other words, to describe some physical process as inherently embodying “information,” while it does rule out dualism of the Cartesian sort, is nevertheless not consistent with the crude materialist claim that “matter is all that exists”; for it is implicitly to accept something like Aristotle’s notion of formal cause (precisely, I would add, because it is implicitly to accept something like his notion of final cause). As I have put it in earlier posts, the neural processes underlying e.g. a given action are merely the material-cum-efficient causal side of an event of which the thoughts and intentions of the agent are the formal-cum-final causes – thus invoking all four of the famous Aristotelian causes. (I develop the point a little further in this review of the psychologist Jerome Kagan’s An Argument for Mind.)

To be sure, Braitenberg’s own claims are only suggestive, and I do not claim he would accept everything I say about this issue in my book (much less everything, or anything, else I say there!) But he clearly sees that the standard materialist assumptions are faulty, as do some other contributors to the Floridi volume. Brian Cantwell Smith’s chapter, which is among the lengthier and more philosophically substantial contributions, is very good on the deep conceptual problems underlying much work done in this area. Key concepts are ill-defined, and unjustified slippage between, or outright conflation of, the various possible senses of crucial theoretical terms (including “computation” itself) is rife. But the key problem, as he sees, is what he calls the “300-year rift between matter and mattering” that opened up with Descartes (p. 46) – that is to say, the early moderns’ conception of matter as inherently devoid of meaning or significance. Cantwell Smith calls for a new metaphysics to “heal” this rift (being apparently unaware of, or at least not reconsidering as Braitenberg does, the old Aristotelian metaphysics the rejection of which was precisely what opened up the rift in the first place).

In his own chapter, Hubert Dreyfus, summarizing themes that have long characterized his work, also criticizes “Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values” (p. 80). Attempts to find some computational mechanism by means of which the brain assigns significance or meaning to the world always end up surreptitiously presupposing significance or meaning, and attempts to avoid this result tend to lead to a vicious regress. (This, as Dreyfus argues, is what ultimately underlies the well-known “frame problem” in Artificial Intelligence research and the “binding problem” in neuroscience.) As is well-known, Dreyfus makes good use of the work of writers like Heidegger and Merleau-Ponty in criticizing AI, and in particular the notion that we can make sense of the idea of a world inherently devoid of significance for us. But this phenomenological point does not answer the metaphysical question of how and why the world, and ourselves as part of the world, have significance or meaning in the first place. For that – as I argue in The Last Superstition – we need to turn to the Aristotelian tradition, to the concepts of formal and final causation rejection of which set modern thought, and modern civilization, on its long intellectual and moral downward slide.

3 comments:

Thoughts said...

A well-written article, thank you.

You say: "Attempts to find some computational mechanism by means of which the brain assigns significance or meaning to the world always end up surreptitiously presupposing significance or meaning, and attempts to avoid this result tend to lead to a vicious regress"

I would add the symbol grounding problem as an example of the regress. Searle’s Chinese Room is an example of Aristotle’s intuition that a sense would need to be self-aware to avoid the regress: Searle has the translator of Chinese hand the results to a self-aware, external observer. There seems to be a recursion of the regress in philosophy of mind. The inability of philosophers to solve this problem led directly to eliminativism; Dennett, for instance, uses it in Consciousness Explained simply to mock the idea of “mind”. However, ignoring something does not make it go away!