We’re
looking at Alex Rosenberg’s attempt to defend eliminative materialism from the
charge of incoherence in his paper “Eliminativism without
Tears.” Having set out some
background ideas in an
earlier post, let’s turn to the essay itself. It has four main parts: two devoted to arguments
for eliminativism, and two devoted to responses to the charge of
incoherence. I’ll consider each in turn.
Neuroscience and eliminativism
Rosenberg evidently
supposes that eliminativism is so clearly correct that there must be some way of making it coherent. Hence he devotes the first half of the paper
to setting out some arguments in its defense.
If those arguments are powerful, then (he seems to think) the reader
will have to concede that there must
be some way of making eliminativism coherent even if it turns out that Rosenberg fails to show exactly what
that way is in the second half of the paper.
(Though again, he does try to provide a way.)
The first
argument claims that “neuroscience makes eliminativism about propositional
attitudes unavoidable.” A propositional
attitude is a relation between a thinker and a certain proposition or
content. When we say that Fred believes
that it is raining, we are attributing to him the attitude of believing the
proposition that it is raining; when
we say that Ethel hopes that it is sunny, we are attributing to her the
attitude of hoping that the proposition that it is sunny is true; and so forth.
Neuroscience, Rosenberg tells us, shows that there is no such thing as
believing, hoping, fearing, desiring, or the like.
Now in fact,
it takes very little thought to see that “neuroscience” shows no such
thing. For there is nothing in the neuroscientific
evidence cited by Rosenberg that couldn’t be accepted by an Aristotelian, a
Cartesian, a Wittgensteinian, a Whiteheadian, or an adherent of some other metaphysics. What Rosenberg should say is: “Neuroscience, when conjoined with the specific version of
naturalism taken for granted by many (though by no means all) contemporary academic
philosophers of an analytic bent, makes eliminativism about propositional
attitudes unavoidable.” That claim is plausible, if, for obvious
reasons, not quite as earth-shattering as Rosenberg’s way of putting it
was.
The reason
it is plausible is as follows. Suppose
you assume -- as Aristotelians, Whiteheadians, Russellians, panpsychists, et
al. would not, but most contemporary philosophical naturalists within analytic
philosophy do -- that there is nothing more to matter than what natural science
attributes to it. And suppose you assume
also -- as Aristotelians, Cartesians, et al. would not, but most contemporary
philosophical naturalists within analytic philosophy do -- that if there is
such a thing as thought then it must be entirely embodied in some sort of
corporeal process. Then you will
naturally suppose that thinking, if there is such a thing, must involve a
corporeal process having no properties over and above those described by
physics, chemistry, neuroscience, and the like.
And if there is no plausible candidate for such a process, then you will
have reason to conclude that there is no such thing as thinking.
Now perhaps the
only plausible candidate for such a process -- not that it actually is
plausible full stop (it is not) but plausible relative to the assumptions in
question -- is something like the “Language of
Thought” hypothesis (LOTH). This is
the view that a thought is a sentence-like symbolic structure in the brain, and
that thinking -- the process of going from one thought to another -- involves
the transition from one such symbolic structure to another in accordance with
the rules of an algorithm. The
postulated symbolic structures are sentence-like insofar as they have syntax
and semantics, just like the sentences of the languages we are familiar with. Only they are not sentences of English,
German, or any other natural language, but rather of “Mentalese” -- a
hypothetical language below the level of consciousness. The attraction of the idea is that it seems
to give us something that has the characteristic features of thought (semantics
and syntax) and yet is purely material (as in one sense a written or spoken
sentence is).
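To make the picture concrete, here is a toy sketch of the kind of machinery LOTH envisages (the "Mentalese" tokens and transition rules are my own invention purely for illustration; nothing like this particular example appears in Rosenberg). The thing to notice is that the program passes from one sentence-like structure to another entirely on the basis of the tokens' shapes:

```python
# Toy "Language of Thought": "thoughts" are sentence-like token structures,
# and "thinking" is rule-governed transition between them. The tokens and
# rules are invented for illustration; the machine handles them purely by
# shape, never by meaning.

RULES = {
    # structure present        -> structure derived from it
    ("RAINING",):                ("GROUND", "WET"),
    ("GROUND", "WET"):           ("TAKE", "UMBRELLA"),
}

def think(thoughts):
    """Apply transition rules until no new structure can be derived."""
    derived = set(thoughts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in RULES.items():
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(think({("RAINING",)}))
# e.g. {('RAINING',), ('GROUND', 'WET'), ('TAKE', 'UMBRELLA')}
```

Everything in the sketch is mechanical, and any semantics the tokens have must be assigned from outside the system. That is at once the view's attraction for the naturalist and, as we are about to see, its weak point.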
An obvious
problem with this theory is that nothing counts as a sentence apart from language
users who form the convention of using it as a sentence. Sentences and the like are not natural kinds but
artifacts. The hypothesis that there are
naturally occurring sentences in the brain thus makes about as much sense as
the hypothesis that there are naturally occurring sewing needles in the bones,
naturally occurring purses in the skin, or naturally occurring money in the
teeth. You could use teeth as money, skin as a purse, or a piece of bone as a sewing
needle, but until you decide to do so the body parts in question don’t count as
any of these things. Similarly, someone
could decide to count some brain
process as a sentence, but until someone
does so it is not a sentence.
Putting that
aside, though, Rosenberg says there is another problem with the idea that there
are sentence-like symbols in the brain. Consider
the well-known philosophical distinction between “knowledge how” and “knowledge
that” -- that is to say, between the having of certain dispositions and abilities,
and the grasping of certain propositions. Rosenberg holds -- correctly in my view --
that these are mutually irreducible. Having
dispositions and abilities cannot be analyzed in terms of having propositional
knowledge, and having propositional knowledge cannot be analyzed in terms of having
dispositions and abilities.
But now consider
relatively simple organisms like sea slugs, worms, and fruit flies. No one would attribute propositional
knowledge to them, and unsurprisingly, their nervous systems exhibit only the
sort of stimulus/response wiring needed for the sorts of dispositions and
abilities we know them to have. Now our
nervous systems are vastly more complicated than theirs, but the evidence seems
to show that the difference from what is going on in sea slugs and the like is
one of degree rather than kind. But if
what is going on in them is just a rudimentary sort of “knowledge how” rather
than “knowledge that,” then a (far more complex) “knowledge how” rather than
“knowledge that” is all that can be going on in us. Neuroscience, Rosenberg concludes, simply
doesn’t leave any room for propositional knowledge, and in particular no room
for anything like the LOTH and its sentence-like structures in the brain.
Again,
though, it isn’t really neuroscience
per se that rules out propositional knowledge, but rather neuroscience as seen through the lens of Rosenberg’s
philosophical assumptions. Rosenberg’s
conclusion is “So much the worse for the propositional attitudes.” But of course, it is perfectly open to anyone
to conclude instead “So much the worse for the specific form of naturalism
Rosenberg and his circle of friends in current academic philosophy are
committed to.” And since the
eliminativism that results is (as we shall see) incoherent, that is the
conclusion we should draw. (Rosenberg
has a bit of the Hegel complex -- supposing he has brought philosophy itself to a climax when in fact the most he has done is
bring a certain culturally parochial,
merely decades-old style of philosophy to a climax.)
It is
obvious enough how the evidence Rosenberg cites might be interpreted by someone
coming at the issue from a different philosophical perspective. A Cartesian, for example, would say that we
shouldn’t be looking for propositional attitudes in the brain in the first
place, but rather in the res cogitans. The neurological evidence, he might even say,
is exactly the sort of thing we should expect given that (as Cartesians themselves
have of course claimed for centuries -- no scrambling to deal with novel
evidence here) res extensa is of its
nature entirely devoid of thought.
For very
different reasons, an Aristotelian or a Wittgensteinian would say that looking
for beliefs and desires in neural structures is as silly as looking for foliage
in chloroplasts or surface tension in individual H2O molecules --
and sillier still is denying that foliage and surface tension exist when one
doesn’t find them there. It is only the
human being as a whole who can properly be said to have propositional
attitudes. Nor is having them like
having a coin in one’s pocket, a crook in one’s spine, or a limp in one’s
gait. That is to say, it isn’t a matter
of possessing a kind of object, or a bodily attribute, or a behavioral
tendency. Nor does the point have anything
to do with “emergence,” if that is understood as the idea that “lower-level”
features are as metaphysically fundamental as the reductionist supposes, but
the “higher-level” ones “emerge” from them in a way that is either
metaphysically exotic or simply too complex for us practically ever to know all
the details. “Lower-level” features aren’t more fundamental in the first
place. A brain and nervous system are if
anything less fundamental than the
organism of which they are a part, since they are what they are only relative
to the whole.
From the
point of view of Aristotelians, Wittgensteinians, and other radically anti-reductionist
and metaphysically pluralist philosophers, Rosenberg’s position is a classic
example of Procrustean dogmatism -- of forcing the richness of the real world
into one’s simplistic ontology, rather than making one’s ontology fit the
richness of the real world. To see how
this works in the case at hand, consider the following analogy. Suppose that among the innumerable pebbles,
bits of driftwood and seaweed, etc. on a vast beach there were here and there a
number of such objects that by chance had appearances roughly like the following:
A, B, C, D … No collection of these objects that arose through chance would
amount to anything more than a meaningless string of shapes, not even if the collection
happened to look like this: FOUR SCORE AND SEVEN YEARS AGO. Now of course, the beginning of the Gettysburg
Address looks like that. But it is a sentence
fragment rather than a random collection of shapes. You will never find the difference between
them, though, if you look only at the two sets of shapes qua shapes. The shapes are only part of the story, and
not the most important part.
Similarly,
the neural similarities between sea slug and human being are only a part of the
story, and not the most important part. Trying
to understand human beings in terms of what they have in common neurologically with
sea slugs and other lower animals is like trying to do linguistics exclusively in
terms of what the Gettysburg Address has in common with the bits of matter one
might find washed up on the beach. It
is, the Aristotelian would say, to focus exclusively on material and efficient
causes and to ignore formal and final causes, thereby simply ignoring rather
than explaining the totality of the evidence.
As with the difference between the shapes on the beach and the
Gettysburg Address, the difference between sea slug and man is not merely a
matter of quantitative differences between aggregates of homogeneous
elements.
Of course,
Rosenberg would deny that there are formal and final causes. He would deny that Aristotelianism,
Wittgensteinianism, Cartesianism, etc. constitute viable alternatives to
naturalism. He would say that there is, ultimately,
no deep metaphysical difference in kind between the Gettysburg Address and the
random arrangement of shapes on the beach -- the former, he would say, is just
what you get when meaningless shapes are arranged by highly complex but equally
meaningless neural processes in a highly complex but equally meaningless
cultural and historical context, rather than by relatively simple meaningless processes
like tidal action. The point, though, is
that his interpretation of the neuroscientific evidence is at best one interpretation
among others, and he has given us no non-question-begging reason for preferring
it to the others. You certainly won’t
find such a reason in the paper, or in The Atheist’s Guide to Reality.
Rosenberg
insists in his defense that natura non
facit saltum. What counts as a saltus or jump is itself a
metaphysically complicated question, but I would certainly agree with the
more general point that you can’t get an effect that isn’t somehow prefigured in
its total cause -- that is, after all, a hoary Scholastic principle. What that shows in the present context,
though, is that since there is intentionality
in us, there must be something in our total cause capable of getting it into us. If Rosenberg’s dogmatically anti-teleological
metaphysics can’t handle that, so much the worse for it. Nor do you have to be a theist or an
old-fashioned Scholastic Aristotelian like me to draw that conclusion. Just
ask Thomas Nagel.
Teleosemantics and eliminativism
Rosenberg’s
other argument for eliminativism about intentionality is this. All naturalistic theories of intentionality,
including the Darwinian teleosemantic approach he thinks is the most plausible,
founder on indeterminacy problems. First,
for reasons of the sort familiar from Quine’s indeterminacy of translation argument,
Fodor’s disjunction problem, and the like, no naturalistic theory can account
for how a thought or utterance can have this
determinate content rather than that
one -- to use Quine’s example, for how it can be a thought about (say) rabbits rather than undetached rabbit parts or temporal
stages of a rabbit. Second, he also
notes that there is what he calls a “proximal/distal indeterminacy problem” insofar
as there are indefinitely many links in a causal chain leading to any neural
structure in which the naturalist would want to locate a thought, and there is
no principled reason to think that the structure represents this link in the chain rather than that one. (Though Rosenberg doesn’t note the connection,
this is a problem to which, as I have often noted, Karl Popper and Hilary
Putnam have in different ways called attention.)
I have
developed and defended both sorts of indeterminacy argument many times -- in
the first case most systematically in my American
Catholic Philosophical Quarterly article “Kripke,
Ross, and the Immaterial Aspects of Thought,” and in the latter case most
systematically in my Advances in Austrian
Economics article “Hayek,
Popper, and the Causal Theory of the Mind.”
Interested readers are referred to those. (I have also addressed these issues here at
the blog -- see the posts on intentionality, Kripke, Dretske, Popper, Putnam,
etc. linked to here.)
Suffice it
for present purposes to say that I think Rosenberg is absolutely right about
this much: There is no way to reconcile naturalism with our having determinate
thought contents. Where we differ is
over the lesson to be drawn from this.
Since he is a committed naturalist, Rosenberg concludes that we simply
do not have any determinate thought contents.
Since I maintain that it is impossible in principle coherently to deny
that we have determinate thought contents, I conclude that naturalism is false.
Why is it
impossible to do so? And how does
Rosenberg try to get around this incoherence problem? I’ll turn to those questions in the next
post.
In the very first comment on Part I, I wrote, "One thing I think Rosenberg is right about is that 'brain states don't have propositional content,' but even there he draws the wrong conclusion. James F. Ross would have taken that as a point in favor of the immateriality of mind."
And now Ed writes, "Suffice it for present purposes to say that I think Rosenberg is absolutely right about this much: There is no way to reconcile naturalism with our having determinate thought contents. Where we differ is over the lesson to be drawn from this."
So please permit me to quote Jayne from "Firefly": Saw that comin'.
"Second, he also notes that there is what he calls a “proximal/distal indeterminacy problem” insofar as there are indefinitely many links in a causal chain leading to any neural structure in which the naturalist would want to locate a thought, and there is no principled reason to think that the structure represents this link in the chain rather than that one."
This is interesting in respect to the Libet experiment that many reductionists claim "shows" there is no free will. (Notably, Libet never made that claim, and, in fact, concluded the opposite.) They simply pick some point of neural activity before the person is conscious of making a decision and say, "See: the decision was really made there!"
What in the world causes them to claim that THAT bit of neural activity is "the decision," other than the desire to deny free will, is not clear.
The question I'm never clear about in these discussions of reductionism vs. A-T philosophy is whether the debate is purely about the most fundamental or true ontological description of the world, what "objects" and "causes" exist in the world, or whether it goes beyond that to be a debate about the actual empirical findings we can expect to see from continued scientific investigation. The thought-experiment I always come back to in debates about reductionism is the idea of a mind upload, where future scientists would be able to map out a real human brain down to every synapse, and then create a simulated brain with the same neural layout, and with accurate rules for how individual neurons interact with their immediate neighbors based on their connections and other physical properties (but the simulation wouldn't be programmed with separate explicit rules for any higher-level properties of the brain above the neural level--things like language usage would be expected to "emerge" from the layout and local interactions of the neurons). "Reductionism" in the purely predictive, empirical sense would suggest that if the simulators get the details right, the simulated brain would behave just like the original living brain--it would pass long-term Turing tests, people who knew the person the original brain came from could converse with it at length and not detect any difference in its thinking or personality, etc. I think one commenter I discussed this with previously said he thought A-T philosophy wouldn't definitively rule out the idea that this would work, but some of your comments make me wonder if you would agree, as with the argument above against the position that "thinking, if there is such a thing, must involve a corporeal process having no properties over and above those described by physics, chemistry, neuroscience, and the like". Would it be a sort of falsification of A-T ideas if an upload of this type was successfully created and displayed the same wide-ranging thinking abilities as a normal human, or would the argument just be that despite all external appearances and abilities, it isn't really "thinking" in terms of what's really happening here at an ontological level?
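(To make the kind of simulation I mean concrete, here is a minimal sketch -- connectivity, weights, and threshold all invented for illustration -- in which every rule is strictly local: each simulated neuron's next state depends only on the states of the neurons directly connected to it, with nothing above the single-neuron level programmed in anywhere.)

```python
# Purely illustrative sketch of "local rules only": a neuron's next state
# depends solely on its directly connected inputs. The random connectivity
# here is a stand-in for what an actual brain scan would supply.
import random

N = 1000
# synapses[i] = list of (source_neuron, weight) pairs "read off" the scan
synapses = [[(random.randrange(N), random.uniform(-1.0, 1.0)) for _ in range(10)]
            for _ in range(N)]
state = [random.random() < 0.1 for _ in range(N)]  # initial firing pattern

def step(state):
    """One tick: neuron i fires iff its summed local input crosses a threshold."""
    return [sum(w for src, w in synapses[i] if state[src]) > 0.5
            for i in range(N)]

for _ in range(100):     # higher-level behavior, if any, would have to "emerge"
    state = step(state)  # from nothing but iterations of this local update
```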
Well for starters, "mapping out" is controversial in the first place, since you run into intentionality issues. Also, expecting emergence =/= understanding/explaining emergence.
JesseM: "the simulated brain would behave just like the original living brain--it would pass long-term Turing tests, people who knew the person the original brain came from could converse with it at length and not detect any difference in its thinking or personality, etc." - That seems to invite the question, *does* the original living brain in fact do all that?
JesseM: That seems an odd kind of question. For what it comes to is asking about an empirical test for the truth of an a priori argument.
That seems to invite the question, *does* the original living brain in fact do all that?
Yes, I probably should have said "behaves just like the original person the brain came from", but the premise of the question was that this experiment does succeed in producing an entity that's behaviorally identical, and in that case wouldn't we have pretty persuasive evidence that the brain *does* do that, at least from an empirical/predictive point of view if not from the perspective of talking about the ultimate nature of causality on a metaphysical level? My question is just whether the A-T philosophy as Dr. Feser understands it would say this thought-experiment definitely wouldn't work, or whether the A-T view is purely a matter of metaphysics and doesn't make any empirical predictions about experiments like this.
That seems an odd kind of question. For what it comes to is asking about an empirical test for the truth of an a priori argument.
If you think A-T metaphysics is "a priori" in the sense that we can't even imagine a possible world where it is false, then since we can certainly imagine a world meeting the empirical description I gave (where an upload would be behaviorally just like a person), I suppose that means you would answer my question by saying that the A-T philosophy doesn't rule out the possibility that the uploading experiment would work. My question was designed to try to understand if A-T is purely a priori in that sense, or if it does in fact make empirical predictions about this experiment which could potentially be falsified. (Did either Aristotle or Aquinas even make a clear distinction between a priori and a posteriori philosophical claims, or was Kant the first to do so? It certainly seems as though Aristotle based a fair amount of his philosophy on observations of the living world...)
"wouldn't we have pretty persuasive evidence that the brain *does* do that, at least from an empirical/predictive point of view if not from the perspective of talking about the ultimate nature of causality on a metaphysical level?"
But it seems indisputable, from an empirical pov, that the original living brain does *not* do that - even if the original living brain was removed from the person and kept alive somehow, how is anybody supposed to "converse with it at length" so as to somehow observe the identity or non-identity of its 'personality' (or would it be 'intellectuality' - or 'cerebrality')?
"would the argument just be that despite all external appearances and abilities, it isn't really "thinking" in terms of what's really happening here at an ontological level?"
To me the problem seems to be that your "despite" is not really justified - it is not at all clear that the "external appearances and abilities" of a computer program with which one can have a (simulated?) 'conversation' should naturally be interpreted as the external appearances and abilities of 'thinking.' Why would you make this assumption?
But it seems indisputable, from an empirical pov, that the original living brain does *not* do that - even if the original living brain was removed from the person and kept alive somehow, how is anybody supposed to "converse with it at length" so as to somehow observe the identity or non-identity of its 'personality' (or would it be 'intellectuality' - or 'cerebrality')?
I didn't say the brain had to be in complete isolation, obviously it needs both sensory inputs and some sort of body in order to communicate with others, and the same would go for the simulated brain of the upload (through either a simulated or robotic body). But if all the information-processing of responding to the sensory input and producing meaningful behavioral outputs (in the form of outgoing signals to muscles, say those responsible for speech) is happening within the circuits responsible for the brain simulation, it seems reasonable to use the shorthand that the simulated brain is the one "doing" these meaningful things, and this would be strong evidence that physical brains are "doing" all the work of producing meaningful behavioral responses to sensory information too.
it is not at all clear that the "external appearances and abilities" of a computer program with which one can have a (simulated?) 'conversation' should naturally be interpreted as the external appearances and abilities of 'thinking.'
I just used that language of "despite all appearances" because I think that's the obvious commonsense interpretation that people would default to, in the absence of religious or philosophical arguments that would convince them to reject these initial intuitions--in a world where this was commonplace and one had AI coworkers and so forth, all of whom acted exactly like regular people in every conceivable sense (the external appearance of reasoning, creativity, emotion, humor, spirituality, etc.), one would naively tend to assume that they had the same types of "thoughts" that we have, no?
Anyway, my basic question doesn't turn on the use of the word "despite"; I just want to know whether the A-T philosophy predicts the uploading experiment could never produce something behaviorally indistinguishable from a human, or whether it allows that this is possible but says that it wouldn't actually be "thinking". (And given the premise that it works, even the latter seems questionable: I would think an A-T advocate who says uploading might work could be dubious that an upload would have true consciousness and thought but would allow that it's possible, since God might have intended from the beginning that uploads be a type of thinking substantial form that comes into being through the proximate cause of human ingenuity, just like humans came into being through the proximate cause of evolution. After all, we can't entirely rule out the possibility that the original life that appeared on our planet had the proximate cause of being designed by a race of mortal aliens!)
"it seems reasonable to use the shorthand that the simulated brain is the one "doing" these meaningful things, and this would be strong evidence that physical brains are "doing" all the work of producing meaningful behavioral responses to sensory information too."
I suppose - assuming that these things it does *are* meaningful. But you might wait for Feser's part III before rushing to make that assumption. We'll then have to ask what makes them meaningful. (What is the meaning of 'meaningful'?) And in regard to the physical thing the scientists have created, we still have to ask, *what is it?* You can't just insist that we know that it is nothing more than physical stuff, because hey, the scientists (and I think it's metaphysically irrelevant whether they're 'aliens' or not) made it *out of* physical stuff, they made it *using* physical stuff. That simply doesn't follow.
The problem is with matter, not brains, simulations or computers.
And again, there are some preliminary issues: Information processing and meaningful behavior are not uncontroversial. Searle, Putnam and Popper all come to mind.
IIRC, under AT, sensing and imagining are physical processes. Of course, AT has a different conception of physical matter.
"humans came into being through the proximate cause of evolution"
I think it's important to note that this statement is not correct. Evolution is not a cause: not a formal cause, not a material cause, not an efficient cause, and not a final cause. Evolution is the gradual mutation of one species into another. And there is no naturalistic process by which intelligence can emerge from non-intelligence. I believe this is an A-T empirical claim.
(i.e., an empirical claim that follows from A-T metaphysics)
I suppose - assuming that these things it does *are* meaningful.
As I said, my thought-experiment is just about empirical behavior, not about the "true" metaphysical description of what's really going on. So in the context of this thought-experiment, whenever I use a word that could have both sorts of interpretations, like "meaningful" that could mean either "appears meaningful to humans observing it" or "really is meaningful on an intrinsic metaphysical level", please assume I am using the word in a purely empirical way that is not concerned with what anything "really is" metaphysically. Can I take it that for you at least, the answer to my question is that the construction of an upload behaviorally indistinguishable from a human, with its program designed on a purely reductionist understanding of brain function (the program not being given any rules for high-level brain function, only the layout of neurons and general rules for how any neuron interacts with its immediate neighbors), wouldn't in principle conflict with A-T beliefs?
"--in a world where this was commonplace and one had AI coworkers and so forth, all of whom acted exactly like regular people in every conceivable sense (the external appearance of reasoning, creativity, emotion, humor, spirituality, etc.), one would naively tend to assume that they had the same types of "thoughts" that we have, no?"
I'm not sure what to say about this. A lot of people naively tend to assume that animals have the same types of "thoughts" that we have. Doesn't prove much. Personally I tend to doubt whether even other people have the same types of "thoughts" that I have. (Of course, it's not at all clear what I mean when I say that.)
I think it's important to note that this statement is not correct. Evolution is not a cause: not a formal cause, not a material cause, not an efficient cause, and not a final cause.
Why is it not a type of efficient cause? Efficient causes can include complex processes made up of many sub-causes, no? Can't we say an "avalanche" is an efficient cause of a house being crushed even though it involves many different causal interactions between different rocks tumbling down and hitting one another?
And there is no naturalistic process by which intelligence can emerge from non-intelligence.
Well, related to my original question, are you saying there's no naturalistic process by which the behavioral correlates of intelligence (the types of things that the upload would possess) can emerge from non-intelligence, or would you agree that this is possible but just say that we need something more to explain the emergence of intelligence at the level of metaphysical essences (and of subjective qualia, perhaps)?
""meaningful" ... could mean either "appears meaningful to humans observing it" or "really is meaningful on an intrinsic metaphysical level", please assume I am using the word in a purely empirical way that is not concerned with what anything "really is" metaphysically"
But "appears meaningful to humans" is simply too vague a criterion to be helpful. Humans disagree about stuff like this. And that disagreement can't be neatly sorted out into empirical disagreement and metaphysical disagreement. That is a false distinction.
A lot of people naively tend to assume that animals have the same types of "thoughts" that we have. Doesn't prove much.
But I didn't say it proved anything, I just casually used the word "despite" in a way that, as I said, plays no important role in any question or argument I'm making.
If the simulated brain only has derived intentionality, then it's not an issue. If, like human brains, it has intrinsic intentionality, then it's a problem for metaphysical views which claim that there is nothing more to matter other than its quantitative features.
"Why is [evolution] not a type of efficient cause?"
Because it is not a real thing and is not the efficient cause of anything. You could just as well say that *evolution*, or *the universe*, or *natural processes*, or *stuff happening*, is the efficient cause of *everything*. (An avalanche, in contrast, is a real thing and thus can be the efficient cause of stuff like crushing a house.)
But "appears meaningful to humans" is simply to vague a criterion to be helpful. Humans disagree about stuff like this.
If you prefer, for the phrase "meaningful behavioral responses" substitute the phrase "behavioral responses indistinguishable from the responses of a normal human". Since I am using "meaningful" purely in an observational/empirical way I don't really see a difference between these two (since presumably a human's conscious responses to what they see and hear are "meaningful" ones), but if you would distinguish between them for some reason, the latter works just fine for interpreting my point about it being a reasonable shorthand to talk about a brain "doing" things (which, as with my use of the word "despite", is a basically unimportant sidetrack from my main question--leaving aside quibbles about various words and phrases I've used in a casual way, can you give an answer to the basic question about whether A-T philosophy is incompatible with the idea that an upload of the kind I describe would be behaviorally indistinguishable from a human?)
"are you saying there's no naturalistic process by which the behavioral correlates of intelligence (the types of things that the upload would possess) can emerge from non-intelligence..."
Well... again, the question is, what are these "behavioral correlates of intelligence"? - and thus first: what is intelligence? Why are you assuming that a 'naive' take on this issue is adequate for addressing the question? I would say that the important thing to note is that there is no conceivable naturalistic process by which the (correct) answer to this question could emerge. I think your question might boil down to this: Is it possible for a naïve scientist to fail to understand what 'thought' is? And the answer is obviously: yes, that's very possible.
"...or would you agree that this is possible but just say that we need something more to explain the emergence of intelligence at the level of metaphysical essences (and of subjective qualia, perhaps)?"
So your second alternative begs the question, since it presumes that we have already naively/empirically (i.e., NON-metaphysically) correctly determined what intelligence is. But that is simply not possible.
Because it is not a real thing and is not the efficient cause of anything. You could just as well say that *evolution*, or *the universe*, or *natural processes*, or *stuff happening*, is the efficient cause of *everything*. (An avalanche, in contrast, is a real thing and thus can be the efficient cause of stuff like crushing a house.)
Evolution is a very specific type of natural process, just like an avalanche--one in which novel adaptations arise due to a long process of random variants of genes having different survival rates in the populations. Perhaps a better analogy than an avalanche is that if you went panning for gold and ended up with a substantial pile of gold bits, an efficient cause of that pile would be the physical process of repeatedly scooping up "random" clumps of mud and then letting lighter grains of dirt rise to the top of the water in the pan and get dumped out, eventually selecting out all but the heavier bits of gold that remain at the bottom. If we use the phrase "gold panning" to refer to the physical process (leaving out the element of human intentionality, the final cause of desiring gold), then can't we say the "gold panning" process is the efficient cause of the large pile of gold that eventually results? If so, why can't the "evolution" process be the efficient cause of eventually getting organisms with large numbers of useful genes that aid their ability to survive and reproduce, since the process is a similar one of many possibilities arising and being continually subjected to a filtration process?
"can you give an answer to the basic question about whether A-T philosophy is incompatible with the idea than an upload of the kind I describe would be behaviorally indistinguishable from a [normal] human?"
It's certainly inconceivable to me how such an upload could be behaviorally indistinguishable from a normal human. Could it be indistinguishable in terms of some arbitrarily defined *subset* of 'normal human behaviors'? Sure, maybe. But I don't know how interesting that conclusion is.
"This is the view that a thought is a sentence-like symbolic structure in the brain, and that thinking -- the process of going from one thought to another -- involves the transition from one such symbolic structure to another in accordance with the rules of an algorithm."
I'm not sure that algorithms can manipulate or recognize symbols. There's no evidence that they can manipulate anything more meaningful than strings of characters drawn from defined alphabets, where any meaning or 'symbolism' must be assigned to those characters from outside the system.
Well... again, the question is, what are these "behavioral correlates of intelligence"?
If we are talking specifically about human intelligence, and talking in a purely empirical sense as I have said I'm doing, then we can define it as a set of behaviors that would pass a Turing test of arbitrary length, i.e. something behaviorally indistinguishable from a normal human, as tested by a human like yourself. I make no comment on whether this is a good test of true "intelligence" in some objective metaphysical sense. It seems like you keep objecting to the words I use because you associate them with ultimate metaphysical essences even though I specified very clearly that for the purposes of my question, I am using these words to talk only about what we might observe empirically, and I specifically use phrases like "behavioral correlates of intelligence" rather than "intelligence" to make that more clear. Would it help if I used even more complicated phrases that didn't contain any words you might associate with metaphysical traits, like "behaviors indistinguishable from those of a human, as judged by humans like yourself engaging in long-term interactions with them"? Do you think that could be displayed by an upload constructed in a reductionist manner, or that it could arise over billions of years through natural processes? (leaving open the possibility that the rules of the natural world were themselves designed by an intelligent being)
"Evolution is a very specific type of natural process" - Is it? Or is this just a stipulation as to how you are intending to use the word?
"just like an avalanche" - certainly not. Very unlike an avalanche. An avalanche is a real thing, a discrete event. Evolution is no such thing. It has no subsistence. It is an abstraction.
You might as well say that economics is the efficient cause of poverty. Or that art is the efficient cause of a statue. If you believed that evolution was a tool of divine providence, you could say that God causes various things to exists *by means of* evolutionary processes, just as a sculptor makes a statue *by means of* his know-how and his tools. But such *by means of*-things are not, as such, efficient causes. Evolution is a thing insofar as the term is also used to refer to the concrete facts of evolutionary history, but these facts cannot be their own cause.
It's certainly inconceivable to me how such an upload could be behaviorally indistinguishable from a normal human. Could it be indistinguishable in terms of some arbitrarily defined *subset* of 'normal human behaviors'?
Why not all behaviors, as opposed to a subset? Obviously some behaviors involve physical interactions rather than interacting by computer screen, but suppose for the sake of argument we imagine that the upload was created by removing the brain of a normal human, mapping it, simulating it, and then connecting the simulated brain to the spinal cord of the original body the brain came from, so it still has a normal human body that is now being controlled by a simulated brain on a computer chip rather than a biological brain. Is it inconceivable to you that no matter how long you interacted with it or what type of relationship you had with it, you would never see anything to arouse your suspicions that it wasn't a regular human being? (Keep in mind that I'm not asking just about whether it's "inconceivable" to your personal intuitions, but rather whether A-T philosophy would give any definite answer to whether this would be possible or not).
"Would it help if I used even more complicated phrases that didn't contain any words you might associate with metaphysical traits"
Jesse, I've already pointed this out but I guess I need to repeat myself: your distinction between the 'metaphysical' and the 'empirical' is a false distinction. I have no idea what you think you mean by the notion of a 'metaphysical trait', such that you think you can talk about things 'empirically,' while completely bracketing any understanding of their 'metaphysical traits.' Like talking about 'behavioral correlates of intelligence' while disregarding what 'intelligence' is, this seems to me an absurdity of the first order.
An avalanche is a real thing, a discrete event. Evolution is no such thing. It has no subsistence. It is an abstraction.
An avalanche certainly wouldn't be a substantial form in the A-T philosophy, so what do you mean when you say it has "subsistence"? It's a name for a process made up of many distinct events and sub-processes of individual rocks hitting each other and falling under gravity. "Evolution" in the Darwinian sense I am using it refers to a specific process of random mutation and differential survival in the population; "economics" does not refer to any such specific process, it's just a field of study. And what about the closer analogy I offered of gold panning? In both cases you have a process where you generate a bunch of things of a particular class (bits of stuff making up river mud/gene variants) and subject them to a filtering process which discards things of type #1 (dirt bits/genes that are harmful to survival and reproduction) and keeps things of type #2 (gold bits/genes which are helpful to survival and reproduction), and the whole process is iterated over and over again until you have a large assemblage of things of type #2.
Also, what about an evolutionary algorithm on a computer in which large numbers of random variants of simulated genes are created and only those simulated creatures whose genes allow them to perform above a certain threshold at a task are allowed to reproduce in the next generation--would you say the evolutionary algorithm can be an essential cause of the final output of the simulation, like the simulated creatures evolved for efficient movement in this example?
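(For concreteness, here is the sort of evolutionary algorithm I have in mind -- a minimal sketch in which the "task," the genome encoding, and every parameter are invented purely for illustration.)

```python
# Minimal evolutionary algorithm: random gene variants arise by mutation, and
# only those performing above a threshold at the task reproduce. The "task"
# here (maximize the number of 1-bits in the genome) is an arbitrary stand-in.
import random

def fitness(genome):                    # stand-in for performance at a task
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # selection: only those at or above the median performance survive
    threshold = sorted(fitness(g) for g in population)[len(population) // 2]
    survivors = [g for g in population if fitness(g) >= threshold]
    # reproduction, with a small chance of mutation at each gene
    population = [[bit ^ (random.random() < 0.01) for bit in random.choice(survivors)]
                  for _ in range(50)]

print(max(fitness(g) for g in population))  # typically at or near the maximum, 20
```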
"it still has a normal human body that is now being controlled by a simulated brain on a computer chip rather than a biological brain"
So the question is simply, just as a functioning artificial heart may be possible, is a functioning artificial brain (in principle) possible? Sure, I don't see why not. But on A-T principles it is still the person who is thinking, not the brain (whether or not it is artificial).
"An avalanche certainly wouldn't be a substantial form in the A-T philosophy" - of course not; that clearly doesn't mean it doesn't subsist. (With due respect, this is pretty basic stuff.)
The gold panning is like evolution: it is a 'by means of' (like the sculptor's skill or his tools), not an efficient cause.
An evolutionary algorithm on a computer is very different from (atheistic) evolution. If you are thinking of God in your picture, then my prior comments apply.
Jesse, I've already pointed this out but I guess I need to repeat myself: your distinction between the 'metaphysical' and the 'empirical' is a false distinction.
Really? Take the trait "being conscious"--is it possible to determine with certainty whether anyone you're talking about is really conscious, having internal qualia and so forth, as opposed to a philosophical zombie? (which is what an A-T advocate might take a successful upload simulation to be) In A-T philosophy I think the question of whether something is a true "substantial form" would be similar in that it would have an objective answer but we humans could never be sure of what that answer is, see the discussion I got into about tables and planets on the comments of this post. On the other hand, take the trait "looking like a human being to you"--aren't you in a position to make definitive judgments as to whether something looks like a human to you, since by definition this trait doesn't refer to any true properties of the thing-in-itself, but only your perceptions of it?
" take the trait "looking like a human being to you"--aren't you in a position to make definitive judgments as to whether something looks like a human to you, since by definition this trait doesn't refer to any true properties of the thing-in-itself, but only your perceptions of it?"
No, Jesse; really no. The problem is that I *don't* know what "looking like a human being (to me)" even means apart from a metaphysical judgment about what a human being is and how human beings look - the tag-on "to me" is entirely idle, given that I'm not a solipsist (and that there are excellent reasons not to be a solipsist).
that clearly doesn't mean it doesn't subsist. (With due respect, this is pretty basic stuff.)
OK, I have not claimed to have much knowledge of A-T philosophy, that's why I ask these questions. What is it that decides whether a type of process (as opposed to a physical object) is "subsistent" or not?
The gold panning is like evolution: it is a 'by means of' (like the sculptor's skill or his tools), not an efficient cause.
I'd rather not bring in analogies that require active intervention by humans, since as I said I just want to discuss the physical process (efficient causes), leaving aside intentionality/final causes. Are you saying that even if we were talking about gold panning done by a machine, for some reason it wouldn't make sense to use "gold panning" to refer to the specific physical process of the machine repeatedly scooping out clumps of mud and sifting out all but the heavy bits and dumping them in a pile, and then say that "gold panning" was the efficient cause of the final pile of gold? Would it make a difference if some natural process not constructed by humans was sifting heavy bits of mud from light ones and depositing only the heavy ones in a specific location?
An evolutionary algorithm on a computer is very different from (atheistic) evolution.
I didn't say anything about atheism, and Darwinian evolution doesn't presuppose anything about whether the universe in which Darwinian evolution can occur was designed (there are plenty of theists who accept Darwinian evolution). Since intentionality belongs to the realm of final causes, I would have thought that judging whether or not some physical process is an "efficient cause" shouldn't depend on whether it (or the universe containing it) was designed. What's more, if you still maintain your position that "evolution" is a mere abstraction like "natural processes" or "economics", then it seems like you should continue to maintain that regardless of whether life or the universe was designed to evolve by an intelligent being--are you stepping back from the "it's too abstract to be an efficient cause" claim now that I've stated the specific Darwinian process I'm referring to?
The problem is that I *don't* know what "looking like a human being (to me)" even means apart from a metaphysical judgment about what a human being is and how human beings look
Well, I didn't say empirical judgments can't involve metaphysical beliefs of the person making the judgments. But presumably you accept that your judgments are not infallible, so something could have the trait of "being judged by you to be a human" when it actually isn't a human, no? And my question about the upload is whether it could pass the test of being judged to be human by a human like yourself--perhaps I should also specify that you weren't given information that would tell you directly that it didn't have a biological brain.
So the question is simply, just as a functioning artificial heart may be possible, is a functioning artificial brain (in principle) possible? Sure, I don't see why not. But on A-T principles it is still the person who is thinking, not the brain (whether or not it is artificial)
Would it make a difference if the body was a realistic android body instead of a biological one? Could such an entity still be a "person" capable of real thinking?
"What is it that decides whether a type of process (as opposed a physical object) is "subsistent" or not?"
An avalanche is not a kind of process, but a kind of discrete event. (It's like a genetic mutation, not like evolution.) It is subsistent because a subsistent physical mass destabilizes and breaks apart, and part of it slams into another subsistent physical object, a house. (I hate to appeal to common sense, but that's what I'm going with here.)
If gold panning (or mud panning) were done by a machine, the machine would be the efficient cause of whatever the machine produced.
"I didn't say anything about atheism" - My point was that you also didn't say anything about the programmer of the evolutionary algorithm. When you have such a figure in the background, he is the primary cause - the program is a tool. Of course, maybe he doesn't know what his program will produce, so the program does have some 'independence' - but to this extent it is a merely accidental cause of whatever it happens to produce - i.e., its link to the specific characteristics of its effect is merely incidental. (Essential or natural causes, by definition, operate always or for the most part in the same way - e.g., your parents are essentially the efficient cause of you qua human, but only accidentally qua male.) But when it comes to human beings, no evolutionary algorithm could be the efficient cause, even accidentally, of a human being (specifically of living *reason*, of a living intellect). This possibility is excluded in principle by the principle of causality in A-T metaphysics.
[The problem with Darwinian evolution is that it is based on 'natural selection' - but 'natural selection' amounts to the empty assertion that whatever happens to survive has *ipso facto* been selected to survive (by nature). (This can also be rephrased in terms of 'adaptive traits.') That is too abstract a notion to count as referring to a genuine efficient cause of anything in particular.]
"And my question about the upload is whether it could pass the test of being judged to be human by a human like yourself--perhaps I should also specify that you weren't given information that would tell you directly that it didn't have a biological brain."
So, is it possible, in principle, for me (or, say, Feser) to mistake a robot for a human being? Yes.
"Would it make a difference if the body was a realistic android body instead of a biological one? [Not necessarily.] Could such an entity still be a "person" capable of real thinking?"
Yes... but could such a person be purely mechanically constructed, by human beings? I think A-T principles would exclude that - somehow(?). It certainly couldn't be purely mechanically *constituted*. I suppose A and T never really asked the question whether human art could ever produce a realistic facsimile of a living natural substance (or even of a burger).
Now that I think of it, what exactly distinguishes a 'realistic android body' from a 'biological' one?
@JesseM:
"Would it be a sort of falsification of A-T ideas if an upload of this type was successfully created and displayed the same wide-ranging thinking abilities as a normal human, or would the argument just be that despite all external appearances and abilities, it isn't really 'thinking' in terms of what's really happening here at an ontological level?"
Why are those the only two possibilities? In A-T terms, assuming that the experiment worked (and that we had some way to tell it worked), it would just mean that the resulting entity had a rational soul/substantial form. How would that constitute a "falsification of A-T ideas"?
How would that constitute a "falsification of A-T ideas"?
I was led to believe that the rational soul is immaterial under the A-T framework, but if the upload works you would have a material object as the rational soul.
@JesseM:
"I was led to believe that the rational soul is immaterial under the A-T framework, but if the upload works you would have a material object as the rational soul."
Why?
I should rephrase that last part. You would have a computational program with electrical programming states as the rational soul.
Sorry, that should have been "@step2."
@step2:
"I should rephrase that last part. You would have a computational program with electrical programming states as the rational soul."
I don't need to rephrase; the question is still "Why?"
New human beings are conceived all the time, and every one of them has its own soul/substantial form. If we found another process that resulted in the generation of such beings (which I think is unlikely as all hell, but let's suppose), why would that constitute a problem for A-T metaphysics?
@Scott:
Why are those the only two possibilities? In A-T terms, assuming that the experiment worked (and that we had some way to tell it worked), it would just mean that the resulting entity had a rational soul/substantial form. How would that constitute a "falsification of A-T ideas"?
I actually did raise that possibility in the comment at 12:01 PM (at least the way times display for me--it's the one where my first words are "I just used the language of 'despite all appearances'...") The reason I originally suggested this might falsify the position Dr. Feser was arguing for is because an upload designed in the way I stipulated clearly functions in a "reductionist" manner, in the sense that anyone could follow the program's steps and verify that the program was doing nothing more than calculating local interactions of individual simulated neurons (or smaller basic elements of the simulation) according to some general rules of neural behavior. So it seems to me all its outward behavior can be predicted based on a " a corporeal process having no properties over and above those described by physics, chemistry, neuroscience, and the like", as Dr. Feser described the position he was opposing (along with his dismissal of the idea "that thinking -- the process of going from one thought to another -- involves the transition from one such symbolic structure to another in accordance with the rules of an algorithm"). But I'm not sure, perhaps Dr. Feser would distinguish between being able to predict behavior in a reductionist/algorithmic way vs. the true "cause" of the behavior in a metaphysical sense, with the whole upload potentially being a substantial form and thus being the true cause, even if all its behavior can be predicted perfectly with nothing but knowledge of the arrangement of the parts, and general laws governing the individual part. It isn't clear to me from his statements that he means something like this, though, which is why I asked about this specific thought-experiment.
@JesseM:
"The reason I originally suggested this might falsify the position Dr. Feser was arguing for is because an upload designed in the way I stipulated clearly functions in a 'reductionist' manner . . . "
And the reason for my question is that that isn't clear at all. I strongly doubt that the experiment would or could succeed, but supposing that it did and that we knew the result of the experiment did have a rational soul, why would we assume that it "got there" just from "the program's steps" and its "calculating local interactions of individual simulated neurons"? You might as well say that because we built a radio receiver out of scrap from a junkyard, the radio program we receive on it is somehow generated by the receiver.
In A-T terms, the principle of causality is nearly as fundamental as the law of non-contradiction. If it somehow turned out that consciousness could "emerge" from certain arrangements of (what modern physics calls) matter, surely the sensible A-T response would simply be to acknowledge that matter contained causal potentials we hadn't known about, and that the ability to generate souls was "built in" to (what modern physics calls) matter in a hitherto unexpected manner. (Alternatively, the response could be that God stepped in to endow each AI with a rational soul.)
Either way, though, the A-T proponent wouldn't have to do anything but acknowledge that in some instances, some things that look superficially like artifacts might be substances after all.
@Scott:
I strongly doubt that the experiment would or could succeed, but supposing that it did and that we knew the result of the experiment did have a rational soul, why would we assume that it "got there" just from "the program's steps" and its "calculating local interactions of individual simulated neurons"?
Again, I am not talking about the presence/absence of a soul or any other such objective metaphysical truths, only about observable behavior (note that in the post you are responding to I said 'it seems to me all its outward behavior can be predicted based on "a corporeal process having no properties over and above those described by physics, chemistry, neuroscience, and the like"'). It may well be that Dr. Feser would have no philosophical objection to the idea that all the outward correlates of thinking could be present in an upload that is designed this way, but it isn't obvious, and certainly if he wouldn't have an objection I think this would be a point worth clarifying when debating anyone who wasn't very well-versed in A-T thinking (probably including Rosenberg), or else there is a strong chance the two people will be arguing at cross purposes, using the same terms but really talking about different things.
[Reposting to fix a minor HTML error.]
@JesseM:
"Again, I am not talking about the presence/absence of a soul or any other such objective metaphysical truths, only about observable behavior . . . "
In that case (to return to my original question) it's not clear why you regard your thought experiment as something that might constitute a "falsification of A-T ideas." I know of nothing in A-T thought that says that the behavior of an artifact can't look to an observer like the behavior of a substance.
"...I said 'it seems to me all its outward behavior can be predicted based on a "a corporeal process having no properties over and above those described by physics, chemistry, neuroscience, and the like")."
What would prediction accomplish? This is about explanation. For example, explaining intentionality. From my understanding, neither Rosenberg nor Feser thinks that intentionality (in humans, or simulations or computers or in animals, etc.) can be explained via quantitative physical sciences. Rosenberg opts for eliminativism.
@David M:
An avalanche is not a kind of process, but a kind of discrete event. (It's like a genetic mutation, not like evolution.) It is subsistent because a subsistent physical mass destabilizes and breaks apart, and part of it slams into another subsistent physical object, a house. (I hate to appeal to common sense, but that's what I'm going with here.)
I wasn't thinking of the type of avalanche where a single mass breaks apart, but rather of the kind where a few rocks near the top start rolling and knock loose some other rocks, which start rolling too; they knock loose further rocks down the slope, and so forth, creating a cascade. So it seems reasonable to call this a "process" involving the sum of a bunch of individual events where rocks hit other rocks and get them moving. Anyway, is the distinction you are making between a "process" and a "discrete event" part of A-T philosophy, or your own terminology? At a microscopic level, a "single mass" like a boulder or glacier breaking apart can be seen as a bunch of individual collisions between atoms, and my understanding from this thread was that A-T philosophy would treat inanimate natural objects as just "heaps" of smaller bits of matter.
If gold panning (or mud panning) were done by a machine, the machine would be the efficient cause of whatever the machine produced.
There isn't just one correct way to describe "the" efficient cause of any event, is there? Can't a process be an efficient cause along with the material entities involved in that process? For example, if water in an ice cube tray freezes, isn't it valid to say both that the freezer is an efficient cause and that the process of heat transfer from the warmer water to the colder air of the freezer is an efficient cause?
My point was that you also didn't say anything about the programmer of the evolutionary algorithm. When you have such a figure in the background, he is the primary cause - the program is a tool.
That's why I specifically used the phrase "proximate cause", to avoid objections about some cause not being primary enough. Again, I would think there is not just one answer to what is "the" efficient cause of something, can't you have a series where A causes B causes C, so B is an efficient cause of C even if it isn't primary?
The problem with Darwinian evolution is that it is based on 'natural selection' - but 'natural selection' amounts to the empty assertion that whatever happens to survive has *ipso facto* been selected to survive (by nature).
It's a statistical explanation involving probabilities (gold panning would also involve statistics, since while it's more likely the dirt particles will get dumped and the gold particles won't, it isn't 100% perfect). Probabilities are not usually understood to be the same as actual observed frequencies on a limited number of trials; they are more like some ideal of what the frequencies would be in the limit of a very large number of trials. That's why I can say that the probability of a fair coin landing heads is 1/2, even if I only flip it 4 times and get heads 3 times. If fitness were purely about actual frequencies, the concept of genetic drift, where statistical fluctuations in small populations may cause actual gene frequencies to differ significantly from what their relative fitness would predict, wouldn't make sense.
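As a toy illustration of that distinction (hypothetical Python with arbitrary numbers, just to make the point vivid):

```python
# Probability vs. observed frequency: a fair coin has p(heads) = 0.5 even
# when a short run of flips shows a different frequency, and two alleles
# with *identical* fitness still drift to fixation in a small population.
import random

# A few flips of a fair coin often give a frequency far from 0.5.
flips = [random.random() < 0.5 for _ in range(4)]
print(sum(flips), "heads in 4 flips of a fair coin")

# Genetic drift: each generation is a random resample of 20 individuals,
# so the allele frequency wanders until one allele fixes purely by chance.
pop = ["A"] * 10 + ["B"] * 10
generations = 0
while len(set(pop)) > 1:
    pop = [random.choice(pop) for _ in range(len(pop))]
    generations += 1
print(pop[0], "fixed after", generations, "generations despite equal fitness")
```

On most runs one allele fixes within a few dozen generations even though nothing in the resampling favors either allele--that divergence of observed frequency from underlying probability is exactly what genetic drift names.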
why you regard your thought experiment as something that might constitute a "falsification of A-T ideas."
ReplyDelete"Might" not in the sense I have any definite opinion that it would falsify them, but only in the sense that I find Dr. Feser's wording unclear on this point (like the objection to the idea "that thinking -- the process of going from one thought to another -- involves the transition from one such symbolic structure to another in accordance with the rules of an algorithm"), and I am far from well-versed in A-T philosophy myself (and anyway this issue of machines with intelligent-seeming behavior is probably not one Aristotle or Aquinas addressed directly), so I'm looking for clarification.
JesseM, I don't think that the objection to the idea "that thinking -- the process of going from one thought to another -- involves the transition from one such symbolic structure to another in accordance with the rules of an algorithm" is necessarily tied to A-T. It's a more general criticism of computationalism. See the links:
http://edwardfeser.blogspot.com/2012/02/popper-contra-computationalism.html
http://instruct.westvalley.edu/lafave/is_brain_a_computer.html
The basic point is that symbols and syntax don't exist objectively, apart from minds. Thus, using symbols and syntax to explain human thought is circular.
From my understanding, neither Rosenberg nor Feser thinks that intentionality (in humans, or simulations or computers or in animals, etc.) can be explained via quantitative physical sciences.
I would say Rosenberg wrongly accepts the disjunction problem as a real problem in evolution. From a naturalist perspective, evolution's "selecting against" (but only partially) simple creatures that have extreme indeterminacy is a feature, not a bug-like vagueness. It isn't as if an amoeba has functional memory, although it does move towards food. The disjunction problem illuminates many crooked paths in recalling and communicating fuzzy-trace memory, but it makes a preposterous assumption: that meaning only comes from perfect knowledge.
I think Rosenberg bases his position on physics rather than evolution. After all, shouldn't evolutionary processes be fully reducible and explainable via physics?
@JesseM:
ReplyDelete"[O]nly in the sense that I find Dr. Feser's wording unclear on this point."
In what way? He disagrees that "there is nothing more to matter than what natural science attributes to it," and also disagrees that thinking "must be entirely embodied in some sort of corporeal process." What's unclear in his wording?
Given those disagreements, he has two possible responses to your proposed scenario:
(1) The simulation isn't genuinely thinking but merely appearing to observers to behave as though it's thinking; any intentionality it has is derived.
(2) The simulation is genuinely thinking, but its thought isn't reducible to what natural science attributes to matter.
My own response would be (1) and I expect it would be Ed's as well. But so far as I know, there's nothing in A-T philosophy that absolutely rules out (2) in principle although we might not in practice ever be able to know it was true. And in neither case is there any "sort of falsification of A-T ideas." What's unclear here?
@JesseM:
ReplyDelete"It may well be that Dr. Feser would have no philosophical objection to the idea that all the outward correlates of thinking could be present in an upload that is designed this way, but it isn't obvious, and certainly if he wouldn't have objection I think this would be a point worth clarifying when debating anyone who wasn't very well-versed in A-T thinking (probably including Rosenberg), or else there is a strong chance the two people will be arguing at cross purposes, using the same terms but really talking about different things."
I don't think Ed has been unclear about this, and I still don't understand how an answer to your question would help to clarify anything anyway. In fact all the unclarity I'm finding seems to be in the question itself.
From earlier in the thread:
"God might have intended from the beginning that uploads be a type of thinking substantial form . . "
First of all, be careful here: if the upload were a conscious being, it wouldn't *be* a substantial form; it would *have* a substantial form, and it would *be* a substance.
But yes, what you're getting at here is the point: so far as I know, nothing in A-T philosophy absolutely rules out the possibility that such an upload might turn out to be a substance (with a substantial form) rather than an artifact (with only accidental forms).
But every time I've said as much, you've said it wasn't what you meant. If what you want to know is what A-T philosophy says about the possibility of such an upload, there just isn't any way to answer your question without talking about substantial forms and so forth. So it doesn't make a lick of difference that you're "not talking about the presence/absence of a soul or any other such objective metaphysical truths, only about observable behavior"[*]; we have to talk about such things in order to reply to you.
So the answer, which you've received several times now, is that, yes, it's possible for us to mistake a substance for an artifact, and yes, it's also possible for an artifact to behave outwardly like a substance. And again, I still don't see the relevance of such an answer to anything Ed is discussing in his post.
----
[*] Especially since you bloody well are talking about it. You're the one who brought up the possibility that an upload might have a "thinking substantial form," which is exactly what A-T means by a rational soul.
Scott:
Totally off topic, but did you know that Diana Hsieh claimed you don't understand Objectivism?
http://aynrandcontrahumannature.blogspot.com/2011/05/can-objectivism-be-criticised.html
@Scott:
Especially since you bloody well are talking about it. You're the one who brought up the possibility that an upload might have a "thinking substantial form," which is exactly what A-T means by a rational soul.
You are failing to distinguish between my original question and a subsequent side point that came up during the lengthy discussion of it. My question was only about whether A-T would be compatible with the idea that all the outward behaviors we associate with thinking might be the product of a computational simulation based on reductionist scientific theories. I later noted (in the post from 12:01 pm yesterday) that if an A-T advocate answered "yes" to this question about a simulation replicating outward behaviors, they wouldn't necessarily have to conclude that the simulation only had the appearance of thinking but wasn't really doing so, since they'd be free to imagine that God designed the metaphysical laws of the universe such that uploads would be thinking substantial forms. I didn't ask whether this was true because it seemed pretty obvious from previous discussions I've gotten into about what does and doesn't qualify as a substantial form (like on this thread) that A-T philosophy doesn't give us any way to deduce the answer with certainty; only God can really know.
But yes, what you're getting at here is the point: so far as I know, nothing in A-T philosophy absolutely rules out the possibility that such an upload might turn out to be a substance (with a substantial form) rather than an artifact (with only accidental forms).
Again this is not something I was ever asking about on this thread. I think I have been quite clear in all my statements of my question that I was only asking about empirically observable behaviors, and in fact I underlined this very point several times when David M seemed to be misunderstanding me to be talking about objective metaphysical truths.
If what you want to know is what A-T philosophy says about the possibility of such an upload, there just isn't any way to answer your question without talking about substantial forms and so forth.
My question was very specific, not some kind of general "what does A-T philosophy say about the possibility of such an upload". I am only asking what it says about whether it would be possible to have an upload that was behaviorally identical to a human, including all the behaviors we would normally take as evidence of "reasoning", leaving aside any metaphysical questions about whether the upload is "really" thinking (the existence of such a behaviorally identical upload would also be strong evidence for the conclusion that reductionist explanations in terms of brain function are also adequate to "explain" our own behavior in the purely scientific sense of making predictions, though not necessarily in terms of explanations of the metaphysical "causes" of our behavior). And I know what your answer to the question would be; I'm just not sure that Dr. Feser would give exactly the same one--as to your questions about why I see ambiguity, I'll address that in my next comment.
@Scott:
ReplyDelete"[O]nly in the sense that I find Dr. Feser's wording unclear on this point."
In what way? He disagrees that "there is nothing more to matter than what natural science attributes to it," and also disagrees that thinking "must be entirely embodied in some sort of corporeal process." What's unclear in his wording?
Obviously ambiguity is somewhat in the eye of the beholder, but the part that made it seem most ambiguous to me was the later statement I quoted, where he is dismissive of the "language of thought" hypothesis which says that "thinking -- the process of going from one thought to another -- involves the transition from one such symbolic structure to another in accordance with the rules of an algorithm." So his objection seems to concern the process that would generate subsequent thoughts from earlier ones, a relational issue, not anything about the essential nature of each individual thought itself, like the qualia associated with it or the fact that it has subjective meaning for a conscious subject. And if we're just talking about the relation between the content of different thoughts, the content is still there for the upload in the sense that the upload can make reports about the series of thoughts it's having.
Granted, he does later say "An obvious problem with this theory is that nothing counts as a sentence apart from language users who form the convention of using it as a sentence ... Similarly, someone could decide to count some brain process as a sentence, but until someone does so it is not a sentence." So he would probably say that though an upload might make reports on the sequence of thoughts, these reports wouldn't really have any semantic content except as experienced by a thinking substance (which the upload might or might not be). But that's back to being an argument about individual thoughts, which doesn't even require talking about the relational "process of going from one thought to another", so it seems there is some ambiguity to me.
Similarly in the earlier post that FZ linked to above, Dr. Feser attacks "computationalism", which also seems ambiguous to me. But computationalism does not normally imply eliminativism with regard to subjective experience (qualia)--the philosopher David Chalmers, who has influenced a lot of my own thinking on these questions, considers himself a computationalist, but also considers there to be a fundamental ontological distinction between objective physical processes including computations, and subjective qualia (he is well-known for championing the zombie argument to establish this distinction). He is a "computationalist" in the sense that he thinks all the behaviors we associate with having a mind can be generated by computations, and in that he argues for the idea of a one-to-one relationship between distinct computations and distinct subjective states (so Chalmers would presumably believe that an upload with identical behavior could in principle be created, and if so it should have the same subjective experiences, including a sense of meaning, as we do). So when Dr. Feser objects to "computationalism", would that include the brand of it that Chalmers argues for, or is he adopting some more narrow definition that says subjective states and thoughts are nothing but computations in an ontological sense? Again the meaning seems ambiguous to me.
@JesseM:
ReplyDelete"My question was only about whether A-T would be compatible with the idea that all the outward behaviors we associate with thinking might be the product of a computational simulation based on reductionist scientific theories."
Your original question was this: "Would it be a sort of falsification of A-T ideas if an upload of this type was successfully created and displayed the same wide-ranging thinking abilities as a normal human, or would the argument just be that despite all external appearances and abilities, it isn't really 'thinking' in terms of what's really happening here at an ontological level?"
And the answer, again, is that these are not the only two possibilities; it's entirely possible for A-T philosophy to accept that such an upload would genuinely be thinking, simply by recognizing it as a substance rather than an artifact. So A-T allows for the possibility that the upload might not be "thinking" at all, and also allows for the possibility that the upload is a thinking substance rather than an artifact. In principle it therefore makes no prediction about which will in fact be the case.
And so the answer to your shortened and revised version of the question is what it's been all along: "Yes." A-T is very obviously compatible with the idea that such an upload might behave outwardly like a human being, whether in fact it was a substance with a rational soul or merely an artifact.
"Again this is not something I was ever asking about on this thread."
I didn't say you asked it. Here's what you wrote in a later summary of your original question: "I just want to know whether the A-T philosophy predicts the uploading experiment could never produce something behaviorally indistinguishable from a human, or whether it allows that this is possible but says that it wouldn't actually be 'thinking'. (And given the premise that it works, even the latter seems questionable, I would think an A-T advocate who says uploading might work could be dubious an upload would have true consciousness and thought but would allow it's possible, since God might have intended from the beginning that uploads be a type of thinking substantial form that comes into being through proximate cause of human ingenuity . . . )"
So there you are, proposing the same false alternative in your question, and going on to propose the very same answer that I've been repeatedly giving you.
"I didn't ask whether this was true because it seemed pretty obvious from previous discussions I've gotten into . . . that A-T philosophy doesn't give us any way to deduce the answer with certainty . . , "
Well, if that's "obvious" to you, then I really don't know what your original question was about. That's right, A-T philosophy doesn't give us any way to deduce the answer with certainty. That is the answer to your original question. In principle, according to A-T, there's no reason such an upload couldn't result in a simulation that behaved just like a human outwardly, without reaching the question of whether there was anybody "in there."
"I'm just not sure that Dr. Feser would give exactly the same one . . . "
Neither your original question nor the later summary I just quoted was specifically about what Dr. Feser would say. You originally wanted to know what A-T would say.
@Neil:
ReplyDelete"Totally off topic, but did you know that Diana Hsieh claimed you don't understand Objectivism?"
No, but I'm not terribly surprised. She's not the first to think so and she won't be the last. Thanks for the link.
1. Computation involves symbol manipulation according to syntactical rules.
2. But syntax and symbols are not definable in terms of the physics of a system.
3. So computation is not intrinsic to the physics of a system, but assigned to it by an observer.
4. So the brain cannot coherently be said to be intrinsically a digital computer.
This is from here:
http://edwardfeser.blogspot.com/2012/02/popper-contra-computationalism.html
Computational models might be useful for making predictions, but they don't provide evidence for materialism/reductionism.
Computationalism works just fine if you don't adopt strict materialism/reductionism. Chalmers is also a dualist, so he can get away with it.
@JesseM:
ReplyDelete"So when Dr. Feser objects to 'computationalism', would that include the brand of it that Chalmers argues for, or is he adopting some more narrow definition that says subjective states and thoughts are nothing but computations in an ontological sense?"
I have no idea why you think this distinction is relevant to Ed's post. There's not a single reference to "computationalism" in it.
For the purposes of this post, all Ed needs is what he says: he rejects the view that "thinking" is a purely corporeal process having no properties but those attributed to matter by natural science.
As I'm sure you're aware, Ed has addressed computationalism in previous posts, where I, at least, think he's been adequately clear.
@Scott:
Apologies, I had forgotten that I did originally phrase the question as a dilemma, though I was primarily just wondering what the answer to the first part would be and erroneously thinking that the second part would be the only obvious alternative. But in the 12:01 pm post I did think to note that these weren't actually the only two possibilities, and I believe (without having gone back to reread everything) that in subsequent comments I did restate my question as just being about the behavioral issue. You quote my 12:01 pm post and say "So there you are, proposing the same false alternative in your question", but in that post you can see I first propose the dilemma, then note that the alternative I proposed isn't really the only one, and that an A-T advocate might in fact say an upload was a substantial form. To me blog comments are more like dialogues than polished essays, so I sometimes think aloud and end up changing my mind about something or qualifying a statement that strikes me as too strong.
Neither your original question nor the later summary I just quoted was specifically about what Dr. Feser would say. You originally wanted to know what A-T would say.
My original comment was stated in a way that was meant to indicate I was asking for Dr. Feser's interpretation of A-T philosophy, which I wasn't sure would agree with the interpretation of others (and this is a question that would require at least some interpretation, since Aristotle/Aquinas presumably didn't directly address anything closely resembling this issue). See this sentence from my first comment in particular:
'I think one commenter I discussed this with previously said he thought A-T philosophy wouldn't definitively rule out the idea that this would work, but some of your comments make me wonder if you would agree, as with the argument above against the position that "thinking, if there is such a thing, must involve a corporeal process having no properties over and above those described by physics, chemistry, neuroscience, and the like". '
Also, searching for mentions of Dr. Feser on this comment thread I see my second comment on the thread said "My question is just whether the A-T philosophy as Dr. Feser understands it would say this thought-experiment definitely wouldn't work", and my first comment to you at 5:04 pm repeatedly mentioned his name: "The reason I originally suggested this might falsify the position Dr. Feser was arguing for ... as Dr. Feser described the position he was opposing ... perhaps Dr. Feser would distinguish"
Reply to Scott continued:
I have no idea why you think this distinction is relevant to Ed's post. There's not a single reference to "computationalism" in it.
I didn't say it was directly relevant to the post (though it seems at least tangentially related given that he talked about whether sequences of thoughts are generated "in accordance with the rules of an algorithm"), but it is relevant to the question I asked, which itself seemed to me to be related to the post for the reasons I explained above. It's relevant to the question because if he did think the A-T philosophy would require rejecting even Chalmers' version of "computationalism", that would seem to mean he would either reject the idea that our behaviors could be produced by a computation, or might accept that possibility but would reject the idea that there'd be a one-to-one relation between computations and subjective experiences (which would be a strange position to take, since it would suggest two beings could have different subjective senses of the meaning of a thought, yet no matter how much you asked them about it they'd always give precisely identical answers).
As I'm sure you're aware, Ed has addressed computationalism in previous posts, where I, at least, think he's been adequately clear.
Sufficiently clear that you think you could answer on his behalf my question about whether Chalmers-style computationalism might be correct?
@Anonymous:
3. So computation is not intrinsic to the physics of a system, but assigned to it by an observer.
I think this specific objection is the same "implementation problem" that Chalmers discusses in his paper Does a Rock Implement Every Finite-State Automaton?, and in that paper he suggests a specific possible definition that would decide when an abstract computation has been "implemented" in a physical system (there might be other coherent definitions; if so, I suppose Chalmers would say that the correct definition would be determined by the "psychophysical laws" he speculates about).
I'll take my time to read through the paper, but
ReplyDelete"...a specific possible definition that would decide when an abstract computation has been "implemented" in a physical system."
Is this definition based on physics? If not, then it only relocates the problem rather than solving it. Could you perhaps direct me to it in the paper?
Also, in response to "requisite causal organization" Searle wrote:
"I think the main reason that the proponents do not see that multiple or universal realizability is a problem is that they do not see it as a consequence of a much deeper point, namely that the "syntax" is not the name of a physical feature, like mass or gravity. On the contrary they talk of "syntactical engines" and even "semantic engines" as if such talk were like that of gasoline engines or diesel engines, as if it could be just a plain matter of fact that the brain or anything else is a syntactical engine.
I think it is probably possible to block the result of universal realizability by tightening up our definition of computation. Certainly we ought to respect the fact that programmers and engineers regard it as a quirk of Turing's original definitions and not as a real feature of computation. Unpublished works by Brian Smith, Vinod Goel, and John Batali all suggest that a more realistic definition of computation will emphasize such features as the causal relations among program states, programmability and controllability of the mechanism, and situatedness in the real world. But these further restrictions on the definition of computation are no help in the present discussion because the really deep problem is that syntax is essentially an observer relative notion. The multiple realizability of computationally equivalent processes in different physical media was not just a sign that the processes were abstract, but that they were not intrinsic to the system at all. They depended on an interpretation from outside. We were looking for some facts of the matter which would make brain processes computational; but given the way we have defined computation, there never could be any such facts of the matter. We can't, on the one hand, say that anything is a digital computer if we can assign a syntax to it and then suppose there is a factual question intrinsic to its physical operation whether or not a natural system such as the brain is a digital computer.
And if the word "syntax" seems puzzling, the same point can be stated without it. That is, someone might claim that the notion of "syntax" and "symbols" are just a manner of speaking and that what we are really interested in is the existence of systems with discrete physical phenomena and state transitions between them. On this view we don't really need 0's and 1's; they are just a convenient shorthand. But, I believe, this move is no help. A physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation. The same problem arises without 0's and 1's because notions such as computation, algorithm and program do not name intrinsic physical features of systems. Computational states are not discovered within the physics, they are assigned to the physics."
http://instruct.westvalley.edu/lafave/is_brain_a_computer.html
Popper makes a similar point:
ReplyDelete"The property of a brain mechanism or a computer mechanism which makes it work according to the standards of logic is not a purely physical property, although I am very ready to admit that it is in some sense connected with, or based upon, physical properties. For two computers may physically differ as much as you like, yet they may both operate according to the same standards of logic. And vice versa; they may differ physically as little as you may specify, yet this difference may be so amplified that the one may operate according to the standards of logic, but not the other. This seems to show that the standards of logic are not physical properties. (The same holds, incidentally, for practically all relevant properties of a computer qua computer.)"
http://edwardfeser.blogspot.com/2012/02/popper-contra-computationalism.html
Jesse,
A computer program is an artifact. As such, its purpose and meaning derive from human intention. There is no possible world in which a computer program could have a substantial form and be classified as a rational substance, because "having a substantial form" is exactly what is ruled out by its being an artifact. Similarly, there is no possible world in which a computer program could perfectly mirror the behavior of a rational animal, because computer programs are fundamentally rule-following while rational animals are not. Rule-following is a non-free activity: one can only stop following one rule if another rule tells one to do so, and so forth. Even the most highly advanced computer brain could not fail to follow rules. On the other hand, if human brains worked this way, then the Kripkenstein argument would hold and all rules would become incomprehensible. As a result, the rule-following condition present in machines and absent in humans will remain an unbridgeable divide in mirroring behavior.
@rank sophist:
A computer program is an artifact. As such, its purpose and meaning derive from human intention. There is no possible world in which a computer program could have a substantial form and be classified as a rational substance, because "having a substantial form" is exactly what is ruled out by its being an artifact.
It seems you are disagreeing with Scott, who said above in the 9:09 am post that "nothing in A-T philosophy absolutely rules out the possibility that such an upload might turn out to be a substance (with a substantial form) rather than an artifact (with only accidental forms)." And regardless of what Dr. Feser would say about uploads that follow computational rules, I don't think he would agree that anything that had the efficient cause of being constructed by mortal intelligent beings must automatically be an artifact rather than a substantial form. This would imply, for example, that if the earliest life on this planet was created by aliens and that we evolved from these artificially-constructed ancestors, then we could not be substantial forms. But Dr. Feser addressed such a scenario in his post on the movie Prometheus, and although he does not explicitly talk about artifacts vs. substantial forms he does say "That the Engineers made man thus has no more significance for classical theism than the fact that each of us has parents does."
@rank sophist:
ReplyDelete"A computer program is an artifact. As such, its purpose and meaning derive from human intention. There is no possible world in which a computer program could have a substantial form and be classified as a rational substance, because 'having a substantial form' is exactly what is ruled out by its being an artifact."
This just begs the question. All A-T can say about the matter is that if an instantiation of a computer program did turn out to be genuinely able to think, the embodied program would be a substance rather than an artifact.
For example, there's nothing contrary to A-T in supposing either that God imbues the implementation with a soul/substantial form when it's run (much as He imbues a zygote with a soul when it's first conceived), or that God has established the causal properties of nature in such a way that certain implementations of algorithms have substantial forms. I do not in fact think that either of those is the case, but there's nothing strictly inconsistent with A-T in either one.
@JesseM:
ReplyDelete"Dr. Feser addressed such a scenario in his post on the movie Prometheus, and although he does not explicitly talk about artifacts vs. substantial forms he does say "That the Engineers made man thus has no more significance for classical theism than the fact that each of us has parents does."
See there? You do know what Ed thinks about this.
Jesse,
It seems you are disagreeing with Scott,
That's because, with all due respect to Scott, he's wrong on this point.
And regardless of what Dr. Feser would say about uploads that follow computational rules, I don't think he would agree that anything that had the efficient cause of being constructed by mortal intelligent beings must automatically be an artifact rather than a substantial form.
He wouldn't agree, and neither would I. But a computer brain is an artifact because it has no intrinsic teleology. The meanings and processes of a computer are separate from and imposed upon the materials that compose the computer. Saying that a computer isn't an artifact is like saying a gold necklace isn't an artifact.
@Anonymous:
Is this definition based on physics? If not, then it only relocates the problem rather than solving it. Could you perhaps direct me to it in the paper?
He gives his definition of a "combinatorial state automaton", which can do all the same things as a traditional finite state automaton, in section 6 of the paper, and then gives a "definition of implementation" in the indented paragraph in that section. It does have some dependence on the physical in that it treats the system's state as a vector with many components, and each component "must correspond to a distinct region in the physical system", though at the end of section 7 he suggests that this constraint may not be necessary. I'm also not clear to what extent the definition depends on physical counterfactuals. He says that each component itself has multiple possible states; I think an example would be how each cell of a cellular automaton has a finite range of possible colors (often just black or white), but it seems that for physical systems this would involve knowing what possible physical states a given subsystem might be in.
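To give the flavor of how a definition like that can be stated, here is a toy mock-up (my own Python sketch, not Chalmers' formalism; it also checks only one recorded history, so it ignores the counterfactual conditions that do the real work in his definition):

```python
# Toy version of the implementation condition: a physical system implements
# a combinatorial-state automaton (CSA) if its state decomposes into
# regions, each region's physical state maps to a CSA substate, and the
# physical transitions mirror the CSA's transition rule.

def implements(history, region_maps, transition):
    """history: physical region-states over time, one tuple per tick.
    region_maps: per region, a dict from physical state to CSA substate.
    transition: dict from a CSA state-vector to its successor."""
    abstract = [tuple(m[s] for m, s in zip(region_maps, phys))
                for phys in history]
    return all(transition[now] == nxt
               for now, nxt in zip(abstract, abstract[1:]))

# Two physical regions ("hi"/"lo" voltages) mapped to bits, implementing
# a trivial two-bit counter.
maps = [{"hi": 1, "lo": 0}, {"hi": 1, "lo": 0}]
rule = {(0, 0): (0, 1), (0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 0)}
trace = [("lo", "lo"), ("lo", "hi"), ("hi", "lo"), ("hi", "hi")]
print(implements(trace, maps, rule))  # True
```

The Putnam-style worry is that with gerrymandered region maps almost any system would pass a check like this; the counterfactual requirements, which an after-the-fact check of one trace cannot capture, are what is supposed to block that.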
Probably the definition he gives for implementation is not the only possible one that would avoid falling prey to the Putnam argument, but I don't think that's a problem, since as I said the "true" answer to the implementation problem could be part of the "psychophysical laws" governing the relation between the physical world and subjective states which he postulates. Non-uniqueness would imply we humans could never be sure what the details of the psychophysical laws actually were (it wouldn't be a question that could be determined by experiment as with physical laws), but I don't think that's a fatal weakness since he's just trying to put forth an internally coherent metaphysical picture that would be able to address various mind/body problems.
@rank sophist:
ReplyDelete"That's because, with all due respect to Scott, he's wrong on this point."
That's entirely possible, but I'd like to see a non-question-begging argument to that effect.
@rank sophist:
But a computer brain is an artifact because it has no intrinsic teleology.
That's just an assertion; someone could equally well assert that an artificially constructed cell has no intrinsic teleology and therefore is not a substance either. If you think an artificially created cell could have intrinsic teleology but a computer couldn't (including one controlling a robot body moving around in the real world, just like an artificial cell), then what specific properties of the computer make the crucial difference here? For example, is it the fact that it's deterministic, so we could run it twice with the same starting state and inputs and it would produce exactly the same output?
@rank sophist:
ReplyDelete"[A] computer brain is an artifact because it has no intrinsic teleology. The meanings and processes of a computer are separate from and imposed upon the materials that compose the computer."
This is a bit beside the point. The matter of which the computer is composed has its own intrinsic teleology just as everything does, and neither you nor I nor anyone else knows that its intrinsic teleology doesn't include any causal powers to participate in the generation of an overarching consciousness/mind/self when arranged in certain ways and/or caused to behave in certain patterns—quite independently, perhaps, of any "meanings" the operations of the computer might have to us.
JesseM, thanks for the clarification. I'll leave one last comment. From section 6:
ReplyDelete"The substates correspond to symbols in those squares or particular values for the cells."
This is the problem, and this is the entire point/focus of Searle's premise:
2. But syntax and symbols are not definable in terms of the physics of a system.
Symbols are irreducible to physical laws/fundamental forces, etc. Thus, computationalism is not compatible with strict physical reductionism.
Anonymous, I don't think he's talking about "symbols" that have semantic meaning, just placeholders used in computations like the 1's and 0's on a Turing machine tape, or the two possible values (black or white) that cells can take in a simple cellular automaton like Conway's Game of Life. His proposal is just about coming up with a rule that can determine whether or not a physical system is "implementing" an abstract computation, nothing to do with the symbolic meaning of the computation; the idea is that the psychophysical laws could then do the further work of tying any implementation of an abstract computation to a particular subjective state, including the subjective perceptions of things having "meaning". So all he's doing in that paper is looking for a way to avoid the specific criticism that the state of any physical system could be mapped to any possible abstract computation; even if they could be so mapped by an observer who chose to do so, the point is that the psychophysical laws could select one unique "correct" mapping, so that only when a physical system implements abstract computation X under that mapping do the subjective states associated with X occur.
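For instance, here is the standard local rule for one step of the Game of Life (an ordinary textbook implementation, sketched in Python); the 1's and 0's are manipulated purely as placeholders, and nothing in the rule consults whatever meaning an observer might assign to them:

```python
# One step of Conway's Game of Life on a toroidal grid: the values 0 and 1
# are bare placeholders updated by a local counting rule; no semantic
# interpretation of the cells enters into the computation.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1  # a horizontal "blinker"
print(life_step(grid))  # the blinker flips to vertical: (1,2), (2,2), (3,2)
```

The update is exhausted by neighbor-counting; any "symbolic" reading of the cells is imposed from outside, which is just the distinction I'm drawing.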
I don't think "placeholder" is the word you want to use; a placeholder is still a symbol/representation of something(s) outside of itself. And Searle addresses the "decision as to whether or not a physical system is a computer" in the quote I provided.
@Anonymous
And Searle addresses the "decision as to whether or not a physical system is a computer" in the quote I provided.
But he doesn't address the possibility that psychophysical laws could select a unique answer, in that only a certain type of mapping between physical states and abstract computations "counts" for the purposes of giving rise to the subjective sensations associated with a given computation (with the mapping between subjective sensations and computations also determined by the psychophysical laws). For an advocate of A-T philosophy this notion shouldn't seem too foreign, as I think it's fairly similar to the idea Scott expressed in his most recent comment that matter can have 'causal powers to participate in the generation of an overarching consciousness/mind/self when arranged in certain ways and/or caused to behave in certain patterns—quite independently, perhaps, of any "meanings" the operations of the computer might have to us.'
"But he doesn't address the possibility that psychophysical laws could select a unique answer, in that only a certain type of mapping between physical states and abstract computations "counts" for the purposes of giving rise to the subjective sensations associated with a given computation (with the mapping between subjective sensations and computations also determined by the psychophysical laws). For an advocate of A-T philosophy this notion shouldn't seem too foreign, as I think it's fairly similar to the idea Scott expressed in his most recent comment that matter can have 'causal powers to participate in the generation of an overarching consciousness/mind/self when arranged in certain ways and/or caused to behave in certain patterns—quite independently, perhaps, of any "meanings" the operations of the computer might have to us.'"
Searle's point is that computationalism cannot rely only on physical laws. By bringing in psychophysical laws, you are kinda proving his point. It seems to me that this necessitates some form of dualism: on one hand, you have physical laws governing the behavior of physical systems, and on the other hand you have psychophysical laws governing the connections between subjective sensations, abstract computations, and physical systems. Both are distinct and required. Though I don't think Chalmers, being a dualist, would have an issue with that entailment.
Rank Sophist: Saying that a computer isn't an artifact is like saying a gold necklace isn't an artifact.
A gold necklace isn't an artifact. ... if God makes it so. Or rather, if God annihilates the artifact and replaces it with a substance that looks exactly like a gold necklace. Similarly, Scott is saying that it's at least hypothetically possible for God to have established a rule of nature such that when matter is brought together in a certain way to make a certain kind of computer, it brings a new rational substance into being. (And as Scott also pointed out, we know that this is possible at least in principle because that is the same kind of thing that happens when a new human being is conceived.)
Now if your point is that should such a thing happen, the resulting rational substance should not be called a "computer"... well, we should get our definitions clear up front. If we agree to define "computers" or "necklaces" as kinds of artifacts, then of course we should instead say that the correct assembly of matter will result in a pseudo-computery substance.
Anyway, the point is that there is nothing in A-T that makes it impossible for there to be a universe where one or both of the following could occur:
—matter is assembled in some computery-type form such that a thinking substance comes into being
—matter is assembled to make a computery artifact that does not think, but fakes it well enough to fool us
(Whether the actual laws of physics we're stuck with will allow it is another question, but I agree with Scott that either outcome is compatible with A-T metaphysics.)
JesseM: For an advocate of A-T philosophy this notion shouldn't seem too foreign, as I think it's fairly similar to the idea Scott expressed in his most recent comment that matter can have 'causal powers to participate in the generation of an overarching consciousness/mind/self when arranged in certain ways and/or caused to behave in certain patterns—quite independently, perhaps, of any "meanings" the operations of the computer might have to us.'
It's not clear to me whether this sentence indicates a misunderstanding of Scott's point or not: to clarify (I hope), it is not the case that matter could be arranged so as to form a machine (artifact) that has some sort of mind, or even that directly and fully generates a mind. In A-T, the mind or soul is a substantial form, i.e. whatever has it is a substance — so machines can never think. What might be possible is for something that you start off constructing like a machine to at some point cease to exist and a new, substantial entity comes into being — such as when "building" a so-called test-tube baby. If you end up with a baby, then it is a substance and not an artifact, no matter what events preceded its creation. Anything that actually thinks must have an intellect (which is something immaterial), and must be a substance, regardless of what it looks like to us.
On the other hand, what we start out assembling as an artifact might simply remain an artifact. Anything that is a computing machine in that sense cannot really be thinking, no matter how good it is at fooling us. But we don't even need fancy robots — a book is good enough to illustrate the relevant principles. We talk about ideas in a book, but only figuratively; the ideas or thoughts are not literally in the book, but only in a figurative or derived sense. (The thoughts really come from the author and are represented in some way in the book.) The same goes for an audio-book, even if it fools you into thinking it's a person talking (maybe you can't see where the sounds are coming from). A robot is effectively just a really really fancy book. If it really is an artifact, then it's not really thinking; and if it really is thinking, then it's not an artifact.
Rank Sophist: On the other hand, if human brains worked this way, then the Kripkenstein argument would hold and all rules would become incomprehensible. As a result, the rule-following condition present in machines and absent in humans will remain an unbridgeable divide in mirroring behaviour.
I don't think I follow. Now for one thing, Jesse is interested in what a hypothetical artificial intelligence would look like from the "outside", i.e. based on sense data. Such empirical observations always carry a margin of error that could be used to fudge things, so a machine doesn't have to simulate a mind perfectly — it just has to be "good enough".
But beyond that, I'm not sure why a machine couldn't (hypothetically) simulate exact behaviour. Certainly, the Kripkenstein paradox indicates that a computing machine cannot have a mind, but it can behave like one because its behaviour depends on something that does have a mind (namely, the programmer who made it). To return to my earlier example, books are so good at "simulating" their author's thoughts that we commonly speak about authors who are long dead and gone as though we were holding conversations with them. If you could wrap up a book in fancy robot-shell so that it didn't look like a book, that's no big deal — appearances can be deceiving.
@Mr. Green:
It's not clear to me whether this sentence indicates a misunderstanding of Scott's point or not: to clarify (I hope), it is not the case that matter could be arranged so as to form a machine (artifact) that has some sort of mind, or even that directly and fully generates a mind. In A-T, the mind or soul is a substantial form, i.e. whatever has it is a substance — so machines can never think. What might be possible is for something that you start off constructing like a machine to at some point cease to exist and a new, substantial entity comes into being — such as when "building" a so-called test-tube baby. If you end up with a baby, then it is a substance and not an artifact, no matter what events preceded its creation. Anything that actually thinks must have an intellect (which is something immaterial), and must be a substance, regardless of what it looks like to us.
I didn't use the word "machine" or "artifact" in that post; I was just suggesting some similarity between Scott's comment and Chalmers' view about any appropriately-constituted system giving rise to experiences. But on this subject, in A-T philosophy is "artifact" taken to be a fundamental ontological category like a "substance", as opposed to a somewhat fuzzy and subjective human descriptive category? An A-T advocate should believe there is always a single objective truth about whether something is a substance, even if that truth isn't knowable with certainty by humans; is the same true for artifacts? If I perform some very minor modification on some naturally-occurring thing to make it more useful to me, like bending a plant stem to produce a hook to grab at something in a narrow opening my hand can't reach into, I might or might not choose to call it a "tool" or "artifact"--could I be objectively mistaken?
@Mr. Green:
ReplyDelete"[T]o clarify (I hope), it is not the case that matter could be arranged so as to form a machine (artifact) that has some sort of mind, or even that directly and fully generates a mind. In A-T, the mind or soul is a substantial form, i.e. whatever has it is a substance — so machines can never think. What might be possible is for something that you start off constructing like a machine to at some point cease to exist and a new, substantial entity comes into being — such as when 'building' a so-called test-tube baby. If you end up with a baby, then it is a substance and not an artifact, no matter what events preceded its creation. Anything that actually thinks must have an intellect (which is something immaterial), and must be a substance, regardless of what it looks like to us."
That's it exactly. Thank you.
@Mr. Green:
ReplyDelete"Scott is saying that it's at least hypothetically possible for God to have established a rule of nature such that when matter is brought together in a certain way to make a certain kind of computer, it brings a new rational substance into being. (And as Scott also pointed out, we know that this is possible at least in principle because that is the same kind of thing that happens when a new human being is conceived.) . . .
Anyway, the point is that there is nothing in A-T that makes it impossible for there be a universe where one or both of the following could occur:
—matter is assembled in some computery-type form such that a thinking substance comes into being
—matter is assembled to make a computery artifact that does not think, but fakes it well enough to fool us.
(Whether the actual laws of physics we're stuck with will allow it is another question, but I agree with Scott that either outcome is compatible with A-T metaphysics.)"
Again, that's exactly what I had in mind, and again, thank you.
Wait, where does Chalmers talk about psychophysical laws?
I agree that eliminativism makes no sense, but the overwhelming majority of naturalists are REDUCTIVE materialists, who believe that the subjective feelings of a bat emitting ultrasound are REAL, but fully identical with some brain processes.
Yet, for a mysterious reason, most reductive materialists believe that a neuroscientist knowing everything about these processes would not know the inner experience of the bat.
Lothars Sohn - Lothar's son
http://lotharlorraine.wordpress.com
@JesseM
I think you have missed the point of the post entirely. Anyhow, I'm pretty sure that the short answer to your question is "No". For my own part I think that if mind uploading were possible it would actually count as a point against materialism, insofar as the physical process of a digital computer is just about as different as any process can be from the physical process of a brain. If we were to assert that these two processes actually had something in common it would be an immaterial something. Software is no more material than poetry is.
@rank sophist
"A computer program is an artifact. As such, its purpose and meaning derive from human intention."
This argument always seems to me to conflate "a computer does what we tell it to do" with "a computer does what we intend it to do". I agree with @Scott: the question is being begged.
A computer program follows rules in exactly the same way that I obey the laws of physics. The "rule" in that sense does not need to have a meaning when it is obeyed.
Kripkenstein doesn't enter into it.
@Anonymous:
He doesn't talk about psychophysical laws in that paper, but he talks about them in his book The Conscious Mind, and on pp. 315-322 he repeats the argument of that paper about what it means to "implement" a computation and connects this to his principle of "organizational invariance" discussed earlier in the book (in a series of sections that starts on p. 247), which says that systems implementing the same computation should have the same qualia (based on the same arguments he makes in this paper); this is a proposal for how psychophysical laws would work. For example, on p. 273 he talks about how he thinks this invariance principle would be derived from more fundamental psychophysical laws, saying "It is therefore a law, for certain functional organizations F, that realization of F will be accompanied by a specific kind of conscious experience. This is not to say it will be a fundamental law. It would be odd if the universe had fundamental laws connecting complex functional organization to conscious experiences. Rather, one would expect it to be a consequence of simpler, more fundamental psychophysical laws. In the meantime, the principle of organizational invariance acts as a strong constraint on an ultimate theory."
@reighly:
I think you have missed the point of the post entirely.
In what way? I didn't comment on what the main point of the post was; it's just that some of the wording of the post brought to mind a longstanding question I've had about whether A-T philosophy is compatible with the idea that the external behaviors associated with thought (whether in an upload, or in ourselves) might be entirely predictable in a computable/reductionist manner.
Anyhow, I'm pretty sure that the short answer to your question is "No".
So you disagree with those on this thread--Scott and David M, at least--who said A-T philosophy would be compatible with the idea that an upload could be behaviorally indistinguishable from a human?
For my own part I think that if mind uploading were possible it would actually count as a point against materialism, insofar as the physical process of a digital computer is just about as different as any process can be from the physical process of a brain. If we were to assert that these two processes actually had something in common it would be an immaterial something.
They would have mathematical structure in common, I suppose. I don't think "materialism" is necessarily taken to mean that there can't be objective mathematical truths. Would you also say that an accurate simulation of nonliving things, like this one involving water molecules, is a strike against materialism?
@JesseM:
ReplyDelete"[S]ome of the wording of the post brought to mind a longstanding question I've had about whether A-T philosophy is compatible with the idea that the external behaviors associated with thought (whether in an upload, or in ourselves) might be entirely predictable in a computable/reductionist manner."
That's a very different question from the one you asked, though. Assuming that you simulated my brain at the neural level, it wouldn't follow that I and it would forever afterward behave in exactly the same ways, whether or not there was "anyone at home" in the simulation. That sort of reductionism is at odds with A-T. I have a substantial form, and the simulation either doesn't or has another one of its own.
@JesseM:
Perhaps that question is what you had in mind when you wrote: "'Reductionism' in the purely predictive, empirical sense would suggest that if the simulators get the details right, the simulated brain would behave just like the original living brain--it would pass long-term Turing tests, people who knew the person the original brain came from could converse with it at length and not detect any difference in its thinking or personality, etc."
However, I didn't take "behave just like the original living brain" to mean that it might behave in exactly the same way in every detail as the original brain, partly because it obviously wouldn't have the same experiences (or "simulated experiences," whatever those might be) and partly because you yourself went on (or so I thought) to explain what you meant: that the simulation would pass Turing tests and seem to people who knew the original person to be like him/her in personality, and so forth.
If you mean that the simulation would be a fully 100% accurate predictor of the behavior of the original brain, then no, that scenario is incompatible with A-T, for the very simple reason that according to A-T the behavior of the brain is not reducible to the behavior of its neurons. The soul runs the brain, not vice versa.
I should add that A-T doesn't rule out the possibility that the original brain and the simulation could "run in parallel" for a very long time; it's just that A-T wouldn't take the simulation to be a guaranteed-reliable predictor of the behavior of the original brain (at least as long as the original brain was part of a person). Again, this applies whether or not the simulation has its own rational soul.
ReplyDelete"His proposal is just about coming up with a rule that can determine whether or not a physical system is "implementing" an abstract computation, nothing to do with the symbolic meaning of the computation; the idea is that the psychophysical laws could then do the further work of tying any implementation of an abstract computation to a particular subjective state, including the subjective perceptions of things having "meaning"."
Searle isn't concerned with connections to subjective sensations or what a computation "feels like", but rather just with the abstract computations. A computation is not the manipulation of placeholders; it is the manipulation of symbols according to syntactical rules. I don't see how Chalmers' implementation gets past that. Searle is arguing that there are no abstract computations independent of minds. Thus, Chalmers' discussion of determining whether or not a physical system is implementing an abstract computation begs the question against Searle.
Sure, you could argue that some things are objectively potential computations, and some are not. Like 1's and 0's and neuronal states are potential computations, but a wall is not. But a potential computation isn't an actual computation, until an external mind treats it as such. Thus you still have the problem of requiring something other than the physics of a system.
@Scott:
That's a very different question from the one you asked, though. Assuming that you simulated my brain at the neural level, it wouldn't follow that I and it would forever afterward behave in exactly the same ways, whether or not there was "anyone at home" in the simulation. That sort of reductionism is at odds with A-T. I have a substantial form, and the simulation either doesn't or has another one of its own.
Is there any rule that says two substantial forms can't behave identically? It's true that even from a reductionist perspective the simulation probably wouldn't behave identically to the biological individual it was based on, because of the butterfly effect in chaos theory (any minuscule differences in the initial conditions or external inputs of two nearly identical systems will cause their behaviors to diverge significantly over time if the dynamics involve any chaos, as brain dynamics are believed to). So when I said it was behaviorally identical, I didn't mean it would mirror every action of the original in parallel with it, just that it would be qualitatively identical, so that no one could tell which was which in a blind test. But you allow for the possibility that the simulation has a substantial form of its own, and any computer simulation can be copied and run on multiple computers; as long as the initial state and inputs were completely identical, each copy of the program should behave identically. So combined with your view that it's not impossible the upload could be a thinking substance, this would seem to imply it's not impossible to have two thinking substances that can be predicted to behave identically with certainty for an indefinite amount of time, under the right circumstances.
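A toy numerical sketch of that sensitive dependence, using the chaotic logistic map rather than anything brain-like (the map, its parameter, and the perturbation size are illustrative assumptions, not a model of neurons):

```python
# Two runs of the chaotic logistic map x -> r*x*(1-x) that start 1e-12
# apart, illustrating sensitive dependence on initial conditions
# (the "butterfly effect"). Purely illustrative; nothing here models a brain.
r = 4.0                      # parameter value at which the map is chaotic
x, y = 0.2, 0.2 + 1e-12      # nearly identical initial conditions

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step}: |x - y| = {abs(x - y):.3e}")
# The gap grows from ~1e-12 to order 1 within a few dozen steps: the two
# trajectories become qualitatively different even though they follow
# exactly the same rule from almost exactly the same starting point.
```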
Also, if the behavior of an upload is qualitatively just like a person--if no one can tell it's not the original in a blind test, including friends/family etc.--I would take that as pretty strong evidence (though not proof) that our own behavior could in principle be predicted in the same way, if we knew the precise initial physical state of a person + environment along with the outcome of any purely random events. Uploads with behavior indistinguishable from humans would strongly suggest that no fundamentally new ingredients are likely to present obstacles to predicting human behavior, like a type of "free will" which is neither deterministic nor random, and would suggest the only reason that we couldn't in practice predict a biological human's behavior just as accurately as an upload's is some combination of not being able to know the initial physical state with infinite precision (the butterfly effect again) and the possibly random elements in fundamental physics.
Anonymous wrote:
A computation is not the manipulation of placeholders; it is the manipulation of symbols according to syntactical rules.
As I said, the symbols don't have to have any "meaning" (you objected to the term "placeholders"; I'm fine with any other word you might want to use for symbols with no external referents), and I don't see why the rules must be called "syntactical" any more than the rules of physics are syntactical. In a cellular automaton like the Game of Life, do the black vs. white values of each cell on each turn need to "represent" anything? Are the rules for how cells change values based on the values of their neighbors any more syntactical than the rules for the accelerations charged particles experience based on their proximity to other charged particles?
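For concreteness, here is a minimal sketch of one Game of Life update (the grid size and starting pattern are arbitrary choices for illustration); the point is that the rule consults only cell values and neighbor counts, and nothing in it refers to what, if anything, those values represent:

```python
# One synchronous update of Conway's Game of Life on a small wrapped grid.
# The rule mentions only cell values (0 or 1) and neighbor counts; it never
# refers to any external meaning those values might be given.
def life_step(grid):
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            neighbors = sum(grid[(i + di) % n][(j + dj) % n]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
            # a live cell survives with 2 or 3 live neighbors;
            # a dead cell becomes live with exactly 3
            nxt[i][j] = 1 if neighbors == 3 or (grid[i][j] == 1 and neighbors == 2) else 0
    return nxt

# A "blinker": three live cells in a row oscillate with period 2.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
for row in life_step(grid):
    print(row)
```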
Searle is arguing that there are no abstract computations independent of minds.
If by "independent of minds" you mean "independent of being interpreted as computations by minds", I don't see why you can't have psychophysical laws, which are not themselves "minds", that determine whether or not a physical system is implementing a computation, and that further determine what subjective experiences arise based on what computations are implemented physically. As a thought-experiment we could even take minds out of the picture--it's logically possible to imagine a universe where the laws of physics "care" about the implementation issue, in the sense that a particular type of effect (say, a special type of particle) is only generated when some other system has implemented a particular computation, with the criteria for what counts as an "implementation" of that computation themselves being part of the laws of physics (perhaps using Chalmer's proposed solution for what counts as an implementation).
Is your objection just that even if laws of physics or psychophysics were sensitive to particular complex patterns of cause-and-effect that someone like Chalmers might be inclined to call "computations", these patterns of cause-and-effect wouldn't really be computations unless some mind was observing them and identifying them as such? If so this seems like just a matter of semantics, with Chalmers and Searle defining the word "computation" somewhat differently. Perhaps we could just call Chalmers' definition "C-computation" and Searle's definition "S-computation"; then it might be true that there is an objective observer-independent answer to whether a C-computation has been implemented by a physical system, but not whether an S-computation has been implemented. But in that case, I don't see why "computationalism" as Chalmers conceives of it should require anything more than an objective answer to the implementation problem for C-computations.
@JesseM:
ReplyDelete"Is there any rule that says two substantial forms can't behave identically?"
Again, be careful with the A-T terminology: I think you mean "substances" rather than "substantial forms" here. And if so, then no, there's no such principle in general, and I've agreed that it's not impossible that even two substances whose substantial forms were rational souls could chug along identically for quite some time. It's just that we couldn't guarantee that the behavior of one was a reliable predictor of the behavior of the other.
"[W]hen I said it was behaviorally identical I didn't mean it would mirror every action of the original in parallel with it, just that it would be qualitatively identical so no one could tell which was which in a blind test."
Good, that's what I took you to mean.
"But you allow for the possibility that the simulation has a substantial form of its own, and any computer simulation can be copied and run on multiple computers, as long as the initial state and inputs were completely identical each copy of the program should behave identically. So combined with your view that it's not impossible the upload could be a thinking substance, this would seem to imply it's not impossible to have two thinking substances that can be predicted to behave identically with certainty for an indefinite amount of time, under the right circumstances."
That's not how substantial forms work; you seem to be thinking of them reductively. If an AI is a thinking substance, then it has a rational soul (that's what its "substantial form" is), and its behavior from its first moment of existence onward is not reducible to the execution of the program that it instantiates (although it could conceivably operate deterministically at a higher level that included its intellect and will). Conversely, if it lacks a rational soul (as I think would in fact be the case even though A-T metaphysics itself doesn't commit us to that view), then its behavior would be strictly determined by the program and it would not be "thinking" by A-T standards.
"Also, if the behavior of an upload is qualitatively just like a person--if no one can tell it's not the original in a blind test, including friends/family etc.--I would take that as pretty strong evidence (though not proof) that our own behavior could in principle be predicted in the same way, if we knew the precise initial physical state of a person + environment along with the outcome of any purely random events."
Then you're not an Aristotelian-Thomist; fair enough. But what A-T would say here is that such apparent evidence would be no such thing, since we know independently that such reductionism (of human behavior to physics) is false.
@Scott:
That's not how substantial forms work; you seem to be thinking of them reductively. If an AI is a thinking substance, then it has a rational soul (that's what its "substantial form" is), and its behavior from its first moment of existence onward is not reducible to the execution of the program that it instantiates (although it could conceivably operate deterministically at a higher level that included its intellect and will).
But there are different possible meanings of "reducible". I am not saying the upload needs to be ontologically reducible to the sum of the parts of the computer running it and their individual behaviors, i.e. metaphysical reductionism. Ontologically, regardless of whether its behavior is predictable, the whole can have a real existence distinct from the parts, perhaps replacing them so they only exist "virtually" (to use a term that Mr. Green brought up at the end of this comments thread, hopefully I'm getting the usage right). I'm just talking about empirical predictions being reducible to an understanding of the rules of the computer program. Is it not in God's power to make a thinking substance whose behavior can be predicted in this way, even if it isn't metaphysically "caused" by the functioning of real existing lower-level parts?
Also, since you said it's possible God would make the universe so that building an upload caused a thinking substance to come into being, are you now adding that if this were the case, it would automatically have to imply that if every detail of the 1's and 0's representing the upload's state over time were printed out on screen (which would still presumably be visible in the thinking substance even if the 1's and 0's in the computer now have only a "virtual" existence, just like cells and organs are visible in a person), we would necessarily have to see that the way they changed over time no longer matched what we'd expect given the original rules of the program? If so, is this because A-T philosophy absolutely demands that thinking substances have libertarian free will, and compatibilism must be rejected along with the idea that unpredictability in behavior is due purely to chance? (And if that's the case, I would imagine this element of the philosophy wasn't clearly present in Aristotle's writings, and was an element added by Aquinas? See for example "Free will in ancient philosophy," which says that although for Aristotle "Some events depend on chance", it is also true that "his physical theory of the universe, the action he allots to the noûs poietikós, and the irresistible influence exerted by the Prime Mover make the conception of genuine moral freedom in his system very obscure and difficult.")
@JesseM:
ReplyDelete"[A]re you now adding that if this were the case, it would automatically have to imply that if every detail of the 1's and 0's representing the upload's state over time were printed out on screen . . . . , we would necessarily have to see that the way they changed over time no longer matched what we'd expect given the original rules of the program?"
I'm not saying that would necessarily happen, but I'm certainly saying it could, yes.
"If so is this because A-T philosophy absolutely demands that thinking substances have libertarian free will, and compatibilism must be rejected along with the idea that unpredictability in behavior is due purely to chance?"
Not at all. As I said, as far as this point is concerned it's still possible that rational souls could operate deterministically at the appropriate level (i.e., the level of intellect and will). All I'm concerned to rule out here is purely physical determinism.
@JesseM:
ReplyDelete"Is your objection just that even if law of physics or psychophysics were sensitive to particular complex patterns of cause-and-effect that someone like Chalmers might be inclined to call 'computations', these patterns of cause-and-effect wouldn't really be computations unless some mind was observing them and identifying them as such? If so this seems like just a matter of semantics . . . "
That doesn't seem like a purely semantic issue to me (not, mind you, that semantic issues are unimportant in their own right). The point is that mind is logically prior to "computation" in any sense of the latter term that could possibly be relevant to a theory of mind/consciousness.
Mr. Green,
Or rather, if God annihilates the artifact and replaces it with a substance that looks exactly like a gold necklace. Similarly, Scott is saying that it's at least hypothetically possible for God to have established a rule of nature such that when matter is brought together in a certain way to make a certain kind of computer, it brings a new rational substance into being.
If a computer became a rational substance rather than an artifact, then it would no longer be in any way like a computer. It would magically gain abilities that no computer has--such as nutritive and sensitive powers. Otherwise, it would be an animal composed of non-living materials, which is impossible on the Porphyrian tree. If artificial intelligence became capable of thought, then it would no longer be artificial intelligence and it would no longer be housed in a computer: a radically different and possibly contradictory kind of substance would have to come into being to support the rational soul. Similarly, a natural necklace would change heavily. It would suddenly have a substantial form that we could discover, just as gold and water do; and it could recur in nature.
But beyond that, I'm not sure why a machine couldn't (hypothetically) simulate exact behaviour. Certainly, the Kripkenstein paradox indicates that a computing machine cannot have a mind, but it can behave like one because its behaviour depends on something that does have a mind (namely, the programmer who made it).
If it is not a mind, then it's incapable of behaving exactly like a mind. Just like a book can't have a conversation with you. Any non-free, rule-following "mind" will fail to exactly simulate a free, rational mind, since their formal differences are so vast. The rule-following zombie can certainly give the appearance of rationality, but it will always necessarily break down in places that a real mind would not.
Jesse,
then what specific properties of the computer make the crucial difference here?
First, a computer lacks all signs of life. It has no nutritive or sensitive powers, and so it does not regenerate, reproduce or self-move. If it had a rational soul, then we would have two options: either A) rational souls could exist in non-living bodies, which contradicts A-T; or B) it would totally change and gain the powers of life.
reighley,
A computer program follows rules in exactly the same way that I obey the laws of physics.
First, there are no laws of physics. Those are abstractions from the actually existent regularities within individual substances. Second, if your mind obeys the "laws of physics" in the way that a computer brain follows rules, then Kripkenstein most definitely enters into it. Minds cannot simultaneously be minds and follow rules: the two are mutually exclusive.
Scott,
The matter of which the computer is composed has its own intrinsic teleology just as everything does, and neither you nor I nor anyone else knows that its intrinsic teleology doesn't include any causal powers to participate in the generation of an overarching consciousness/mind/self when arranged in certain ways and/or caused to behave in certain patterns
This is reductionistic emergence that is wholly incompatible with A-T. A rational soul is created individually by God and implanted into living matter at the moment of conception. It in no way "emerges" from matter, and matter can in no way have an intrinsic telos toward the outcome of a rational soul. Arrangements of matter are totally irrelevant to this issue. The only way that a computer could become rational is if God destroyed the artifact and replaced it with a totally different, and living, substance. It could not simply be a computer that thought: it would take on numerous strange and possibly contradictory traits in the process.
@rank sophist:
ReplyDelete"This is reductionistic emergence . . . "
I stopped reading right there. No, it very obviously isn't any such thing, and I'm very well on record as an opponent of reductionism and emergentism. If you insist on reading participation as emergence, then your screen name is more apt than I realized.
@Scott:
I'm not saying that would necessarily happen, but I'm certainly saying it could, yes.
But note that the "necessarily" was part of an if-then statement which started with "it's possible God would make the universe so that building an upload caused a thinking substance to come into being". In other words, I'm not asking if the first part of the premise (that building an upload would cause a thinking substance to come into being) is necessarily true, just whether, if that were true, it would then follow as a consequence that the upload's behavior could no longer be predicted with perfect certainty from the rules that were originally programmed into the computer. Or to put it another way, do you view "being a thinking substance" and "being perfectly predictable with computable rules" as inherently contradictory traits?
As I said, as far as this point is concerned it's still possible that rational souls could operate deterministically at the appropriate level (i.e., the level of intellect and will). All I'm concerned to rule out here is purely physical determinism.
But if the physical parts had only a "virtual" existence, then it would no longer be true in a metaphysical sense that the upload's behavior was really "caused" by the lower-level physical parts, even if it could still be predicted by looking at those 1's and 0's and consulting the original rules of the program. Despite the possibility of this sort of prediction, the true metaphysical cause might be the decision of the whole thinking substance. Is it totally clear that this would still qualify as "physical determinism" in the A-T philosophy? Did Aristotle or Aquinas ever talk about the issue of prediction, or anything analogous to a Laplacian demon?
"Original rules of the program"? More psychophysical laws? I don't see how you can have "original rules" with just fundamental physics.
@rank sophist:
First, a computer lacks all signs of life. It has no nutritive or sensitive powers, and so it does not regenerate, reproduce or self-move.
In my previous comment to you I added "including one controlling a robot body moving around in the real world just like an artificial cell". So it would presumably need sensory inputs, and some form of power--if it gets its energy from solar power, how is this fundamentally less of a nutritive power than a plant's? Does A-T philosophy place strict restrictions on what might be "life", strict enough that some forms of logically possible alien beings would be ruled out? For example, if there were an alien species that wasn't capable of regenerating from injuries, would the A-T philosopher have to say they couldn't really be living things even if they had all the other traits?
If it had a rational soul, then we would have two options: either A) rational souls could exist in non-living bodies, which contradicts A-T
Actually, is that even true? What about angels? I found this section of Aquinas' Summa Theologica which says: "I answer that, The angels have not bodies naturally united to them. For whatever belongs to any nature as an accident is not found universally in that nature; thus, for instance, to have wings, because it is not of the essence of an animal, does not belong to every animal. Now since to understand is not the act of a body, nor of any corporeal energy, as will be shown later (75, 2), it follows that to have a body united to it is not of the nature of an intellectual substance, as such; but it is accidental to some intellectual substance on account of something else."
Of course an upload would still be "corporeal" in the sense that it has a sort of physical brain, but the example of angels seems to suggest that in general having a living body is not necessary to be an "intellectual substance" in Aquinas' view.
@Scott:
That doesn't seem like a purely semantic issue to me (not, mind you, that semantic issues are unimportant in their own right). The point is that mind is logically prior to "computation" in any sense of the latter term that could possibly be relevant to a theory of mind/consciousness.
I don't see why. Would you say that mind is logically prior to any mathematical function whatsoever, such as addition, so that it's somehow logically impossible that you could have matter following laws of physics involving addition (like those involving the sum of amplitudes of different electromagnetic waves) in the absence of some mind interpreting it as addition? Mathematicians would just view "computation" as a class of mathematical function, so unless this argument is so general as to apply to the idea of nature behaving in any mathematical way whatsoever, I think you must be tacking on additional concepts to "computation" beyond the ones a mathematician (or myself) would see as essential.
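A toy numerical illustration of that physical "addition" (the amplitudes and sample times are made up; in-phase superposition is just pointwise summing):

```python
import math

# Pointwise superposition of two in-phase waves of amplitude X: at each
# sampled time the combined displacement is the arithmetic sum of the two
# components, i.e. a wave of amplitude 2X. All numbers are illustrative.
X = 1.5  # amplitude of each wave, in arbitrary units
for t in (0.0, 0.1, 0.2, 0.3):
    a = X * math.sin(2 * math.pi * t)  # first wave
    b = X * math.sin(2 * math.pi * t)  # second wave, in phase with the first
    print(f"t={t}: a+b={a + b:.3f}, amplitude-2X wave={2 * X * math.sin(2 * math.pi * t):.3f}")
```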
"[T]o put it another way, do you view 'being a thinking substance' and 'being perfectly predictable with computable rules' as inherently contradictory traits?
Offhand I don't see any reason why, according to A-T, a thinking substance simply couldn't operate in such a way that "computable rules" succeeded in predicting its behavior. I tentatively think the most A-T could say here (and I agree) is that a thinking substance wouldn't be operating from or on the basis of those rules.
"Of course an upload would still be 'corporeal' in the sense that it has a sort of physical brain, but the example of angels seems to suggest that in general having a living body is not necessary to be an 'intellectual substance' in Aquinas' view."
I think your reading of Aquinas is correct here.
What explanatory work does computation do when the physical structures involved don't represent anything? 2 + 2 = 4 doesn't express, describe or explain a mathematical truth unless we apply meaning to the symbols.
@JesseM:
ReplyDelete"Would you say that mind is logically prior to any mathematical function whatsoever, such as addition, so that it's somehow logically impossible that you could have matter following laws of physics involving addition . . . in the absence of sum mind interpreting it as addition?"
I'd say that matter following the laws of physics is not carrying out the mathematical operation of addition in the sense of actively seeking the sum of two numbers. It's certainly operating in conformity with the laws of addition, and in some way it's instantiating mathematical truths about addition—and if you want to call that "addition" too, that's fine. But I'd say it's not "addition" in any sense relevant to the nature of consciousness. I'm open to argument, though.
"What explanatory work does computation do when the physical structures involved don't represent anything?"
Yeah, that's pretty much the point, I think.
Jesse,
if it gets its energy from solar power, how is this fundamentally less of a nutritive power than a plant's?
A relevant passage from Real Essentialism:
"For all living things, nutrition, growth, and reproduction are powers, and their exercise manifests the fundamental capacity and tendency of the organism to adapt to its environment (and of course to fail so to adapt when the environment triumphs over its nature). Inorganic objects do not adapt to their environment: either they persist in it due to the strength of the forces holding them together outweighing the dissipative forces in the environment, or they degrade and ultimately cease to exist when the latter outweigh the former. They do not adapt themselves - they are either maintained or destroyed."
A solar panel cannot be a nutritive power because it is not alive. It is simply an object that either persists or dissipates based on the forces acting on it; it does not act through itself, nor can it adapt itself in any way. Another agent can alter it to suit its environment, certainly, but this is just another example of transient causation. Immanent causation is what defines life, and it's what a machine can never have.
Does A-T philosophy place strict restrictions on what might be "life", strict enough that some forms of logically possible alien beings would be ruled out?
Logically possible entities, in the modal logic sense, are irrelevant to A-T.
Actually, is that even true? What about angels?
Angels are complete immaterial substances without bodies, but they can control bodies like puppets. The standard issue rational soul is properly united to living matter. Unless you're proposing that the souls of computer brains are in fact angels incognito, then what I said follows.
@rank sophist:
ReplyDelete"Logically possible entities, in the modal logic sense, are irrelevant to A-T."
They most certainly are not. They may be (mostly) irrelevant to what can and can't happen in our world, but they are utterly relevant to considerations about what an omnipotent God can and can't do, and therefore to what He might have done in our world even if we don't know it yet.
What is your definition of computation, JesseM?
Anonymous:
What explanatory work does computation do when the physical structures involved don't represent anything?
I'm not sure exactly what you mean by "explanatory work", but for example the theory of computation can tell us that if a function is computable by one system that functions as a universal computer, then it is computable by any other system that functions as a universal computer. To clarify what you mean here, could you tell me if you think the cells in a cellular automaton need to "represent" anything, or that exploring the behavior of cellular automata with different rules does any "explanatory work"? Mathematicians are potentially interested in the consequences of any mathematically-definable set of rules.
2 + 2 = 4 doesn't express, describe or explain a mathematical truth unless we apply meaning to the symbols.
Well, how would you answer the question I asked Scott about generalizing Searle's argument? Do you in fact think that Searle's argument would just as well work if we were talking about any other form of math besides computation, so for example it's meaningless to talk about electromagnetic waves obeying additive rules in the absence of a mind to observe and sum them?
Somewhat off-topic, all the talk of Chalmers has reminded me of a poem I wrote for a friend of mine who's a philosophy grad student.
What is your definition of computation, JesseM?
I would defer to mathematicians who deal with computability theory for a precise definition, but as I understand it there is a certain class of functions where the output can be calculated from the input by any one of a number of abstract "computers". The most commonly-discussed abstract computer is a universal Turing machine, but there are a number of others which can be shown to be exactly equivalent in the class of functions they can calculate (see Turing completeness), so these functions are simply labeled "computable" without reference to any specific example of an abstract machine. I think a "computation" would be the sequence of state changes an abstract computer would go through in generating the output of a computable function from the input (with these state changes governed by a finite list of rules, such as "if the read/write head moves to a square on the tape with 0 when the read/write head is in internal state 6, then it changes the 0 to a 1, changes its internal state to 3, and moves 10 squares to the left"). But to decide in general which real physical sequence of state changes counts as an "implementation" of an abstract computation, you need to have an answer to the implementation problem, though in practice we have designed plenty of examples where the mapping is obvious (for example, computers with actual read/write heads that move along actual memory tapes where the magnetization of each portion stands for a 1 or 0 on the abstract Turing machine tape).
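A minimal sketch of such a rule table and stepper; the states, symbols, and tape below are invented for illustration (and a standard Turing machine moves one square per step, so a ten-square move would abbreviate a chain of single-step rules):

```python
# A tiny Turing-machine stepper. The rule table maps
# (state, symbol) -> (symbol to write, head move, next state).
# This toy machine flips bits left to right and halts at the first blank;
# it is an invented example, not any machine discussed in the thread.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # "_" is the blank symbol
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head]
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("0110_"))  # prints "1001_"
```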
By explanatory work, I mean explaining the mind.
And, how do you go from
In our universe, a wave of amplitude X and a wave of amplitude X interfere constructively to produce a wave of amplitude 2X.
To
There are additive mathematical rules.
Because it is easy to imagine a possible world where waves simply do not interfere constructively. They just bounce or repel rather than fusing into a larger wave.
Would it follow that there are no such additive mathematical rules?
Anonymous wrote:
By explanatory work, I mean explaining the mind.
I don't think computation is meant to "explain" the mind in Chalmers' scheme, it's just that certain implementations of computations can be the cause of qualia according to the psychophysical laws. When we have physical laws that say condition A produces result B, the laws themselves don't "explain" that connection, they just note a regular relation between the two.
In our universe, a wave of amplitude X and a wave of amplitude X interfere constructively to produce a wave of amplitude 2X.
To
There are additive mathematical rules.
I'm not going from the physical observation to the meaningfulness of the notion of addition, if that's what you mean--I'm a platonist, I don't think mathematical truths depend on the specific physical facts about our universe. That fact about waves is just an example of addition having application to our world--my point is that I think it would still be an example of addition being "implemented" in the physical world even if no mind were there to think about it (at least none except the mind of God, since I'm sympathetic to the idea that platonic mathematical truths are ideas in God's mind), and similarly I think it's meaningful to talk about computations being instantiated in physical systems even if no human is thinking about the physical system as a computer.
@JesseM:
ReplyDelete"I'm a platonist, I don't think mathematical truths depend on the specific physical facts about our universe. That fact about waves is just an example of addition having application to our world--my point is that I think it would still be an example of addition being 'implemented' in the physical world even if no mind were there to think about it (at least none except the mind of God, since I'm sympathetic to the idea that platonic mathematical truths are ideas in God's mind), and similarly I think it's meaningful to talk about computations being instantiated in physical systems even if no human is thinking about the physical system as a computer."
I'm right there with you on all of that, including the idea that mathematical truths are ideas in God's intellect, but I still don't see how instantiating or implementing a mathematical truth, in and of itself, gets something any closer to having a mind. Everything instantiates mathematical truths, and we don't therefore (for that reason, that is) say that everything has a mind. But as I said, I'm open to argument.
@Scott:
I'd say that matter following the laws of physics is not carrying out the mathematical operation of addition in the sense of actively seeking the sum of two numbers. It's certainly operating in conformity with the laws of addition, and in some way it's instantiating mathematical truths about addition—and if you want to call that "addition" too, that's fine. But I'd say it's not "addition" in any sense relevant to the nature of consciousness. I'm open to argument, though.
The earlier discussion was about psychophysical laws creating a connection between a particular type of physical "implementation" of a computation and particular subjective experiences (qualia). So by analogy, wouldn't you have addition in a "sense relevant to the nature of consciousness" if there were psychophysical laws that said that when physical systems "implemented" additive sums in a certain well-defined way, this was the cause of particular qualia? For example, as an (implausible) thought-experiment one might imagine that the psychophysical laws divide all of physical space into cubic volumes, and then one of the laws is that "whenever the sum of the charge in two adjacent volumes is equal to five elementary charges, qualia X results". By the very nature of the premise, isn't this a type of addition that's relevant to the nature of (or at least the laws applying to) consciousness? Chalmers' premise is analogous, that the psychophysical laws dictate that when physical systems satisfy certain criteria for the "implementation" of an abstract computation, the unique qualia associated with that abstract computation will result.
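Just to make the toy law mechanical, it could be stated as a simple check over the charge content of adjacent cells (the grid, charge values, and threshold below are all part of the deliberately implausible hypothetical):

```python
# The toy "psychophysical law" from the thought-experiment, stated as a
# mechanical check: space is carved into cubic volumes, each containing
# some whole number of elementary charges, and whenever two adjacent
# volumes sum to exactly five, the law says qualia X results.
# All values here are hypothetical.
charges = [2, 3, 0, 5, 1]  # elementary charges per cell along one axis

for i in range(len(charges) - 1):
    if charges[i] + charges[i + 1] == 5:
        print(f"cells {i} and {i + 1}: charges sum to 5 -> qualia X results")
```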
@Scott:
I'm right there with you on all of that, including the idea that mathematical truths are ideas in God's intellect, but I still don't see how instantiating or implementing a mathematical truth, in and of itself, gets something any closer to having a mind. Everything instantiates mathematical truths, and we don't therefore (for that reason, that is) say that everything has a mind. But as I said, I'm open to argument.
Again the idea is that the psychophysical laws dictate what "counts" as an implementation for the specific purposes of giving rise to certain qualia--and we have to just accept these laws as axioms and not inquire "why" these laws as opposed to some others (as atheists would do with the laws of physics, and perhaps even theists would in some sense have to do with mathematical/logical axioms). For example, there are a lot of physical examples that could potentially count as the adding of two collections and getting five, but in my silly example in the previous comment, only one very specific type of implementation (the sum of charges in two adjacent cubic volumes of specific dimensions and locations) would "count" for giving rise to the associated qualia.
Rank Sophist: [A "natural necklace"] would suddenly have a substantial form that we could discover, just as gold and water do; and it could recur in nature.
Sure it could recur: whatever caused the first one to come into being could happen again and make some more. I don't know what you mean about discovery; some things in nature we can discover, and some we can't. A substantial form is not something you can just stick under a microscope to get a conclusive answer. The universe we inhabit seems to be laid out in an amazingly pedagogical manner, as though God were inviting us to figure it all out; He didn't have to make it that way.
If it is not a mind, then it's incapable of behaving exactly like a mind.
That is either trivially true or false. If "exact behaviour" means "doing exactly the same things in exactly the same way", then of course it never will. On the other hand, if "exact behaviour" means "produce the same [or even just similar] observable effects", then it's false. You say that "it will always necessarily break down in places that a real mind would not", but I don't know what you have in mind. Can you give an example?
Rank Sophist: If a computer became a rational substance rather than an artifact, then it would no longer be in any way like a computer.
I had a reply written to your whole message before I figured out what you mean (I think!). Is this right: you are pointing out that our hypothetical non-computer substance would not run software (like a computer) — instead it would think. It would not "process input data" like a computer — it would sense. And so on, because it is a living being, not a machine, and so it acts as a living body acts, not as a machine trying to mimic a living body. With that, of course, I agree (and I am sure so does Scott). The way in which it is like a computer or a machine is in appearance — it can cause certain empirical effects that are the same as a machine... even though it is causing those particular effects in a very different way.
@Mr. Green:
ReplyDelete"With that, of course, I agree (and I am sure so does Scott)."
Sure do.
@JesseM:
ReplyDelete"The earlier discussion was about psychophysical laws creating a connection between a particular type of physical 'implementation' of a computation and particular subjective experiences (qualia). So by analogy, wouldn't you have addition in a 'sense relevant to the nature of consciousness' if there were psychophysical laws that said that when physical systems 'implemented' additive sums in a certain well-defined way, this was the cause of particular qualia?"
Perhaps, but not in any way that explained consciousness in terms of something more fundamental (which of course Chalmers doesn't claim to do; quite the contrary). The fact (if it is one) that a quale[*] occurs when certain mathematico-physical conditions are realized isn't meant to tell us where consciousness "comes from," and if (as Chalmers seems to hold) it's just a brute fact that this is the case, there simply isn't any explanation to be had along those lines.
----
[*] Since this has come up on this blog before, let me explain for anyone who doesn't already know it that the sinular of qualia is quale, and it's pronounced "qually" rather than "quail."
Arrrrgh. "Sinular" should be "singular." (Is a "sinule" the same thing as a peccadillo?)
@JesseM:
I'm not at all sure that my previous post actually addresses your point. Let me know.
Yeah, I misinterpreted the wave thing. What exactly is the constructive interference computing?
@JesseM:
ReplyDelete"Anyway, is the distinction you are making between a "process" and a "discrete event" part of A-T philosophy or your own terminology?"
Again, on this point, the word 'evolution' is ambiguous, but insofar as evolution is a process, 'evolution' refers to a particular history, a procession of events and organisms. Necessarily there are both constant law-like causal influences (which tend, rather poetically, to be attributed to the heavens in A-T philosophy) and immediate, concrete, mundane causal agents involved in processes of natural change. Evolution qua process is neither the former nor the latter.
"can't you have a series where A causes B causes C, so B is an efficient cause of C even if it isn't primary?" - Yes, of course; but where we have an historical series, say grandfather, father, son, the grandfather is only incidentally the efficient cause of the son. The 'real' (especially immediate) efficient cause of the son is the father (and mother) - not reproductive processes or evolutionary processes (although the latter may well be involved).
Re. Darwinian evolution and genetic drift: I'm not sure what your point is. Also, genetic drift is not a Darwinian concept, is it?
In regard to your primary question, you might refer to Oderberg's article in the latest volume edited and advertised by Feser (see his blog). It sounds like Oderberg's answer is that the metaphysics excludes the possibility of the empirical result (though still not, I would assume, the possibility that we might be fooled by an empirical result if we were metaphysically ignorant).
In regard to syntax: it is not coherent for the behavior of matter to be correct or incorrect. However, it is possible for the syntax of a computer program to be correct or incorrect, similar to the way propositions can be true or false. What physical fact determines whether the syntax is correct or incorrect? I don't have any issues with proposing psychophysical laws. But, as I mentioned earlier, psychophysical laws seem incompatible with physical reductionism, and entail a form of dualism. And that vindicates Searle's argument. After all, why propose psychophysics if the four fundamental forces, the Standard Model, etc., could explain computer programs?
ReplyDelete@Scott:
Perhaps, but not in any way that explained consciousness in terms of something more fundamental (which of course Chalmers doesn't claim to do; quite the contrary). The fact (if it is one) that a quale[*] occurs when certain mathematico-physical conditions are realized isn't meant to tell us where consciousness "comes from," and if (as Chalmers seems to hold) it's just a brute fact that this is the case, there simply isn't any explanation to be had along those lines.
Yes, I agree with this--I'm not sure if I mentioned this in some previous discussion we had about Timothy Sprigge, but I'm actually inclined to some sort of panpsychism/idealism (perhaps the sort of panpsychist "double-aspect ontology" that Chalmers discusses on p. 305 of The Conscious Mind), where consciousness is just the basic "stuff" of reality and there's no point trying to explain it in terms of anything more fundamental (though I think there's still plenty to be explored about why experience always seems to be structured, and to what degree that structure might be described mathematically). I actually mentioned the point about any solution to the implementation problem not being about "explaining" consciousness in my post to Anonymous at 4:03 PM, where I wrote: 'I don't think computation is meant to "explain" the mind in Chalmers' scheme, it's just that certain implementations of computations can be the cause of qualia according to the psychophysical laws. When we have physical laws that say condition A produces result B, the laws themselves don't "explain" that connection, they just note a regular relation between the two.'
"Evolution qua process is neither the former nor the latter." - Perhaps I should have added: and an *avalanche* is the latter (a concrete, mundane agent of change). The fact that we might be able to pick out the first boulder that moves, and then the second, etc. is irrelevant with respect to causation of the effect in question (house being crushed).
@JesseM: Gotcha. I think we're all up to speed, then. Thanks for the clarification/reminders.
Scott,
They may be (mostly) irrelevant to what can and can't happen in our world, but they are utterly relevant to considerations about what an omnipotent God can and can't do, and therefore to what He might have done in our world even if we don't know it yet.
Even God can't change the Porphyrian tree and make an animal without nutritive powers. Essence is not something subject to variation.
Mr. Green,
I don't know what you mean about discovery; some things in nature we can discover, and some we can't. A substantial form is not something you can just stick under a microscope to get a conclusive answer.
I know. However, if something possesses a substantial form, then it follows that there is at least a possibility that we can discover it. The substantial form of a necklace seems like an incoherent concept to me, but perhaps it isn't.
If "exact behaviour" means "doing exactly the same things in exactly the same way", then of course it never will. On the other hand, if "exact behaviour" means "produce the same [or even just similar] observable effects", then it's false. You say that "it will always necessarily break down in places that a real mind would not", but I don't know what you have in mind. Can you give an example?
A plastic flower may look like a real flower, but closer examination reveals that it is not one. A-T's position on forms and identity entails that it is impossible to create a perfect outward copy with a different form. Just as there is no possible world in which Putnam's XYZ (water without the form H2O) exists, there is no possible world in which a computer brain can behave exactly like a real mind. It is not a real mind, and so it cannot act exactly like one.
As for breaking down, a computer brain, as a rule-following machine, can only act according to the rules and patterns that have been programmed into it. Even "learning computers" operate under this mode: they simply copy patterns from the users with which they communicate. It is metaphysically impossible for this artificial process to perfectly mirror in every respect the outward behavior of a real mind, because a real mind operates so differently. Again, no plastic flower can ever be real; and XYZ cannot exist. As a result, the computer brain will at least sometimes react in unnatural ways, even if it is a very good imitator at other times.
Is this right: you are pointing out that our hypothetical non-computer substance would not run software (like a computer) — instead it would think. It would not "process input data" like a computer — it would sense. And so on, because it is a living being, not a machine, and so it acts as a living body acts, not as a machine trying to mimic a living body. With that, of course, I agree (and I am sure so does Scott). The way in which it is like a computer or a machine is in appearance — it can cause certain empirical effects that are the same as a machine... even though it is causing those particular effects in a very different way.
This is largely correct, but you've left out one key point I was trying to make: a living computer could not be made of non-living matter anymore. By gaining the powers you mentioned, it would have to jump on the Porphyrian tree from "inanimate" to "animate", all the way down through "sensitive" and "rational". It couldn't look exactly like a computer anymore: it would have to be some kind of flesh-computer that perhaps, in some respects, resembled a normal computer. To say otherwise would be to claim that a rational animal could be non-living, which is a flat contradiction given the Porphyrian tree.
This seems relevant:
Thomas Aquinas, De operationibus occultis naturae (On the hidden workings of nature):
"On the basis of what has been argued, it seems that because artificial forms are accidents that do not follow upon the species, it is not possible that a product of art [e.g., a mind-upload robot] receives from a heavenly body in the process of its being constructed a power and activity to bring about, as if from an endowed power, natural effects that transcend the powers of the elements. For if powers of this kind belonged to works of art, they would not be consequent upon any form from the heavenly bodies [as is the case for natural substances]. The form of an artwork is nothing other than an order, arrangement and shape, and from such as these can come no power or activities of the kind we have been discussing. Thus it is clear that if an artwork brings about actions of this kind (for example, if near a sculpture snakes die or animals are rendered immobile or injured), this does not proceed from an indwelling and permanent power, but only from the power of an extrinsic agent that uses such things as instruments for its own purposes.
"Nor is it correct to say that activities of this type arise from the power of the heavens just because the heavens naturally act among the lower bodies. The particular shape a body may be given by an artist in no way makes it more or less fit to receive the imprint of a natural agent. Thus, it is impossible that pictures or sculptures that bring about remarkable effects, have their efficacy from the heavens, even granting they were made under particular constellations. Rather, to the extent they have this efficacy it is only through spirits which work through them."
And I guess what follows is also relevant:
ReplyDelete"Now just as pictures get their matter from the natural world, but their form from art, so also human words have a natural matter, namely, sounds uttered from the human mouth, but their form, that is, their signification, they get from the intellect expressing ideas through sounds of that kind. Thus, likewise, human words do not have the efficacy to change a natural body from a power of a natural cause, but only from a spiritual substance.
"Therefore, these actions that come about through words, pictures, sculptures and the like, are not natural, since they do not proceed from an intrinsic power, but rather they are *empericae* ['empirical'?], and belong to the order of superstition. But the actions we have been speaking about above that arise from the forms of bodies are natural, since they come from intrinsic principles."
@David M:
Fair enough.
I think everyone here is agreed that, for A-T, in order for a "mind upload" to result in a thinking substance, it would have to involve something more than artifice—and as a matter of fact I don't believe for a moment that it does.
But nothing in Aquinas's arguments, so far as I know, rules out as absolutely impossible that God could order a world in, for example, such a way as to bring a rational substance into being when, as Mr. Green nicely puts it, "matter is assembled in some computery-type form."
Again, the point is not that I think this actually happens; I don't. I might even agree that if it did happen it would be tantamount to a miracle (perhaps along the lines of transubstantiation, though please don't press that analogy too far).
The point is just that if it did happen, A-T metaphysics could account for it and would not thereby be disproved.
Even God can't change the Porphyrian tree and make an animal without nutritive powers.
Nothing absolutely true or false can be said about God for the simple reason that God is prior to (i.e. he precedes and surpasses) every reality that we understand, including existence.
For the sake of comparison.
Hello all,
I wonder how this phenomenon would be explained on A-T.
http://www.whydontyoutrythis.com/2013/05/jacob-barnett-14-year-old-with-asperger-syndrome-may-be-smarter-than-einstein.html
@rank sophist:
A solar panel cannot be a nutritive power because it is not alive. It is simply an object that either persists or dissipates based on the forces acting on it; it does not act through itself, nor can it adapt itself in any way. Another agent can alter it to suit its environment, certainly, but this is just another example of transient causation. Immanent causation is what defines life, and it's what a machine can never have.
None of the individual molecules which make up a cell (even if they only "virtually" exist within the cell from the perspective of the A-T philosophy) can be said to be individually "alive", but the cell is. Why couldn't a solar panel be a (virtual) part of a more complex entity that can be judged to be alive?
You say "immanent causation is what defines life, and it's what a machine can never have" but isn't this debate about whether it's within God's power to design the laws of nature such that, on a metaphysical level, when certain machine parts are brought together in the right way, a new substance is created and the result is not a machine? After all, that's what you seem to think might be going on if Craig Venter is able to create a cell from a collection of nonliving molecules. Even if some of the individual molecules might have been labeled as "molecular machines" before being integrated into a cell, you presumably would say the cell is no longer a machine despite having been synthesized from parts that were individually machines before being brought together in the right way. So why isn't it possible the same would be true for the right combination of computer and robot parts? Just because it still appears to be composed of individual parts which appear machine-like, one is free to say the whole is not actually a "machine" at all on the metaphysical level.
This brings me to another question I asked Mr. Green about terms like "artifact" or "machine" in A-T philosophy but didn't get an answer to, perhaps you or some other commenter could answer:
'in A-T philosophy is "artifact" taken to be a fundamental ontological category like a "substance", as opposed to a somewhat fuzzy and subjective human descriptive category? An A-T advocate should believe there is always a single objective truth about whether something is a substance, even if that truth isn't knowable with certainty by humans, is the same true for artifacts? If I perform some very minor modification on some naturally-occurring thing to make it more useful to me, like bending a plant stem to produce a hook to grab at something in a narrow opening my hand can't reach into, I might or might not choose to call it a "tool" or "artifact"--could I be objectively mistaken?'
Does A-T philosophy place strict restrictions on what might be "life", strict enough that some forms of logically possible alien beings would be ruled out?
Logically possible entities, in the modal logic sense, are irrelevant to A-T.
First of all, my question needn't merely be about aliens in other possible worlds--it can also be about whether A-T philosophy places any strong constraints on the biology of alien beings we might find if we went out and explored our own universe. Would you predict with absolute confidence that we would never find an alien species that couldn't repair injuries, or would you just say if we ever did find such a species it would not be "alive", or what?
(reply to rank sophist, continued)
But on the topic of modal questions about possibilities, why do you say they are irrelevant? Aquinas may not have systematically addressed the issue of which aspects of his metaphysical claims were logically necessary and which were just features of the universe God actually created, but that doesn't necessarily mean he considered the question "irrelevant"; perhaps it simply didn't occur to him. If you think there is some argument for the irrelevance of modal questions in A-T philosophy beyond just the historical fact that Aquinas didn't write much about them, please present it. Besides, Aquinas did at least address the question of what was "possible" for God to do, and argued that omnipotence meant God could do anything that wasn't a logical impossibility (I happen to know this because I was interested in the question of what various philosophers said about logical possibility, and did a little research into the subject). Look at Question 25, Article 3 here, where he distinguishes between what is "possible" relative to the powers of some specific being (like a human), and what is possible "absolutely, on account of the relation in which the very terms stand to each other" (what we would call logical possibility): "God is called omnipotent because He can do all things that are possible absolutely; which is the second way of saying a thing is possible. For a thing is said to be possible or impossible absolutely, according to the relation in which the very terms stand to one another, possible if the predicate is not incompatible with the subject, as that Socrates sits; and absolutely impossible when the predicate is altogether incompatible with the subject, as, for instance, that a man is a donkey."
Angels are complete immaterial substances without bodies, but they can control bodies like puppets. The standard-issue rational soul is properly united to living matter. Unless you're proposing that the souls of computer brains are in fact angels incognito, what I said follows.
As I said above, I don't see why it's impossible that a body built out of robotic parts could be a "living" one in A-T philosophy; but aside from that, why do you think the Aquinas quote I provided about angels suggests that an intellectual substance without a living body must be incorporeal? In the section I quoted, Aquinas didn't make any statement along the lines of "all thinking substances must have living bodies, unless they are wholly incorporeal", suggesting that it has to be one or the other. He just used angels as an example to illustrate the general principle "that to have a body united to it is not of the nature of an intellectual substance, as such; but it is accidental to some intellectual substance on account of something else." If he ever said specifically that an intellectual substance with a body must have a living body, please point out where (of course A-T philosophy is not merely the "word of Aquinas"; it's a philosophy rather than a revelation, so even if Aquinas didn't say this you may think there's a good philosophical argument for believing it in the context of A-T philosophy--in that case, please present the argument).
Artifacts are made of substances. It thus seems that they are not as ontologically fundamental as substances themselves.
In Aquinas' view (I believe), separate intellects can be accidentally joined to non-living bodies (e.g., heavenly bodies), but not artificially.
Jesse,
None of the individual molecules which make up a cell (even if they only "virtually" exist within the cell from the perspective of the A-T philosophy) can be said to be individually "alive", but the cell is. Why couldn't a solar panel be a (virtual) part of a more complex entity that can be judged to be alive?
A cell is virtual as well. A solar panel could not be virtual, since it's a macroscopic object. If anything, it would be an accident. And, even if it were an accident housed in a larger, living substance, it still would not be alive. A pacemaker isn't alive. A pacemaker can imitate living functions, just as a solar panel imitates photosynthesis; but imitation is not the real thing.
You say "immanent causation is what defines life, and it's what a machine can never have" but isn't this debate about whether it's within God's power to design the laws of nature such that, on a metaphysical level, when certain machine parts are brought together in the right way, a new substance is created and the result is not a machine?
As I said to Mr. Green, such a substance would travel so far down the Porphyrian tree that it would be unrecognizable.
After all, that's what you seem to think might be going on if Craig Venter is able to create a cell from a collection of nonliving molecules.
A cell exhibits the properties of life. If the whole entity exhibited the properties of life, then we could say that a new, living substance had been created. But the Porphyrian tree means that such a substance, again, would no longer be anything like a machine.
in A-T philosophy is "artifact" taken to be a fundamental ontological category like a "substance", as opposed to a somewhat fuzzy and subjective human descriptive category?
A bit of both, honestly. It's fundamental in the sense that there is something that it is to be an artifact--an objective, ontological truth--but this or that artifact is something whose purpose is defined by humans. The latter will necessarily be vague and occasionally subjective.
Would you predict with absolute confidence that we would never find an alien species that couldn't repair injuries, or would you just say if we ever did find such a species it would not be "alive", or what?
First, if an alien species was incapable of regeneration, then the cells of every member of that species would decay in short order. As a result, it would cease to exist. Anyway, I can say with absolute certainty that we could never find an animal that lacked sensitive powers, a plant that lacked nutritive powers or an animal that lacked nutritive powers. Otherwise, we would be claiming that we discovered a non-plant plant and a non-animal animal, which is contradictory.
If you think there is some argument for the irrelevance of modal questions in A-T philosophy beyond just the historical fact that Aquinas didn't write much about them, please present it.
For A-T, all possible worlds have an extreme set of limitations thanks to the ontic and ontological categories into which all things fall. Essence, the Porphyrian tree, substance, artifact, accident and so on--these will apply in all possible worlds. This makes modal thought experiments more-or-less useless in the quest to understand the actual world.
Besides, Aquinas did at least address the question of what was "possible" for God to do, and argued that omnipotence meant God could do anything that wasn't a logical impossibility
Indeed. However, the topic under debate falls under the "impossible absolutely" banner, like a man being a donkey.
He just used angels as an example to illustrate the general principle "that to have a body united to it is not of the nature of an intellectual substance, as such; but it is accidental to some intellectual substance on account of something else."
You're misunderstanding something, here. A separated intelligence (i.e. a rational soul without a body) is not an intellectual substance: it is no kind of substance. The definition of "angel" is "intellectual substance", since it is a substance that is intellectual in nature.
A rational soul is naturally united to a body with nutritive and sensitive powers. The combined entity counts as a substance. A rational soul cannot be united to anything other than a living body on pain of contradiction. An angel, on the other hand, is not united to anything: it controls a body like a puppetmaster--like Descartes' res cogitans. It is, in theory, capable of manipulating a machine and acting as its "mind" without that machine necessarily possessing the attributes of a living thing. A rational soul does not operate anything like this, though. It unites with matter into a whole substance that is not dualistic. This substance will necessarily have certain traits--nutritive and sensitive powers, for instance--thanks to the Porphyrian tree. If a rational soul controlled matter like the res cogitans, then it would be possible for a rational soul to exist within a machine. But it doesn't work this way, and so the thought experiment is a contradiction.
"For A-T, all possible worlds have an extreme set of limitations thanks to the ontic and ontological categories into which all things fall. Essence, the Porphyrian tree, substance, artifact, accident and so on--these will apply in all possible worlds. This makes modal thought experiments more-or-less useless in the quest to understand the actual world."
'The' Porphyrian tree, or 'a' Porphyrian tree?
Also, isn't the conclusion a non sequitur? What if we are interested in modal questions with regard to the actual world?
@rank sophist:
A cell is virtual as well.
I was actually thinking of a single-celled organism in the comment you're responding to there; but yes, I understand that in the case of one of our own cells, the cell would be virtual.
A solar panel could not be virtual, since it's a macroscopic object
"Macroscopic" is not incompatible with being virtual--aren't the organs of a living person virtual too? As for calling it an "object", if by that you mean you mean something that's a complete "thing" on its own and not just a virtual part of a complete thing (I'm getting a little confused by the terminology so I'm not sure exactly how to translate "complete thing", but by this I mean whatever it is that the virtual parts are parts of), then you're just making a question-begging assertion. What features of the solar panel make you completely sure that it couldn't just be a virtual part of some more complete thing?
A cell exhibits the properties of life. If the whole entity exhibited the properties of life, then we could say that a new, living substance had been created. But the Porphyrian tree means that such a substance, again, would no longer be anything like a machine.
But by "no longer anything like a machine" are you talking about empirically observable properties or some kind of metaphysical essence? For any set of empirically observable properties, like self-repair or ability to move around or respond differently depending on its environment or extract energy from some external source to power its functions, it would probably be possible to build something that performs all these functions yet appears "machine-like" in various ways (being made of hard non-carbon-based materials, having gears and wires, etc.). If you're talking about some metaphysical essence separate from observable properties, then again the question is what stops God from making it so that something superficially "machine-like" has this essence?
A bit of both, honestly. It's fundamental in the sense that there is something that it is to be an artifact--an objective, ontological truth--but this or that artifact is something whose purpose is defined by humans. The latter will necessarily be vague and occasionally subjective.
But is there an objective, ontological truth about whether some arrangement of matter is an artifact, leaving aside questions of its precise purpose? For an ambiguous case like a plant stem that I've bent to help me pull something in a narrow opening, if everyone agreed "yes, the fact that you've modified it for a useful purpose makes it an artifact" or if everyone agreed "no, that modification is too minor to really qualify it as an artifact, after all you might easily have just found a stem that was bent by an animal and used it the same way", could they be objectively wrong? If your answer is yes, then are there clear criteria that can determine whether something is an artifact based on empirically-observable properties, or is it the sort of question only God can be sure of the answer to in ambiguous cases?
First, if an alien species was incapable of regeneration, then the cells of every member of that species would decay in short order.
The decay time of complex molecules depends on nitty-gritty details of physics which, from God's point of view, would presumably be contingent. And even without talking about different laws of physics, people have speculated about alien biophysics, like silicon-based life rather than carbon-based, which might have a longer decay time. As long as the time to reproduce is shorter than the time to decay, this sort of decay need not doom the species even if it dooms individuals.
(reply to rank sophist, continued):
Besides, this is a practical argument, not an argument about what is logically necessary to the very concept of "life" or "intelligence". If you are claiming it's logically impossible for God to create an intelligent embodied being whose body doesn't have the capacity for self-repair, you need to show that there is something in the very concept of "intelligent embodied being" that creates a logical contradiction if you try to unite it with the concept of an entity that doesn't self-repair, not just that in practice such a being would die too young to acquire much understanding.
For A-T, all possible worlds have an extreme set of limitations thanks to the ontic and ontological categories into which all things fall. Essence, the Porphyrian tree, substance, artifact, accident and so on--these will apply in all possible worlds. This makes modal thought experiments more-or-less useless in the quest to understand the actual world.
If you can show that violation of any aspect of the Porphyrian tree creates a logical contradiction, that's fine; it just means that all logically possible worlds will fit this tree. But where would I find such an argument? Does Aquinas, or one of his later interpreters, claim to present such a logical contradiction, and if so can you give a reference, or present the argument yourself?
Anyway, I'm confused about how the Porphyrian tree would show that any thinking substance united with a body must be united with a body that matches your criteria for "animal". I'm not sure if the tree shown here is accurate, but if so, it seems to show that a thinking substance is not a sub-type of the genus animal, but rather a different species of "substance" from all extended physical substances, which would seem to suggest that human beings represent a sort of marriage of two entirely different branches of the tree. Is that true, and if so, what are the rules that determine how different branches can be "married" in this way, if it's not just a matter of looking at the hierarchy itself?
Indeed. However, the topic under debate falls under the "impossible absolutely" banner, like a man being a donkey.
If that's so then it should be possible to give an argument, or reference to someone else's argument, showing explicitly how analyzing the concept of a thinking being shows that if it is embodied at all, it would be logically contradictory for it to be embodied in a body that doesn't match your criteria for an "animal". Can you do so?
You're misunderstanding something, here. A separated intelligence (i.e. a rational soul without a body) is not an intellectual substance: it is no kind of substance. The definition of "angel" is "intellectual substance", since it is a substance that is intellectual in nature.
What about when a person's soul is united to a body? Is either the whole person or the soul an "intellectual substance"? It seems like Aquinas says it is here, but I might be misunderstanding.
A rational soul is naturally united to a body with nutritive and sensitive powers. The combined entity counts as a substance. A rational soul cannot be united to anything other than a living body on pain of contradiction.
So what is the contradiction, exactly?
When you infer from observation, you've left observation, along with any empirical data observed.
And the Reason-Theists in the objectivist camp are as blunted about this as the rest of the naturalists and materialists, however much the three categories may intersect.
We're still waiting on that criterion for distinguishing "true" from merely "determined by the specified factors" when talking about reductive theories, by the way.
Scott: Somewhat off-topic, all the talk of Chalmers has reminded me of a poem I wrote for a friend of mine who's a philosophy grad student.
Heh. (1), (2), (1), (2) indeed!
Anonymous: I wonder how this phenomenon would be explained on A-T.
Some people are smarter than others? (I don't see anything special to A-T about the situation.)
JesseM: If I perform some very minor modification on some naturally-occurring thing to make it more useful to me, like bending a plant stem to produce a hook to grab at something in a narrow opening my hand can't reach into, I might or might not choose to call it a "tool" or "artifact"--could I be objectively mistaken?
I don't think "artifact" has a special technical definition; in these contexts, I've been using it to distinguish a single substance from something that consists of several substances. But that's a bit sloppy, as your example shows. "Substance" is ontologically fundamental for A-T: any actual, real thing is either a substance (something that has its own intrinsic unity) or a group of substances (united under some accidental, extrinsic form).
Typically, an artifact is made up of multiple substances arranged in such a way as to work together; this arranging is imposed from outside, i.e. it is not something that follows from the nature of the substances themselves. (Including in the obvious way: a group of substances doesn't have a nature; it has several natures, one each. Of course, if such a machine works, it will be because the individual natures involved are capable of working together when brought together that way.) Your bent plant is a very simple machine consisting of only one substance, but still the accidental form of being bent into that particular shape was something you applied externally (by artifice--admittedly, not very impressive artifice, but artifice all the same), hence it can rightly be said to be an artifact.
We can go further: you can use a rock as a paperweight without even having to bend it. It is a single substance, but still could be called (perhaps loosely) an "artifact" because even though you have not modified the rock, to use it as a paperweight is to apply a finality extrinsically. A rock, as a substance, has the intrinsic finality of "being heavy", but is not by nature a paperweight per se.
(There are also arbitrary groups of substances, such as a collection of people who happen to be in the same place. A "crowd" is not an artifact, but is an accidental arrangement of a bunch of individual substances.)
Rank Sophist: Even God can't change the Porphyrian tree and make an animal without nutritive powers. Essence is not something subject to variation.
Dunno where the variations came from, but the Porphyrian tree is descriptive, not prescriptive.
A plastic flower may look like a real flower, but closer examination reveals that it is not one.
Maybe. It depends how closely we are able to examine the thing under consideration. The artificial flower fooled Solomon, but not the bee. In this case, we are explicitly restricting ourselves to a certain class of measurable physical properties, so there is no reason in principle why different forms might not produce the same range of effects within that particular limitation.
It is metaphysically impossible for this artificial process to perfectly mirror in every respect the outward behavior of a real mind, because a real mind operates so differently.
Perhaps, but that doesn't explain why it should be so. A chess-playing computer operates very differently from a chess-playing human, but it plays successfully nevertheless. Indeed, computers have pretty much outdone humans by this point. (I guess "being much better" is a way in which they are distinguishable from humans!) But the point is that the way something does or does not operate cannot per se tell us what effects it can cause in a certain limited domain.
As a result, the computer brain will at least sometimes react in unnatural ways, even if it is a very good imitator at other times.
Now we're back to the practicalities: human beings sometimes act in unnatural ways, so a simulation that wasn't perfect would in some ways be better than one that was "perfect". (There's certainly no problem in principle in getting a machine to pass the Turing test, say.)
This is largely correct, but you've left out one key point I was trying to make: a living computer could not be made of non-living matter anymore. [...] It couldn't look exactly like a computer anymore: it would have to be some kind of flesh-computer that perhaps, in some respects, resembled a normal computer.
Yes, I agree that a living "computer" would consist of living matter... not metal and plastic. As to what it would look like--that depends what sort of parts it has virtually. Human bodies look like big piles of atoms to the physicist, and presumably [though not necessarily!] so would our hypothetical living not-quite-computery organism. Might those atoms appear to be arranged like pieces of metal and plastic? All I can say is that there is no contradiction involved.
Jesse: Why couldn't a solar panel be a (virtual) part of a more complex entity that can be judged to be alive?
Right, I would say it could. Of course, in that case it wouldn't really be a "solar panel" (that is, it wouldn't be a machine or artifact, as implied by the phrase "solar panel").
Does A-T philosophy place strict restrictions on what might be "life", strict enough that some forms of logically possible alien beings would be ruled out?
Not in that sense... RS said a substance can't have a non-living body, because that's just what it means to be a living body: a mind or soul is alive, and so if it has any kind of body at all, it is a living body. An angel manipulating matter does not have a body, any more than I have your body if I push you around. So the possibilities are, given a seemingly thinking computer: it's just a machine, but it's being manipulated by some kind of angel (i.e. an intellectual substance that is separate from the machine); it is not a machine but a living being (i.e. a single intellectual substance that includes a corporeal part); or it is just a machine with no intellect (even if it does a good enough imitation to fool us).
What about when a person's soul is united to a body? Is either the whole person or the soul an "intellectual substance"?
A person is an intellectual substance because he's a substance, and he's intellectual (as in, has an intellect). Of course, a human person is not merely an intellect, in the sense that an angel is. Humans have bodies too, or rather, are naturally corporeal, i.e. are "supposed to" have bodies. When you die, you lose your body, so all that's left of you is your soul, or intellect; that is the "whole" you insofar as there is nothing else that is part of you at that time, but it is also not the whole you, insofar as you are incomplete without a body. If you get resurrected, you are not a substance plus some matter; you are still just one substance, which incorporates the matter of your (living) body.
@JesseM:
ReplyDelete"If you can show that violation of any aspect of the Porphyrian tree creates a logical contradiction that's fine, it just means that all logically possible worlds will fit this tree. But where would I find such an argument?"
Nowhere. An (not "the") Arbor Porphyriana is not intended to involve any sort of metaphysical necessity. As Mr. Green says, it's descriptive, not prescriptive.
@Mr. Green:
ReplyDelete"So the possibilities are, given a seemingly thinking computer: it's just a machine, but it's being manipulated by some kind of angel (i.e. an intellectual substance that is separate from the machine); it is not a machine but a living being (i.e. a single intellectual substance that includes a corporeal part); or it is just a machine with no intellect (even if it does a good enough imitation to fool us)."
I agree. Again, my strong hunch is that the third is the one that would actually happen, but the first two aren't absolutely ruled out by A-T metaphysics.