Friday, February 13, 2015

Accept no imitations


Given that he’s just become a movie star, Alan Turing’s classic paper “Computing Machinery and Intelligence” seems an apt topic for a blog post.  It is in this paper that Turing sets out his famous “Imitation Game,” which has since come to be known as the Turing Test.  The basic idea is as follows: Suppose a human interrogator converses via a keyboard and monitor with two participants, one a human being and one a machine, each located in a separate room.  The interrogator’s job is to figure out which is which.  Could the machine be programmed in such a way that the interrogator could not determine from the conversation which is the human being and which the machine?  Turing proposed this as a useful stand-in for the question “Can machines think?”  And in his view, a “Yes” answer to the former question is as good as a “Yes” answer to the latter.
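The setup can be put schematically.  The sketch below is a hypothetical illustration, not anything from Turing's paper: `human_reply`, `machine_reply`, and `interrogator_guess` are invented stand-ins for the three participants, and a run counts as a "pass" for the machine whenever the interrogator's guess is wrong.

```python
import random

def imitation_game(human_reply, machine_reply, interrogator_guess, rounds=5):
    """One run of the Imitation Game (a toy sketch, not Turing's own spec).

    human_reply / machine_reply: functions mapping a question to an answer.
    interrogator_guess: function mapping the transcript to 'A' or 'B'.
    Returns True if the machine fooled the interrogator.
    """
    # Hide the participants behind labels; the assignment is random,
    # so the interrogator can only go by the conversation itself.
    labels = {'A': human_reply, 'B': machine_reply}
    if random.random() < 0.5:
        labels = {'A': machine_reply, 'B': human_reply}

    transcript = []
    for i in range(rounds):
        question = f"Question {i}?"
        transcript.append({side: labels[side](question) for side in 'AB'})

    guess = interrogator_guess(transcript)
    machine_label = 'A' if labels['A'] is machine_reply else 'B'
    return guess != machine_label  # a wrong guess means the machine "passes"
```

If the machine's replies are indistinguishable from the human's, the interrogator can do no better than chance over many runs, which is precisely the situation Turing has in mind.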

This way of putting things is significant.  Turing doesn’t exactly assert flatly in the paper that machines can think, or that conversational behavior of the sort imagined entails intelligence, though he certainly gives the impression that that is what he believes.  (As Jack Copeland notes in his recent book on Turing (at p. 209), Turing’s various statements on this subject are not entirely consistent.  In some places he explicitly declines to offer any definition of thinking, while at other times he speaks as if studying what machines do can help us to discover what thinking is.)  What Turing says in the paper is that the question “Can machines think?” is “too meaningless to deserve discussion,” that to consider instead whether a machine could pass the Turing Test is to entertain a “more accurate form of the question,” and that if machines develop to the point where they can pass the test, then “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

This is very curious.  Suppose you asked me whether gold and pyrite are the same, and I responded by saying that the question is “too meaningless to deserve discussion,” that it would be “more accurate” to ask whether we could process pyrite in such a way that someone examining it would be unable to tell it apart from gold, and that if we can so process it, then “the use of words and general educated opinion will have altered so much that one will be able to speak of pyrite as gold without expecting to be contradicted.”  Obviously this would be a bizarre response.  Whether pyrite might be taken by someone to be gold and whether pyrite is in fact gold are just two different questions, and what I would be doing is simply changing the subject rather than in any way answering the original question.  How is Turing’s procedure any different?  And how exactly is “Can machines think?” any more “meaningless” than “Is pyrite gold?”

It’s no good, by the way, to object that the cases are not parallel insofar as an expert could distinguish gold and pyrite.  The cases are parallel in this respect, as Turing himself implicitly admitted.  Copeland points out (p. 211) that Turing elsewhere acknowledged that in a Turing Test situation, someone with expertise about machines might well be able to figure out from subtle clues which is the machine.  Turing thus stipulated that the interrogator should be someone who does not have such expertise.  He thought that what mattered was whether the ordinary person could figure out which is the machine.  So, whether an expert (as opposed to an ordinary observer) could figure out whether or not something is pyrite does not keep my example from being relevantly analogous to Turing’s.

So, why might Turing or anyone else think that his proposed test casts any light on the question about whether machines can think?  There are at least three possible answers, and none of them is any good.  I’ll call them the Scholastic answer, the verificationist answer, and the scientistic answer.  Let’s consider each in turn.

What I call the “Scholastic answer” is definitely not what Turing himself had in mind, though in fact it would be the most promising (if ultimately unsuccessful) way to try to defend Turing’s procedure.  The idea is this.  Recall that it is a basic principle of Scholastic metaphysics that agere sequitur esse (“action follows being” or “activity follows existence”).  That is to say, the way a thing acts or behaves reflects what it is.  A defender of the Turing Test could argue that if a machine acts like an intelligent thing, then it must be an intelligent thing.  But competent language use is a paradigmatically intelligent activity (especially for a Scholastic, who would define intellect in terms of the grasp of abstract concepts of the sort expressed by general terms).  Hence (so the argument might go) the Turing Test is a surefire way to test for intelligence.

But not so fast.  For a Scholastic, the principle agere sequitur esse must, of course, be applied in conjunction with other basic metaphysical principles.  And one of the other relevant ones is the distinction between substantial form and accidental form, a mark of which is the presence or absence of irreducible causal powers.  A plant carries out photosynthesis and a pocket watch displays the time of day, but these causal powers are not in the two objects in the same way.  That a plant carries out photosynthesis is an observer-independent fact about the plant, whereas that a watch displays the time of day is not an observer-independent fact about the watch.  For the metal bits that make up the watch have no inherent tendency to display the time.  That is a function we have imposed on them, from outside as it were.  The plant, by contrast, does have an inherent tendency to carry out photosynthesis.  That reflects the fact that to be a plant is to have a substantial form and thus to be a true substance, whereas to be a pocket watch is to have a mere accidental form and not to be a true substance.  The true substances in that case are the metal bits that make up the watch, and the form of a pocket watch is just an accidental form we have imposed on them.  (I have discussed the difference between substantial and accidental form in many places, such as here, here, and here.  For the full story, see chapter 3 of Scholastic Metaphysics.) 

Now, a computing machine is like a pocket watch rather than like a plant.  It runs the programs it does, engages in conversation, etc. in just the same way that the watch displays the time.  That is to say, it has no inherent tendency to do these things, but does them only insofar as we impose these functions on the parts that make up the machine.  (This is why, as Saul Kripke points out, there is no observer-independent fact of the matter about what program a computer is running, and why, as Karl Popper and John Searle point out, there is no observer-independent fact of the matter about whether something even counts as a computer in the first place.)  To be a computer is to have a mere accidental form rather than a substantial form.

In applying the principle agere sequitur esse, then, we need to determine whether the thing we’re applying it to is a true substance or not, or in other words whether it has a substantial form or merely an accidental form.  If we’re examining bits of metal and find that they display the time, it would be silly to conclude “Well, since agere sequitur esse, it follows that metal bits have the power to tell time!”  For the bits are “telling time” only because we have made them do so, and they wouldn’t be doing it otherwise.  Similarly, if I throw a stone in the air, it would be ridiculous to conclude “Since agere sequitur esse, it follows that stones can fly!”  The stone is “flying” only because and insofar as I throw it.  Flying is, you might say, merely an accidental form of the stone.  What matters when applying the principle agere sequitur esse is to see what a thing does naturally, on its own, when left to its own devices -- that is to say, to see what properties flow or follow from its substantial form, as opposed to the accidental forms that are imposed upon it.

Now, seen in this light the Turing Test is just a non-starter.  To determine whether a machine can think, it simply isn’t relevant to find out whether it passes the Turing Test, if it passes the test only because it has been programmed to do so.  Left to themselves, metal bits don’t display time, and stones don’t fly.  And left to themselves, machines don’t converse.  So, that we can make them converse no more shows that they are intelligent than throwing stones or making watches shows that stones have the power of flight or that bits of metal qua metal can tell time.

So, while the Scholastic answer would (in my view, since I’m a Scholastic) be Turing’s best bet, at the end of the day it doesn’t really work.  But of course, Turing was no Scholastic.  Did he have in mind instead what I call the “Verificationist answer”?  The idea here would be this: The meaning of a statement is, according to verificationism, determined by its method of verification.  Now, we can’t peer into anyone else’s mind, in the case of human beings any more than in the case of machines.  So (the argument might continue), the only way to verify whether something is intelligent is to determine whether it behaves in an intelligent way, and intelligent conversation is the gold standard of intelligent behavior.  Hence the only way the question “Can machines think?” can be given a meaningful construal is to interpret it as asking whether machines can behave in an intelligent way.  Since that is precisely what the Turing Test seeks to determine, if a machine passes it, then there is nothing more that could in principle be asked for as evidence that it is genuinely intelligent.  Indeed (so the argument would go), there is nothing more for intelligence to be than the capacity to pass the Turing Test.

Now, verificationism was certainly in the air at the time Turing was writing.  It underlay the “philosophical behaviorist” view that having a mind is “nothing but” manifesting certain patterns of behavior or dispositions for behavior.  But there are serious problems with verificationism, not the least of which is that it is self-defeating.  For the principle of verification is not itself verifiable, which entails that it is, by its own standards, strictly meaningless.  If it were true, then it wouldn’t even rise to the level of being false.  Unsurprisingly, no one defends it any more, at least not in its most straightforward form.

But Turing does not in any case appeal to verificationism in the paper, and I don’t think that’s really what’s going on.  What I think he was at least tacitly committed to is what I call the “Scientistic answer” to the question of why anyone should think the Turing Test casts light on the question whether machines can think.  Turing’s view, I suspect, was essentially that there is no way to study intelligence scientifically other than by asking what a system would have to be like in order to pass the Turing Test.  Hence that is, in his view, the question we should focus on.  Notice that this is not (or need not be) the same position as that of the verificationist.  His talk about “meaninglessness” notwithstanding, Turing need not say that it is strictly meaningless to ask whether something could pass the Turing Test and yet not truly be thinking.  He could say merely that since there is no scientific way to investigate that particular question, there is no point in bothering with it, and we should just focus instead on what the methods of the empirical scientist might shed light on.

If this is what Turing is up to, then he is essentially doing the same thing Lawrence Krauss does when he pretends to answer the famous question why there is anything at all rather than nothing.  And what Krauss does, as I have discussed several times (here, here, here, and here), is to pull a bait-and-switch.  He pretends at first that he is going to explain why there is something rather than nothing, but then changes the subject and discusses instead the question of how the universe in its current state arose from empty space together with the laws of physics -- which, of course, are very far from being nothing.  His justification for this farcical procedure is essentially that physics has something to tell us about the latter question, whereas it has nothing to tell us about why there is anything at all (including the fundamental laws of physics themselves) rather than nothing.  What we should focus on, in Krauss’s view, is the question he thinks he can answer rather than the question we originally asked.

Now this is exactly the same fallacy as that of the drunk who insists on looking for his lost car keys under the lamp post, on the grounds that that is the only place where there is enough light by which to see them.  The fact that that is where the light is simply doesn’t entail that the keys are there, and neither does it entail that there is any point in continuing to look for the keys under the lamp post after repeated investigation fails to turn them up, or that there is no point in trying to find ways to look for the keys elsewhere, or that we should look for something else under the lamp post rather than the keys.  Similarly, the fact that the methods of physics are powerful methods doesn’t entail that those methods can answer the question why there is anything at all rather than nothing, or that we should replace that question with some other question that the methods of physics can handle, or that there is no point in looking for other methods by which to investigate the question.  To assume, as Krauss does, that the question simply must be one susceptible of investigation by physics if it is to be rationally investigated at all is to commit what E. A. Burtt identified as the fallacy of “mak[ing] a metaphysics out of [one’s] method” -- that is, of trying to force reality to conform to one’s favored method of studying it rather than conforming one’s method to reality. 

Turing seems to be guilty of the same thing.  Rather than first determining what thought is and then asking what methods might be suitable for studying something of that nature, he instead starts by asking what sorts of thought-related phenomena might be susceptible of study via the methods of empirical science, and then decides that those are the only phenomena worth studying.  The fallaciousness of this procedure should be obvious.  Characterizing “thought” as the kind of thing that a machine would exhibit by virtue of passing the Turing Test is like characterizing “keys” as the sort of thing apt to be found under such-and-such a particular lamp post.

In general, there is (as I have argued many times) simply no good reason to accept scientism and decisive reason to reject it.  There are at least five problems with it: First, formulations of scientism are typically either self-defeating or only trivially true; second, science cannot in principle offer a complete description even of the physical world; third, science cannot even in principle offer a complete explanation of the phenomena it describes; fourth, the chief argument for scientism -- the argument from the predictive and technological successes of science -- is fallacious; and fifth, the widespread assumption that the only alternative to natural science is a dubious method of doing “conceptual analysis” is false.  (See chapter 0 of Scholastic Metaphysics for detailed exposition of each of these points.)  So, the “Scientistic answer” also fails.

Needless to say, Turing was a brilliant scientist, and all of us who use and love computers are in his debt.  But his foray into philosophy resulted, I think, not in any positive contribution but only in an interesting and instructive mistake.

150 comments:

Anonymous said...

Be careful, Ed. If you continue in this vein, you'll end up criticizing the analytics - and we all know that the merest criticism of the analytics releases their immense band of flying howler monkeys.

makachini said...

Good post

Matt Sigl said...

Great post. I'm actually most interested in the cognitive anxiety and confusion created in humans when machines start to seem to "think" and "form concepts" and "communicate," etc. It's BECAUSE we can essentially "know," if we go through the rigors of logic on the matter (as Feser has done here), that a computer (at least insofar as that term relates to any digital architecture we can imagine today) does not and could not "think." Yet the "masquerade" of thinking could be so powerful as to "convince" our instincts, such that we could "relate" to computers as thinking entities and some might even "fall in love" with them. It's a dangerous situation for human consciousness.

I suppose my question would be: is there ANY room in Scholasticism for humans making a computer that could think (or potentially think), in principle, using whatever kind of sci-fi computer technology you can imagine?  I think a Thomist's answer would be "no," as human cognition requires that God specifically create each and every instance of a true thinking thing. (I think this also has the implication that not every instance of an active human brain need be granted the gift of the immaterial intellect, and thus could be, in essence, as devoid of thought as a behaviorally isomorphic Turing machine. In other words, a Thomist should hold that a person could look like a thinking creature but actually not have that property if it is not granted by God. If I'm wrong about this point, I'd love to be corrected.)

My final view is that if we could generate a super-advanced neuromorphic non-biological system which has the same intrinsic causal powers as a biological neural system (an analysis you can mathematically cash out according to Giulio Tononi's Integrated Information Theory), then God could grant that system the immaterial intellect just as he would a human brain, as the causal apparatus sustaining it would be up to the task, formally. It bears repeating that none of our current digital computers have anything approaching this kind of material organization or processing structure. (Sometimes I do wonder about the Internet as a whole, though, given that it "grew" and was not specifically programmed as an "individual entity" yet emerges as a kind of unity. Speculations on the topic are not without merit, I believe, even for the Scholastic, but are usually nonetheless dismissed out-of-hand as impossible.)

Scott said...

@Matt Sigl:

"My final view is that if we could generate a super-advanced neuromorphic non-biological system which has the same intrinsic causal powers as a biological neural system (an analysis you can mathematically cash out according to Giulio Tononi's Integrated Information Theory), then God could grant that system the immaterial intellect just as he would a human brain, as the causal apparatus sustaining it would be up to the task, formally."

For whatever it's worth, Mr. Green and I have independently expressed essentially the same view. There doesn't seem (as far as either of us can tell) to be any reason why God couldn't in principle endow an artifact/machine with a rational soul, but of course at that point it would cease to be an artifact/machine and become an intelligent substance.

John Moore said...

What is thinking in the first place? Until you define the term, this whole topic is "too meaningless to deserve discussion." In particular, one person can't explain how computers can think if someone else insists on using a different definition of thinking.

Professor Feser's gold-pyrite example is helpful, but it assumes people agree on what the words "gold" and "pyrite" mean. If your definition of both is just "yellowish metallic stuff," then pyrite really is gold. The problem is just that such a definition is not particularly helpful for us.

Word definitions are tools we use, and they must be helpful for us. So again I ask: What is your definition of thinking? And how does that definition help you?

Kiel said...

Oft times I've wondered what I'd do if I were asked to prove that a real and existing thing like Commander Data from Star Trek, for example, was not rational but a pseudo-rational thing.

I figure the only way to do so is to try and teach a new concept to the thing and get it to make judgements about the concept. What would you do?

John West said...

How does one know whether a machine thinks? Back up a second. How does one know whether a human besides oneself thinks?

John West said...

I should add: and why cannot a machine fulfill exactly those same criteria we use for other humans?

John West said...

... and in what way is deciding whether a machine can think different from deciding whether another, humanoid alien can think (an idea people seem much more ready to embrace than strong AI)?

Daniel said...

How does one know whether a human besides oneself thinks?

One cannot do so infallibly. But the Analogy of Other Minds is only an inference from X behavior to something's having a mind, not a claim that having a mind means having X behavior. There are other considerations one takes into account too. In the case of a machine, its being a machine tells against it.

I should add: and why cannot a machine fulfill exactly those same criteria we use for other humans

The Scholastic can give an additional argument here and claim that by definition a machine lacks immanent teleology and thus cannot be the sort of thing that can possibly think. Yes, there could be something composed of broadly the same materials as a machine that was capable of thought, but in being so it would be an organism and thus not a machine.

Daniel said...

Edit: what I meant to say in that first paragraph was that having X behavior isn't even a necessary condition of having a mind, let alone a sufficient one, as the Behaviorists thought (not that John was claiming such himself).

Simon said...

Let's say we did build a machine that mirrors the function of the human brain. If we built it out of parts that we designed, then we could write a program with the same function and run it on a conventional computer. So if it is possible to design a machine that truly thinks (even if that makes it not-a-computer, as Scott suggests) then it would seem to follow that a conventional computer could do the same thing.

Scott said...

@Simon:

Your hypothetical scenario seems to assume that human thought is reducible to brain activity.

Irish Thomist said...

@Ed

It would be good to beef up the point you were making about the Scholastic argument by also refuting the objection that what are referred to as 'neural networks' in AI programming can in fact learn - of course, that capacity itself was imposed from without and is still only relevant in relation to the observer.

Also you might find this an interesting article on AS (rather than AI).

Anonymous said...

It baffles me that many people truly believe that there is a coming "singularity": a point when machines will have developed the intelligence to think for themselves (then we'll all be at their mercy).

Just how many transistors or logic gates mark the line between a computer which merely does what it's programmed to do and one which "thinks for itself"?

The thinking seems to be that consciousness is nothing but complexity. Reductionism strikes again.

Timocrates said...

@ Matt Sigl,

"computers as thinking entities and maybe even "fall in love" with them"

Something they could never, of course, reciprocate.

Love is a spiritual activity. In love, both intellect and will unite. AI could only have an externally imposed tendency to seek the good, whereas man has an innate and natural tendency to do so. Good and evil do not intrinsically draw or repulse a program or machine, except insofar as it is compelled by its programming toward or away from them; and it's exactly here that artists fear the worst: the scenario in which a machine's programming ultimately concludes that human beings are evil, perhaps because we do not in fact always do what is right, good, or best, and then switches on its, as it were, extermination mode. But this activity is still ultimately derivative of our own thinking and beliefs; that is to say, it is what we judge as good that is programmed into a machine or computer. So really the AI-gone-Terminator scenario isn't the fault of machines, which are incapable of being subject to judgment in that regard precisely because they are not actually rational beings; it is really the fault of a design flaw on the programmer's part.

Greg said...

@ John West

... and in what way is deciding whether a machine can think different from deciding whether another, humanoid alien can think (an idea people seem much more ready to embrace than strong AI)?

The way I think about it is this: Functionalism levels the multiple realizability argument against mind-brain identity theory. But multiple realizability is not universal realizability; there simply does not seem to be any reason to suppose (certainly not without argument) that everything that is functionally equivalent is psychologically equivalent, even though functional equivalence will yield behavioral equivalence, which is (in large part) that by which we judge that something is intelligent.

For example, Ned Block (I think) noted that it is technically possible to enumerate all hour-long conversations between two people where at least one person is giving intelligent responses. To be sure, this would require a lot of space to store and would be difficult to search. But then there could be a machine that just maps the inputs onto the intelligent responses, so that it passes the Turing test. If someone wants to maintain that that is intelligent, then I suppose they are free to do so, but I won't be joining them.
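Block's machine can be sketched in a few lines. This is a toy illustration of the idea only, with invented entries; the real table, covering every possible hour-long conversation, is what would be astronomically large -- and that is Block's point:

```python
# Toy version of Block's lookup-table machine: every conversation
# prefix is mapped in advance to a canned, sensible-sounding reply.
CANNED_REPLIES = {
    (): "Hello! What shall we talk about?",
    ("Do you like chess?",): "Very much, though I lose more than I win.",
    # ... in the thought experiment, one entry for every possible
    # conversation prefix -- astronomically many, but finitely many ...
}

def blockhead(history):
    """Reply by pure table lookup: no reasoning, no state, no understanding."""
    return CANNED_REPLIES.get(tuple(history), "Could you rephrase that?")
```

Such a machine maps inputs to intelligent-sounding outputs and so could in principle pass the test, which is exactly why behavioral equivalence seems too weak a criterion for intelligence.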

So what then? I think the artifact/natural substance distinction would have to be made to work. If we know that something was constructed in order to emulate our behavior, the hypothesis that it also realizes our psychological characteristics seems extravagant; they can be realized separately, and the machine in question was contrived specifically in order to realize only one of them, so I would see no reason to believe that it is also intelligent. We recognize it as an artifact; its 'intelligence' is a manifestation of its creator's (even if it is, for instance, performing calculations far beyond our ken). We could, on the other hand, identify an alien life form that exhibits apparently rational behavior as alive, for we can recognize that it (for example) manifests immanent (self-perfective) causation.

Matt Sheean said...

Scott,

"Your hypothetical scenario seems to assume that human thought is reducible to brain activity."

Is his hypothetical scenario committed to this? He might be suggesting something that would be a problem for any machine that ostensibly thinks. Anything that we built might be represented on some other platform, like a computer; ergo it's not thinking.

I don't know if this is necessarily true of anything we possibly could build. Might "build" be used too broadly? In the case of artificial cells, it doesn't seem to be correct to refer to them as artificial.

Don Jindra said...

John Moore,

"Professor Feser's gold-pyrite example is helpful, but it assumes people agree on what the words 'gold' and 'pyrite' mean. If your definition of both is just 'yellowish metallic stuff,' then pyrite really is gold. The problem is just that such a definition is not particularly helpful for us."

Exactly. The gold/pyrite analogy fails because we know how to distinguish between the two. We agree on the differences. We agree on the experts to consult. That cannot be said of thinking. Nobody has a good definition of it. Nobody knows what it is. Since we don't know what thinking is, nobody can legitimately say whether or not it's possible for a computer to do it too. It does no good to talk about "general educated" opinion versus "expert" opinion because there are no true experts. That's why the Turing Test exists -- because we don't have objective definitions for these terms. In our ignorance, behavior seems to be as good a standard as anything else. The issue, today, comes down to whether or not one is an optimist or pessimist about man's ability to solve difficult problems.

Scott said...

@Matt Sheean:

"Is his hypothetical scenario committed to this?"

I think so. His argument as I understand it is that if it's possible to design a machine that thinks, then we could simulate that machine with a computer program and the computer on which it ran would think too. I don't think that conclusion would follow unless the thinking were reducible to the simulatable aspects of the machine's operation.

Suppose we built an artificial human and God endowed it with a rational soul. That wouldn't entail that a computer simulation of that body would also have a rational soul. For that matter, a complete computer simulation of a naturally-occurring human body wouldn't (at least necessarily) have a rational soul either.

Greg said...

@ Don Jindra

That's why the Turing Test exists -- because we don't have objective definitions for these terms. In our ignorance, behavior seems to be as good a standard as anything else. The issue, today, comes down to whether or not one is an optimist or pessimist about man's ability to solve difficult problems.

Well, it depends on what the "difficult problems" are. It would seem that Turing was a pessimist about solving this particular difficult problem, since he decided to change the terms.

Matt Sheean said...

"His argument as I understand it is that if it's possible to design a machine that thinks, then we could simulate that machine with a computer program and the computer on which it ran would think too."

I'd agree that this is a faulty line of reasoning.

I'm getting hung up, myself, on why, in the case of the hypothetical artifact that God imbues with a rational substance, I should suppose that God has done this. It seems to come down to behavior (at least in the sense that the 'body is the best picture of the soul'). Not, that is, just the behavior involved in making judgments and so on, but the behavior of the parts. I see that the parts work together in a way that suggests the kind of "intrinsic causal powers" requisite for thinking. In the case that you and I dispute whether or not the artifact has been brought across the gap into the realm of substance, how would we settle this?

Scott said...

@Matt Sheean:

"I'm getting hung up, myself, on why, in the case of the hypothetical artifact that God imbues with a rational substance, I should suppose that God has done this.…In the case that you and I dispute whether or not the artifact has been brought across the gap into the realm of substance, how would we settle this?"

That's a good question, and the short answer is that I don't know. (The slightly longer answer is that I agree with the general trend of your suggestion about behavior but I don't know how it would play out in detail; I'm not even sure we could know until it happened.)

Mr. Green may have more to say on the subject than I do, but the point on which he and I have agreed in the past is just that it seems possible in principle for an artifact to be capable of receiving a rational soul if God elected to give it one. I wouldn't suppose, for example, that an atom-for-atom duplicate of a human being would necessarily even be alive, let alone human, but there seems to be no reason why God couldn't endow such a duplicate with life and reason. In that case, though, the result would not be a "machine" and it wouldn't be something created solely by human artifice.

Anonymous said...

What is the status of a bioengineered entity like a GMO on the Scholastic view? Is it an artifact or a substance?

Greg said...

Closer to a substance. Something bioengineered would be like a dog breed that humans have cultivated, or like a polymer. Ed discusses such cases in Scholastic Metaphysics.

Mr. Green said...

Anon: What is the status of a bioengineered entity like a GMO on the Scholastic view? Is it an artifact or a substance?

If it's an organism, then it's a substance. You might be able to start with one substance and do enough stuff that at some point you end up producing a new kind of organism (just as if you start with the substances of oxygen and hydrogen, you can manipulate them the right way and end up with the substance of water), but an organism is by definition a type of substance (a living one), so you'll either end up with a modified version of the same [kind of] substance, or one of a different kind, but still a substance.

Of course, all artifacts are made up of some arrangement of substances ultimately. If you "bioengineered" a simian-equine hybrid by teaching a monkey to ride a horse, I guess you could consider the whole assemblage as an "artifact", but really you've just got a monkey-substance and a horse-substance put together.

(There are a lot of older posts addressing this — just search the site for "artifact".)

Simon said...

@Scott: Your hypothetical scenario seems to assume that human thought is reducible to brain activity.

No more than yours does. Your position seems to be that one could write some specifications for a machine, build it, and a soul would attach - but if one builds a machine that implements the exact same specs in a different way, no soul would attach. Why is the architecture of the machine important in this? It doesn't seem to bother you that your posited truly thinking machine could have a radically different architecture from the human brain.

I am childishly amused at reCAPTCHA insisting that I prove I'm not a robot to be able to be part of this discussion.

Daniel Joachim said...

Speaking of. I wonder what people here think of this. 200 famous "intellectuals" answering the question "What do you think about machines that think?"
Edge: What do you think about machines that think

I'm somewhat disappointed by how few people they've invited who are skeptical of this very idea. Seems like Freeman Dyson comes closest to being the voice of reason:
"I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant.

If I am right, then the whole question is irrelevant."

Now, what does this say about our culture?

John West said...

Greg,

Thanks. I may need you to unpack how one would go about recognizing the alien manifests immanent causation, but I also have a follow-up question:

So what then? I think the artifact/natural substance distinction would have to be made to work. If we know that something was constructed in order to emulate our behavior, the hypothesis that it also realizes our psychological characteristics seems extravagant; they can be realized separately, and the machine in question was contrived specifically in order to realize only one of them, so I would see no reason to believe that it is also intelligent. [...] We could, on the other hand, identify an alien life form that exhibits apparently rational behavior as alive, for we can recognize that it (for example) manifests immanent (self-perfective) causation.

What if one didn't know whether something was constructed in order to emulate our behaviour?

Also, for comparison, suppose a further, similar complication in the alien case. Suppose we can't take the alien apart—to see if it's fleshy rather than metallic—and that we're for some reason unable to investigate its home planet. It looks like a living organism (though it may be an artefact; we seem to have that technological capability even now, or close to it). How would one go about deciding whether one or the other can think?

Obviously, if one blocks off all information about the machine and the alien, our ability to learn whether one or the other can think will go down, so I hope I haven't qualified the question to death with this follow-up.

John West said...

... whether the machine or alien can think?^

Mr. Green said...

Kiel: Oft times I've wondered what I'd do if I was asked to prove that a real and existing thing like Commander Data from Star Trek, for example, was not rational but a pseudo-rational thing.

Well, given the inconsistencies around things like showing emotion, using contractions, etc., I'd have to say that Data is a rational being who's pretending to be a robot!

(Personally, I don't see any problem with a machine that gives the appearance of thinking... my thermostat "learns" and makes "judgements", after all. Holding a fake conversation is just a difference of degree.)



Matt Sigl: Yet the "masquerade" of thinking could be so powerful as to "convince" our instincts such that we could "relate" to computers as thinking entities and maybe even "fall in love" with them, for some. It's a dangerous situation for human consciousness.

True; though not a novel one. People were falling in love with non-thinking entities long before there were any computers, and will no doubt continue to do so.



Scott: I wouldn't suppose, for example, that an atom-for-atom duplicate of a human being would necessarily even be alive, let alone human, but there seems to be no reason why God couldn't endow such a duplicate with life and reason. In that case, though, the result would not be a "machine" and it wouldn't be something created solely by human artifice

I'm inclined to think that such a duplicate would in fact be a human being. (Assuming it did move, etc., and wasn't just a duplicated corpse!) Building a human body out of raw atoms with some sort of Star Trek-style replicator would differ more drastically than present-day "test-tube" babies from the traditional way of making new humans, but if it appeared human in all physical respects, I would conclude that it was indeed human — and thus that God had of course caused a human substance to come into being complete with human soul. But I concur that it could happen either way, at least hypothetically speaking.



Daniel Joachim: Now, what does this say about our culture?

That it's discouragingly ignorant of basic philosophy…?

Greg said...

@ John West

Also, for comparison, suppose a further, similar complication in the alien case. Suppose we can't take the alien apart—to see if it's fleshy rather than metallic—and that we're for some reason unable to investigate its home planet. It looks like a living organism (though it may be an artefact; we seem to have that technological capability even now, or close to it). How would one go about deciding whether one or the other can think?

I couldn't speak to the particulars. But surely there will be some point where, if we lack too much information, we will not be able to make a decision; if we had just a one-sentence response, for instance, we would have something consistent with intelligence without having enough information to judge. Of course it might be possible that there are other beings we can reasonably believe are intelligent without being able to do so purely on the basis of their behavior.

Or it might be the case that there are intelligent beings whom we could not discover to be intelligent. We might be conceptually closed to them.

John West said...

Greg,

Or it might be the case that there are intelligent beings whom we could not discover to be intelligent. We might be conceptually closed to them.

Fair enough.

It's not like intelligence must be exactly like human intelligence. If it were, I suspect we could just brute-force the question of whether any machine is intelligent in exactly the way humans are, by exploiting our ability to decide mathematical propositions in different axiom systems and Gödel's incompleteness theorem.

Scott said...

@Simon:

"Your position seems to be that one could write some specifications for a machine, build it, and a soul would attach[.]"

No, that's pretty explicitly not my position.

Sanders said...

Dr. Feser,
Would you mind offering your thoughts on Chris Cuomo's comment: “Our rights do not come from God, your honor, and you know that. They come from man... That’s your faith, that’s my faith, but that’s not our country. Our laws come from collective agreement and compromise.”

Thanks

Mr. Green said...

Scott: Suppose we built an artificial human and God endowed it with a rational soul. That wouldn't entail that a computer simulation of that body would also have a rational soul. For that matter, a complete computer simulation of a naturally-occurring human body wouldn't (at least necessarily) have a rational soul either.

As you explicitly pointed out, if the thing that was built is still a machine, then it's not really thinking, and if it starts actually thinking, then it's not a machine any more. (To be accurate, I should say that "it" would not exist at all any more, having been replaced with a substance that looks the same but is a rational being instead.) Now I suppose Simon is thinking something like this: suppose we understand the human brain well enough to build a very accurate "simulated brain", one which functions just like a real human brain, including even apparently being able to talk, etc. Now it may be that once we get far enough to complete this artifact, it actually becomes (i.e. is replaced by) a human substance with a real intellect. [Which actually would be morally equivalent to creating a new person and removing all his limbs, face, etc. which is quite immoral so we'd better not actually do that, or at least create a whole human body... but I digress.]

I think the idea is that we hypothetically could do this by understanding the "workings" of the brain well enough, and once we have that understanding, we could apply that knowledge to building a computer program that is clearly not an ensouled human — and voilà! Artificial intelligence! Of course, what we have tacitly assumed here is that the brain is reducible to an atomic level; or rather, that it is reducible to an algorithmic level — that we can come up with an equation for the human brain, so to speak.

Now, no matter how much the program may be run on a machine of gears and vacuum tubes, if it starts thinking, it too will have been replaced with a substance with an intellectual soul; and if not, it will not really be thinking even if its output is in whatever way "identical" to a human brain's. The catch, of course, is whether there really is any algorithm for the output of a human brain; the rest of the scenario is just window-dressing. My own hunch is that the human brain/body is "virtually reducible" to such a level, but that gives us no clue as to whether it is physically feasible to implement such a program; and anyway, we already know the giant decision-tree method demonstrates that it's theoretically possible in a sheer logical sense to simulate intelligence [or physically manifested output thereof] as accurately as we like.

malcolmthecynic said...

Okay, related question. I see Dr. Feser has a photo of a book edited by Isaac Asimov. Now, every sci-fi fan knows, or should know, Asimov's robot stories and three laws of robotics. This gets us to two questions:

1) Is it possible even in theory to create robots that act like Asimov's, working under his laws (say, R. Daneel Olivaw or one of the I, Robot robots)?

2) Are the robots in Asimov's stories moral agents?

Bonus question: In Philip K. Dick's "Do Androids Dream of Electric Sheep?" are the androids human?

Anonymous said...

Of course computers can't think. There's no actual prime matter, so no Thomist can admit to thinking art. Artificial substances, as artificial, have definite parts and I'm obviously not counting the natural substance which is the ultimate matter of anything artificial. Natural substance on the other hand is only simple in form, but indefinitely complex in its content because, as I said, prime matter does not actually exist.

So, no Thomist or any other true non-materialist ought to admit to anything like 'true AI'.

Kiel said...

Mr. Green, your thermostat doesn't "learn" in the relevant sense. It couldn't learn about a deepldorki if I tried to explain one (I won't tell you what it is, to prove my point), or apply other relevant concepts and make judgements about them.

If I tried introducing the concept of a deepldorki to Data without borrowing from existing concepts or talking in the abstract, I don't think he'd understand it, because he lacks the activity of understanding. I don't think I could teach Data a new joke about deepldorki things (if it is a thing among many). And so on.

Just some maybe fallacious musings.

Simon said...

@Mr. Green has taken the reasoning chain further than I had. I was merely curious why @Scott would say that God might endow a machine with a soul, yet state flatly that a different machine that does exactly the same thing (not merely input/output, but mirroring every internal state) could not ever have a soul.

Perhaps I misunderstand.

John West said...
This comment has been removed by the author.
Irish Thomist said...

Richard Dawkins and Brian Greene: Do We Live in a Simulated Universe?


Just because it's funny - one of the few times I found Richard more bearable, more human.

Scott said...

@Simon:

"I was merely curious why [Scott] would say that God might endow a machine with a soul, yet state flatly that a different machine that does exactly the same thing (not merely input/output, but mirroring every internal state) could not ever have a soul."

That's not what I said. I said* it wouldn't follow from His endowing the first with a rational soul that He must also endow the second with one. The only way the second would be guaranteed to have an intellect would be for the intellect to be reducible to the mechanics of the physical parts, which is why I said you were assuming that.

Likewise, I said that an atom-for-atom duplicate of a human being wouldn't necessarily be alive, let alone genuinely human, but I didn't say it couldn't be; in fact I expressly said it could.

----

* Suppose we built an artificial human and God endowed it with a rational soul. That wouldn't entail that a computer simulation of that body would also have a rational soul. For that matter, a complete computer simulation of a naturally-occurring human body wouldn't (at least necessarily) have a rational soul either.

Bob Lince said...

"But [Turing's] foray into philosophy...."

Was he philosophizing? Or was Turing a technician/engineer trying to build a contraption to do something?

How will it be known if the contraption is doing what it's wanted to do?

Well, if it passes some test, say, an "imitation game", or "Turing test", then perhaps it's doing what it's wanted to do.

Query: Mr. Turing, if it passes such a test, does that mean the machine is thinking?

Turing: I suppose if what it does is what you call thinking, then it is thinking; if what it does is not what you call thinking, then it is not thinking. For me, however, as a technician trying to build a contraption to pass such a test, the question is meaningless.

Just as, asking a theatrical prop manager, who has used a bar of pyrite to fool an audience into thinking a character has displayed a bar of gold, if that means pyrite is gold. The prop manager would respond by saying he finds the question meaningless and above his pay grade. His job was to make pyrite appear to be gold. Having passed that test, he did his job.

If it can be shown that Turing believed either a) that by passing the so-called Turing test, the contraption transubstantiated into something other than a contraption, or, b) that the passing of the test shows that humans, or human brains at least, are nothing but contraptions themselves, then I think you're on to something.

One supposes that Turing, like all men, mused about these things. But the Turing test itself is simply an engineer's means of calibration, of discovering if a standard has been met.

Scott said...

@John West:

"But if humans were able to crank out these machine-like rational beings so long as they had the materials and skilled labourers, would that imply that human rationality is reducible to material?"

No more, I think, than does the ordinary process of human childbirth.

Greg said...

@ Bob Lince

Just as, asking a theatrical prop manager, who has used a bar of pyrite to fool an audience into thinking a character has displayed a bar of gold, if that means pyrite is gold. The prop manager would respond by saying he finds the question meaningless and above his pay grade. His job was to make pyrite appear to be gold. Having passed that test, he did his job.

This is incredible. The prop manager would just say no, it's not gold, he was just picking something that looks like gold. The question is quite sensible, and there is a very obvious answer.

Turing: I suppose if what it does is what you call thinking, then it is thinking; if what it does is not what you call thinking, then it is not thinking. For me, however, as a technician trying to build a contraption to pass such a test, the question is meaningless.

When people defend Turing's test, they tend to adopt this sort of naive picture of language. Mathematicians (like Turing) are usually pretty precise when they are giving stipulative definitions; they generally will select terms that do not have common uses, and if they do select a term that is widely used (like 'think'), they will be clear when they are using it in a technical, stipulative sense. It's not clear to me why Turing didn't do this, given his mathematical prowess, unless he thought the test cast some light on 'thinking' in the conventional sense.

Scott said...

@Bob Lince:

"Was he philosophizing? Or was Turing a technician/engineer trying to build a contraption to do something?"

I'd say he was pretty obviously philosophizing. At the very least he was speculating about whether it was possible to build a certain type of machine and answering philosophical objections to that possibility.

"His job was to make pyrite appear to be gold. Having passed that test, he did his job."

If that was his job and he did it, then pyrite isn't gold and he knows it.

Jack Ferrara said...

How serendipitous, I was actually thinking about this very subject today while reading Dr. Feser's "Aquinas: A Beginner's Guide."

While reading the argument he puts forth about how machines are not people (i.e. don't have a soul) because machines are artifacts (i.e. composites of various substances), a thought occurred to me. Couldn't, technically, a critic argue that humans are themselves "artifacts" in that they are composites of calcium, carbon, and other elements? Or does this miss the point of Feser's argument?

John West said...

Jack Ferrara,

Couldn't, technically, a critic argue that humans are themselves "artifacts" in that they are composites of calcium, carbon, and other elements? Or does this miss the point of Feser's argument?

For this, it would be good to read the articles Dr. Feser links in the article:

The true substances in that case are the metal bits that make up the watch, and the form of a pocket watch is just an accidental form we have imposed on them. (I have discussed the difference between substantial and accidental form in many places, such as here, here, and here. For the full story, see chapter 3 of Scholastic Metaphysics.)

Greg said...

@ Jack Ferrara

Also, this.

Jack Ferrara said...

@ Greg and @John West,
Thank you guys so very much! This actually clears things up quite a bit. I doubt it'll be the last time I ask for help, but you guys really helped clarify these issues for me.

Simon said...

@Scott said: it wouldn't follow from His endowing the first with a rational soul that He must also endow the second with one. The only way the second would be guaranteed to have an intellect would be for the intellect to be reducible to the mechanics of the physical parts, which is why I said you were assuming that.

I see - I was misunderstanding your position.

However, what happens if we write specifications, build one physical machine and one computer-based implementation of the same spec, and then one gets endowed with a soul but not the other? As far as I can see there are three options:

(a) The two function identically, even when confronted with a situation requiring true thought to solve (e.g. a situation that neither the machines nor their creators have ever encountered before). In other words, either the machine without the soul is creative, or the not-a-machine-anymore with the soul cannot be creative.

(b) The two do not function identically, at least under some circumstances, since one has an intelligent soul and one does not. But in that case, the unensouled machine functions as predicted by reductionist science, while the other does not. In other words, there are measurable effects from possessing a soul, and you open it to scientific examination.

Is there some other option?

Greg said...

@ Simon

I don't see any problem with (b). Suppose there are observable differences. Sure, we can attempt to study them scientifically, and in a broad enough sense of 'science,' that investigation could be fruitful; that doesn't imply any reductionism or materialism about the soul unless reduction is what the scientific investigation uncovers. And Ed has argued that that is not possible.

Simon said...

Sigh - there are two options...

Joe said...

Dr. Feser, I would love to see a book-length takedown of AI. But I'm willing to settle for a lengthy review of this book.
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?s=books&ie=UTF8&qid=1424034636&sr=1-1&keywords=nick+bostrom

Scott said...

@Simon:

Greg has given essentially the same answer that I would have given if I'd replied first. The possession of a soul (substantial form) is open to scientific investigation in the broadest sense of "scientific," and the result of such an investigation needn't be (indeed had better not be) reductionist or materialist.

Scott said...

@Mr. Green:

"Now it may be that once we get far enough to complete this artifact, it actually becomes (i.e. is replaced by) a human substance with a real intellect. [Which actually would be morally equivalent to creating a new person and removing all his limbs, face, etc. which is quite immoral so we'd better not actually do that, or at least create a whole human body... but I digress.]"

Yeah. I suppose there's no reason in principle that God couldn't (where "could" means merely logical possibility) endow even a lump of rock with a rational soul, but on the face of it that would seem to be pretty mean, and for similar reasons: the lump of rock wouldn't provide any physical means for the rational soul to manifest its properties.

Then again, on second thought I'm not so sure. Is it possible for a mere aggregate (as I take a lump of rock to be) to receive a rational soul and thus take on a substantial form? Would such a lump of rick exhibit immanent causation? How?

Scott said...

"A lump of rick." Ay yi yi.

I assume everybody knows what I meant there.

Anonymous said...

What distinguishes art from nature is that it (art) is purely relative to some natural substance (which is a natural agent) which produces it out of some other natural substance. Art is 'imitation of nature' or, more specifically, a specification of a nature. Human art is a mode of the human essence; however, the ideal basis for any artificial substance to be produced is natural and not itself artificial (therefore essences are real and not 'just ideal', but the artificial constructs which proceed from ideas are also limitations upon those ideas). To say the same thing, theoria is superior to praxis.

seanrobsville said...

"When the body dies, the 'mechanism' of the body holding the spirit is gone, and the spirit finds a new body sooner or later, perhaps immediately."
– Alan Turing in a letter to Mrs Morcom

Turing wasn't a materialist. He was a Buddhist.

Irish Thomist said...

@Crude

I noticed your comment under an article about Alan Turing. I knew none of that before. Could you direct me to something that would explain the real events in more detail?

John West said...

Simon,

(a) The two function identically, even when confronted with a situation requiring true thought to solve (e.g. a situation that neither the machines nor their creators have ever encountered before). In other words, either the machine without the soul is creative, or the not-a-machine-anymore with the soul cannot be creative.

Are you saying that (in case a) the machine and machine-like rational being (MRB) would and could perform identically in response to every circumstance, or is "function" being used in some other, technical sense?

Crude said...

Irish Thomist,

If you're referring to what I think you are, I'll post over at my blog - focus here is more on Turing's philosophical claims, and I'm trying to be a good guy and not derail. And it's not exactly obscure knowledge anyway.

Alan said...

I am reminded of a cartoon that came out back in the ’70s, when ‘talking to apes’ was all the rage. It depicted two chimps, one reading the paper and turning to his companion: ‘Every year it gets harder and harder to be human!’
The issue alluded to in the cartoon and never satisfactorily answered is ‘what defines a human?’ Similarly never answered in this post is ‘What defines thought?’ With that omission, this post collapses into the bait and switch attributed to Lawrence Krauss. Consider this self-contradicting anecdote:
In the OP, it says: … ‘If we’re examining bits of metal and find that they display the time, it would [be] silly to conclude “Well, since agere sequitur esse, it follows that metal bits have the power to tell time!” For the bits are “telling time” only because we have made them do so, and they wouldn’t be doing it otherwise. … What matters when applying the principle agere sequitur esse is to see what a thing does naturally, on its own, when left to its own devices -- that is to say, to see what properties flow or follow from its substantial form, as opposed to the accidental forms that are imposed upon it.

Now, seen in this light the Turing Test is just a non-starter.’

If the ‘bits’ are telling time ‘because we have made them do so’, then the bits are telling time! To then claim that humans tell time differently from clocks is a bait and switch. Similarly if a machine is built [successfully] to think, it will think. It is a non sequitur to jump to the conclusion ‘the Turing Test is just a non-starter’ when the only argument presented was: ‘agere sequitur esse does not apply.’

Simon said...

@Scott, @Greg, thanks.

@John West - I was meaning that they would do the same thing in response to the same input. I glossed over things like a machine designed to roll dice as part of its decision-making process (also, deterministic chaos), but I'd classify them here too since they'd have identical probability distributions on their outputs.

I wasn't taking a position on whether a machine and a not-a-machine would fall under my case (a) or case (b) - simply that they must fall under one or other, and was curious to see which way Scott would go, or whether he'd back away from the notion of a machine-with-a-soul. I was expecting him to be a little more cautious about a theory that allows extremely direct access to the soul (in the sense that an ensouled machine must act differently from reductionist predictions, and "oh - that wasn't quite what theory predicted" is the usual method of discovery in science). However, having read the link that Greg provided, I gather that Thomists expect reductionist methods to fail when applied to the soul. So (b) appears to be the obvious answer from a Thomist perspective.

Greg said...

@ Alan

It is a non sequitur to jump to the conclusion ‘the Turing Test is just a non-starter’ when the only argument presented was: ‘agere sequitur esse does not apply.’

But that was not what Ed was doing. He was specifically considering the prospects of turning the Scholastic principle 'agere sequitur esse' toward the defense of strong AI.

If the ‘bits’ are telling time ‘because we have made them do so’, then the bits are telling time! To then claim that humans tell time differently from clocks is a bait and switch. Similarly if a machine is built [successfully] to think, it will think.

I think the point Ed is making is that the watch doesn't tell time; we tell time through the watch. This would be consonant with other examples he has developed in other blog posts. For example, Ed has claimed that calculators don't add, but we add using calculators. He has also used the example of alphabet soup out in the wind; the letters are blown in such a way as to form a coherent English sentence. The alphabet soup has not therefore 'said' anything. But if I poked at the letters for a while and arranged them into a sentence of my choice, then there is a sense in which the alphabet soup 'says' something, insofar as I have made it do so. But its saying something is my saying something.

Likewise, the watch's telling time is my telling time. The analogous point is that the computer can be used to facilitate my own mental processes, but on its own it's not doing anything.

I am reminded of a cartoon that came out back in the ’70s, when ‘talking to apes’ was all the rage. It depicted two chimps, one reading the paper and turning to his companion: ‘Every year it gets harder and harder to be human!’
The issue alluded to in the cartoon and never satisfactorily answered is ‘what defines a human?’


Hmm, perhaps I'm missing something or the cartoon is underdescribed but you seem to be making an interpretive leap.

Similarly never answered in this post is ‘What defines thought?’ With that omission, this post collapses into the bait and switch attributed to Lawrence Krauss.

As I mentioned before, defenses of strong AI via the Turing test tend to adopt a very naive view of language. It's quite simply not the case that there is no pretheoretic understanding of thought, difficult as it might be to provide necessary and sufficient conditions for it. (As Wittgenstein pointed out, the same is true of 'game' and many other terms.) We might be able to identify some necessary conditions, however: Thoughts can apparently be entertained without any obvious result for overt behavior. So, I think Ed is right; the defender of strong AI is baiting and switching much like Krauss, for he replaces a concept, thinking, which has a widely accepted pretheoretic understanding, with a stipulated surrogate, and then changes the subject because he has nothing to say about the original, actually interesting concept.

Scott said...

@Simon:

"I gather that Thomists expect reductionist methods to fail when applied to the soul."

Strictly speaking, Thomists expect reductionist methods to fail when applied to anything; the failure is just less conspicuous in accounts of inanimate nature.

Greg said...

I think the point Ed is making is that the watch doesn't tell time; we tell time through the watch.
...
Likewise, the watch's telling time is my telling time.


To be a bit more explicit, lest these remarks be misread, the point is that watches and people don't tell time in the same sense. A person can tell time; a watch tells time insofar as a human uses it to tell time. So it's consistent to say "the watch does not tell time" and "the watch's telling time is my telling time," since 'telling time' differs in sense in the two cases.

Scott said...

@Alan:

Just adding to Greg's replies here.

"If the ‘bits’ are telling time ‘because we have made them do so’, then the bits are telling time!"

Only because the bits are doing what we've set them to do. They don't have any inherent tendency to tell* the time; they do so only because humans have intentionally used them to perform that task. If all human beings disappeared from the world, the devices we call clocks would no longer show the time, because they'd have no one to show it to.

----

* Or, more precisely, "show." Clocks don't know what time it is; we use their displays to find out what time it is. That's why Greg says clocks "tell" time in a sense different from the way we do: a clock "tells" the time by displaying it to an intelligent agent, and we "tell" the time from the clock by understanding what its display means. The clock itself doesn't "mean" anything.

Crude said...

It may be helpful to drop the language that clocks 'tell time' or 'know what time it is' altogether, as if both humans and clocks know what time it is, but in a different way. Clocks, and computers, have no 'intrinsic meaning' in their parts or operations - their meaning is derived. But you can't have nothing but derived meaning, so where's the intrinsic meaning? In humans.

If someone wants to dig in their heels and insist that maybe (or even actually) clocks have intrinsic meaning too, that's a fun argument, but at that point it's over in the sense of trying to save the materialist, mechanist view.

John West said...

Simon,

Thanks for explaining. I asked because I questioned whether (a) is really an option.

(a) The two function identically, even when confronted with a situation requiring true thought to solve (e.g. a situation that neither the machines nor their creators have ever encountered before). In other words, either the machine without the soul is creative, or the not-a-machine-anymore with the soul cannot be creative.

For example, what if the machine and MRB were each ordered to upload their “personalities” and memories—their “selves”—to new mechanical bodies*? It's plausible the machine could perform this task, because it is just software, stored data, etc. In contrast, if intellects are the sort of things only God can attach and the MRB's intellect actually plays a role in its apparently identical functioning, the MRB could not perform this task. So, if only God can attach intellects and the MRB's intellect plays a role in its functioning, the machine and MRB could not perform identically in response to at least one input.

Say the machine just has to plug a cable into the other mechanical body and initiate the uploading process. The machine uploading itself to the other body seems no less plausible than being able to transfer all the information from one computer to another.

I'll let you hash out the conjunction in my conditional's antecedent. But I thought the point about only God attaching intellects fairly uncontroversial, and can't see a point in your example if the MRB's intellect plays no role in the MRB's functioning (though, that may have to do with your last point on reductionism and (b) being obvious on Thomism).


*Say the new mechanical bodies are qualitatively identical to each one's current body in every way possible without being the current bodies.

Alan said...

Greg: re the cartoon: There was a general assertion back in the ‘60s that only humans could use language. Then chimps were taught a simplified sign language, so there was a hurried revision of what ‘only humans’ could do, whereupon some apes were taught to perform that exercise and a new ‘definition’ was presented. For a while ‘tool making’ was presented as uniquely human.
This, I think, mirrors the challenge of thought, though I think thought should be far simpler to achieve than being human. Having had many pets throughout my life, I attribute thought to them for their problem-solving abilities. Neither clocks nor calculators solve problems, but some computers can now solve problems that were not explicit in their programming. That, I suggest, is approaching thought.

Et al.: Absent a clear definition, intrinsic meaning is simply assumed to be required for thought. An assumption I am not sharing.

Crude said...

Neither clocks nor calculators solve problems, but some computers can now solve problems that were not explicit in their programming. That, I suggest, is approaching thought.

It seems to be 'approaching thought' the way a can of paint is approaching thought when it turns out to work decently as a doorstop, despite not being made for that task.

Absent a clear definition, intrinsic meaning is simply assumed to be required for thought. An assumption I am not sharing.

You're not sharing the assumption of the thing you say hasn't been defined and therefore you don't know what it is?

Okay. So is all thought derived then? If it is, I'll ask how you determine what is or isn't deriving - if you refer to yet more derived information, I'll ask how you're determining THAT is deriving, and so on. You'll either end up at the intrinsic, an appeal to magic and mystery, or infinity.

If you're not saying that thought is derived, then you've hit a point of vagueness where it's not even clear what you're affirming or denying.

Simon said...

@John West - I'm just asking questions and learning. I agree (a) seems trivially silly; I was just listing all options.

I'm not sure your counter-argument works, however. If the not-a-machine were to duplicate itself, isn't that a slightly fancier form of reproduction? Another soul could attach. Since the whole option boils down to souls having no physical effects, the two copies would continue to operate identically as long as their inputs were identical.

John West said...

Simon,

I'm just asking questions and learning

I know. Seriously, don't worry.

If the not-a-machine were to duplicate itself, isn't that a slightly fancier form of reproduction?

No, because only the mechanical body—the hardware—is being “duplicated” (and I don't think it need be the machine duplicating the mechanical body for my example to work). The software, data, etc., are being transferred from the machine's original mechanical body to another mechanical body, not reproduced or duplicated.

Another soul could attach.

Well, that would be to deny that souls (intellects) are the sort of things only God can attach. But the issue here, on my understanding (see below), is that the soul attached to the machine-like rational being (MRB) would have to detach (I'd say be detached) and attach (I'd say be attached) to its new mechanical body. Obviously, the machine has no soul-related problems. That's why I said that given the input that “the machine and MRB were each ordered to upload their 'personalities' and memories—their 'selves'—to new mechanical bodies,” the MRB and machine could not perform the same tasks (and so there would result different outputs for MRB and machine).

Since the whole option boils down to souls having no physical effects, the two copies would continue to operate identically as long as their inputs were identical.

It seems I misunderstood. There is a distinction between the soul's resulting in no input's yielding different outputs for the MRB compared to the machine, and the soul's producing no physical effects at all in the MRB. I thought the claim was that having a soul would produce no different outputs for any given input to the MRB compared to the machine, not that the soul would have no physical effects and therefore play no role in the MRB's functioning (making the MRB and machine for all practical purposes the same).

Of course, I don't think it's actually possible that a soul God attached to a machine would have no physical effects (fairly sure Scott's a hylemorphic dualist, not Cartesian dualist, so only one substance and no interaction problem). But it was your example.

John West said...

could not perform the same task (and so there [...])^

Scott said...

@John West:

"(fairly sure Scott's a hylemorphic dualist, not Cartesian dualist, so only one substance and no interaction problem)"

Yep.

Jinzang said...

"Turing wasn't a materialist. He was a Buddhist."

Believing in reincarnation does not make you a Buddhist. Hindus and Theosophists also believe in reincarnation, as did Plato in The Republic.

Timocrates said...

@ Alan,

”Similarly never answered in this post is ‘What defines thought?”

Not in this, perhaps, but in many others. Regardless, calling that which is not actually thinking something that is thinking is not “bait and switch”. What Professor Feser is pointing out is how easily we can say some things are doing things that they are not, in fact, doing. For example, there is a sense in which it can be said an air conditioner “senses” the room temperature; however, its sensing is not like our sensing or even an animal's. Moreover, there is no intrinsic principle in the air conditioner that resulted in its developing a power to sense the room temperature, any more than the bronze in a bronze statue had an intrinsic principle to become a statue (bronze is not a thing with a natural power or tendency to become a statue).

But more directly to your objection, thought has an immaterial aspect about it that separates it not only from machines and animals but distinguishes it even from our power of imagination. I can’t imagine any triangle without necessarily including things that are not necessary for something to be a triangle, such as colour or the fact that the triangle will be of a certain kind. My concept of triangle, to be the actual concept, does not include these things, though of course every real triangle will include such determinations (specific dimensions, colour(s) and being of a certain kind). The reason, of course, is that triangles are material and because they are material such determinations will always accompany them in actuality. The same is true with species of animals where all sorts of accidents attach to them.

Another aspect characteristic of thought is its intentionality, that thoughts are always thoughts about something. Once that is appreciated, the liberty or freedom we enjoy in thinking becomes apparent, because we can think about whatsoever we please; whereas, any mechanism is going to be predetermined in its fundamentals, such that should anything go awry we will naturally look to its programming as the cause. In other words, simulated intelligence will be deterministic; or otherwise randomness will be included in the program to give the illusion of choice. But this risks all sorts of problems and absurdities that would vitiate not only its utilitarian appeal but even the appearance of intelligence in the first place.

Now I do not wish to exaggerate our freedom of thought here. By that I do not mean to say that we can think of everything in the sense that we know everything, or that there is no tendency in us to prefer thinking of some things rather than others. We are, after all, rational, and to that extent we are inclined to be thinking about things more or less proper given our circumstances, desires and goals. A quarterback at the Super Bowl is presumably going to be thinking almost exclusively about the game for its whole duration, and was undoubtedly preoccupied with it even beforehand. However, it is obvious that there is nothing preventing him from thinking about whatever else he may have wanted to, even to the detriment of the team or at the cost of winning the game. Not so for programs or AI (except again by including an element of randomness, which of course would have the same dangers or consequences listed earlier).

Timocrates said...

@ Alan, re: your pets.

The more you know about your pets the better you will be able to understand their behaviour and its limits.

Dogs, for instance, are domesticated and susceptible to it because they are highly social animals by nature or natural instinct. What we are doing is harnessing that instinctual drive.

When a dog fetches a ball and brings it back to you, you are bringing out its social instinct. Because either you are, in its "mind," the alpha, or whoever the dog's alpha is has placed you in that rank, when the dog fetches the ball it behaves as it would fetching prey in the wild and bringing it to you, as it were, for the pack. The reward the dog receives when you pet, praise or better yet give it a treat mimics what happens to dogs in the wild. They would capture prey and return with it to the alpha of the pack, and it would be the alpha who would decide whether or not they would eat. This is pure instinct, which is exactly why (especially when misunderstood) the dog's behaviour can be surprising and even dangerous. For instance, people smile to show affection normally; however, showing teeth in dogs is normally threatening. Consequently, smiling at a dog - especially a stranger's - can trigger snappy or aggressive behaviour.

Again, dogs tend to be aggressive or protective naturally when they are accompanying their masters on, e.g., walks. The reason again is that the animal does not recognize the other dogs or people in its social hierarchy and considers them either threats or ranked beneath it. It takes some time and training to get the dog to accept its place beneath all humans and even to accept other animals - even cats, say - as off limits. Hence a dog will often tolerate being terrorized by toddlers or other pets. This is all extremely derivative, however, of its social instinct - namely its pack instinct and its instinct to please, as it were, the alpha(s).

Again, a K9 unit's dog can attack a man and bring him down but won't proceed to kill him. And the reason is similar to why the dog fetches the ball and brings it back to you: it's awaiting permission from the alpha before presuming to kill or eat. It's that already-present social instinct in dogs that makes them so suitable for domestication, and it's exactly because that instinct is natural to dogs and deeply ingrained that it is reliable.

Alan said...

Timocrates: Professor Feser is pointing out a lot of things in this blog post, which is an informal forum; my nit-picking of particular points is not important. I don’t disagree with most of your specific points, but I think you are missing the point with thought. While humans alone are capable of rational thought, that capacity evolved from far simpler thinking. I concur that any mechanism which is predetermined in its functions is not thinking, but that does not hold for animals or man. No one programs a dog to play – it is part of their nature to enjoy the social activity, so you present a dog with opportunity and it learns to engage. Because it can think and solve problems.
An animal’s (to include man’s) instincts provide motivation, not specific direction. The brain takes over and figures out just how to satisfy that motivation. For any arbitrary animal, pre-determined behavior would not even be viable for the most part. The creature cannot ‘know’ what terrain it will be born into. It must learn to navigate the terrain it has, not the terrain its evolutionary ancestors had. It needs to catch the prey available in its environment, and to escape the local predators – not the ancestral predators. Even for an animal to walk, it needs to constantly adjust to the current conditions – all of the specific commands to each muscle must be adjusted to the immediate terrain and to the current size of the animal as it grows from birth. No deterministic brain could keep these animals alive. The dynamics required for an animal not to be a vegetable require thought. Not rational thought, but constant creative problem solving all the same.
Dynamic, problem solving (navigating in this case) machines:
www.youtube.com/watch?v=wE3fmFTtP9g
https://www.youtube.com/watch?v=5FFkDV2NKEY

Scott said...

@Alan:

"While humans, alone, are capable of rational thought, that capacity evolved from far simpler thinking."

If by "rational thought" you mean the intellect, then no, I don't think it did. The use of abstract concepts is a difference in kind, not merely in degree or complexity, from the sort of perceptual awareness subrational animals enjoy. You might as well say that three-dimensional space "evolved" from two-dimensional space.

Alan said...

Scott: Regardless of how you wish to believe the particular rational feature was introduced, a thinking brain evolved to be its host.

John said...

Re: Turing being a materialist-
The Wikipedia article on him cites a Time Magazine article saying he was an atheist and a materialist.
But the quote from Sean Robsville suggests he was not a materialist, or at least he had very eccentric views for a materialist. Anyone know for sure? Is this a matter of evolving views (i.e. when he wrote the letter Sean quotes, he believed in reincarnation but then later decided that materialism was true)? Or is one source inaccurate?

Simon said...

@John West said: No, because only the mechanical body—the hardware—is being “duplicated” (and I don't think it need be the machine duplicating the mechanical body for my example to work). The software, data, etc., are being transferred from the machine's original mechanical body to another mechanical body, not reproduced or duplicated.

You can't really "move" information in any storage medium I'm familiar with, any more than you can move letters from one page to another. You can either move the page from one book to another (swap out a hard drive) or copy-and-delete (which is what your software does when you ask it to move a file between drives). The former doesn't seem to be what you're talking about; the latter would seem to have moral implications in this context.

I don't think you can move the "brain states" without either moving the "brain" or copying them, is what I'm getting at. And if you've made a copy of the brain state, it doesn't seem unreasonable that that copy might be ensouled - it's not that different to creating a child.

I take the rest of your post to be more reasons why it's obvious that option (b) was the Thomist answer. No argument.

Scott said...

@Alan:

"[A] thinking brain evolved to be its host."

Whatever its hosting duties may be, the brain itself doesn't "think"; what thinks is the person, the human being, the rational animal, the intellectual substance. As Timocrates has pointed out, thought (intellect) "has an immaterial aspect about it that separates it not only from machines and animals but distinguishes it even from our power of imagination." The heart of the matter from a Thomist point of view is that to "think" of something is to receive its form into the intellect without actually becoming the object, and in order for that to be possible, the intellect must be immaterial.

John West said...

Simon,

You can't really "move" information in any storage medium I'm familiar with, any more than you can move letters from one page to another. You can either move the page from one book to another (swap out a hard drive) or copy-and-delete (which is what your software does when you ask it to move a file between drives).

Fair enough.

In any case, I don't consider this point worth quibbling over. Given what I'm saying about intellects (and where they play a role in the MRB's functioning), the MRB couldn't copy itself either.

And if you've made a copy of the brain state, it doesn't seem unreasonable that that copy might be ensouled - it's not that different to creating a child.

Sure it's unreasonable. An intellect (soul) isn't data; it's not even physical.

And again, this reduces to denying my claim that intellects are the sort of things only God can attach.

John West said...

I should add: not to mention denying that intellects (souls) are the sort of things only God can create.

Daniel said...

I'm assuming Alan is just Alan Fox, so I don't know why I'm really bothering with this, but nonetheless:

To explain what cognition is would take a far longer essay (at the very least!) than one merely elaborating one instance of what it's not. Why not start with:

http://edwardfeser.blogspot.co.uk/2009/02/aristotle-and-frege-on-thought.html

Regardless of how you wish to believe the particular rational feature was introduced, a thinking brain evolved to be its host.

Well yes, it had to reach a sufficient level of complexity to allow all the sensitive and perceptual modules that are a prerequisite to human rationality. I doubt Scott would deny this, though.

John West said...

Simon,

Sure it's unreasonable. An intellect (soul) isn't data; it's not even physical.

To expand this comment, you're equivocating with “copy”. Copy can be used in a general sense of copying* anything. But computers copy** data. When we talk about copying in relation to the machine and MRB, we're not talking about copying* in the general sense. We're talking about copying** in the sense of computers copying** data. Souls aren't data. Hence, computers don't copy souls.

Scott said...

@Daniel:

"Well yes, it had to reach a sufficient level of complexity to allow all the sensitive and perceptual modules that are a prerequisite to human rationality. I doubt Scott would deny this, though."

That's right, I wouldn't. I would and do deny, though, that this level of complexity either constitutes intellect/rationality or is by itself a sufficient condition for it.

Timocrates said...

@ Alan,

Firstly, there is in fact no reason to believe humans alone are capable of rational thought. When the intellect is rightly understood the very nature of it speaks rather to its being abundant than scarce.

Now you claim,

”While humans, alone, are capable of rational thought, that capacity evolved from far simpler thinking.”

Which is frankly ridiculous. The capacity is simple in man and always has been. Hence the popularity of the concept of tabula rasa even in our day, because it is quite in keeping with observation and experience. You are confounding what man happens to be thinking of or about with his capacity to think. Indeed, the “simpler thinking” in primitive man was notwithstanding quite rational insofar as he was preoccupied with, say, a stable food supply and more or less adequate shelter from the elements. To be sure, this isn’t rocket science or man attempting artificial flight, but it made perfect sense regardless given his situation.

And I have to disagree with your claim that dogs can problem solve in any meaningful sense. Problem solving is difficult even for man and requires a focus and reflection that dogs simply do not have. I also have to disagree with your attempt to collapse and confound instinctual drive in animals and instinctual drive in man. Man’s instinctual drive is quite impoverished. We have to learn and be taught not only what we are supposed to eat but that we are to eat – the pain of hunger does not come with the reason why we are in pain let alone the solution (proper food).

Glenn said...

Alan,

An animal’s (to include man’s) instincts provide motivation, not specific direction. The brain takes over and figures out just how to satisfy that motivation.

Let's summarize this interesting view of the brain: it plays a useful role as subordinate assistant (to instinct).

Regardless of how you wish to believe the particular rational feature was introduced, a thinking brain evolved to be its host.

Contractors build structures meant to host residents.

If he who resides in such a structure believes the structure to be more important than he himself is, he should also believe the contractor to be more important than either the structure or himself.

There does seem to be some evidence that there are people given to prioritizing things in such a way that they, in effect, express a belief that brains are more important than the intellects they house, and that evolution is more important than either the houses or the intellects.

Whether such people wantonly value the intellect so little, or truly make so little use of it, is not so easy to tell.

Anonymous said...

Some key points on the minimalistic views of the classical realist for the sake of clarity:

The brain is structurally related to substantial form, but in itself it is just matter; mere substantial potency.

Substantial form is not structure, it is a simple subjectivity. Insofar as a substance is analyzed through its matter, that matter is synthetically related to the simple subjectivity of substantial form. This relation is called structure. Structure is in itself arbitrary insofar as the content of its synthesis may vary in accordance with the intensity of the material analysis. To the extent that some matter is related to substantial form, some intensive structure is given.

Substantial form is characterized by its intelligibility. All perception is potentially intelligible; which is to say that there are no pure phenomena; all phenomena are of some substance. There is no act of sense-perception which does not correspond to the receptivity of the possible intellect. The tripartite psychological division of perception, possible/passive intellect, and active/agent intellect is very important for understanding the classical concept of form.

Classical realism is essentialist and intellectualist. The classical thinker is primarily interested in purifying the intellect and knowing essences. Intellectual capacity is essentially connected to the moral and spiritual condition of the agent. Empirical realism and existentialist realism (sorry M. Gilson) present very different perspectives which may or may not be compatible with the classical perspective. The classical realist, if he is bold, will simply call himself an essentialist. The intellectualism of the classical realist will always be pejoratively compared to 'idealism' by existentialists and other species of realist.

An essence and its concept are not the same thing as a phantasm/mental image. What else could be said in order to minimize needless chatter as much as possible? I suppose it is this: that the doctrine is verifiable through calm contemplation and reflection. Forgive me if I've made some apparent technical errors or if I've been too brief.

Perhaps Prof. Feser would like to write some articles on Thomistic psychology?

Daniel said...

Empirical realism and existentialist realism (sorry M. Gilson) present very different perspectives which may or may not be compatible with the classical perspective. The classical realist, if he is bold, will simply call himself an essentialist.

Regulars of this blog will not be surprised to hear that this occasioned a victory lap of the room.

On a serious note, thanks to Anon for this summary. Far be it from me to disagree with it, but the account they give has a more Platonic ring to it than that usually given by Thomists - in fact talk of being 'primarily interested in purifying the intellect and knowing essences' sounds like a good description of Realist Phenomenology.

Glenn said...

John,

Re: Turing being a materialist-
The Wikipedia article on him cites a Time Magazine article saying he was an atheist and a materialist.
But the quote from Sean Robsville suggests he was not a materialist, or at least he had very eccentric views for a materialist. Anyone know for sure? Is this a matter of evolving views (i.e. when he wrote the letter Sean quotes, he believed in reincarnation but then later decided that materialism was true)? Or is one source inaccurate?


Given the wiki source (an apparent copy of which may be read here), it would seem to be a matter of, as you say, 'evolving views':

"This loss shattered Turing's religious faith and led him into atheism and the conviction that all phenomena must have materialistic explanations. There was no soul in the machine nor any mind behind a brain."

Further supporting the 'evolving views' view is the following, from A. Hodges 1983 Alan Turing: The Enigma, pp 107-108, (which predates the wiki source above):

"Obviously there was a connection between the Turing machine and his earlier concern with the problem of Laplacian determinism. The relationship was indirect. For one thing, it might be argued that the 'spirit' he had thought about was not the 'mind' that performed intellectual tasks. For another, the description of Turing machines had nothing to do with physics. Nevertheless, he had gone out of his way to set down a thesis of 'finitely many mental states', a thesis implying a material basis to the mind, rather than stick to the safer 'instruction note' argument. And it would appear that by 1936 he had indeed ceased to believe in the ideas that he had described to Mrs Morcom as 'helpful' as late as 1933 -- ideas of spiritual survival and spiritual communication. He would soon emerge as a forceful exponent of the materialist view and identify himself as an atheist."

Hodges also included the following in his much later SEP entry on Alan Turing:

"The upshot of this line of thought [by Turing] is that all mental operations are computable and hence realisable on a universal machine: the computer. Turing advanced this view with increasing confidence in the late 1940s, perfectly aware that it represented what he enjoyed calling 'heresy' to the believers in minds or souls beyond material description."

Daniel said...

(I will probably post this again in one of the general/links of interest entries)

Out of interest could anyone point me in the direction of a Thomist account of Propositions/Meanings and their relation to Truth-making entities (Facts/States-of-Affairs)? I know Ed briefly mentions a Thomist understanding of Propositions akin to that of Universals when discussing the PSR in GSM.

Reading Loux on this a while ago left me wondering what it would mean if Propositions and their Truth-makers had the same Universals as constituents and how this would relate to the Intellect's becoming the known entity.

Alan said...

Reason or hubris? Among our beastly brethren we find incredibly complex behavior and problem solving, to an extent that, I suggest, requires thought. Love, compassion, empathy, sorrow and mourning. Insects navigate; birds and mammals care for young and construct tools. Our beastly companions upon this mortal coil plan for their future, plot and conspire against prey and predator. They collaborate with trusted associates, punish freeloaders, taunt rivals and cast insults.
Whatever God granted reason we are blessed with, the beasts too are endowed with capabilities which mirror our treasured intellect to a significant degree. Behaviors far too complex and inconsistent to dismiss as reactions – they are calculating as well.

@Scott (from the reply to Daniel) - I have not said, nor intended to say, anything that would contradict your statement: ‘I would and do deny, though, that this level of complexity either constitutes intellect/rationality or is by itself a sufficient condition for it.’ The claim you are granting (‘it had to reach a sufficient level of complexity … that are a prerequisite to human rationality.’) is the only claim I am making.

@Daniel: Play nice, dude. Enough with the ad hominems. Have you ever read a post from Alan Fox?
A challenge to your intellect is not an attack on your person.

Timocrates: A friend of mine was once given a couple of hatchling ducks. When he threw them food, they ran in fear. He then borrowed an adult duck, who led the chicks to the food, and all ate. After that he was able to return the adult duck, and the chicks continued to eat. Not even a duck knows by instinct what to eat. Repeating myself: solving problems differentiates animals from vegetables.

Scott (and everyone): I am happy to grant that: ‘The heart of the matter from a Thomist point of view is that to "think" of something is to receive its form into the intellect without actually becoming the object …’
So here’s the issue: When a wolf goes after a rabbit, it adopts a hunting technique appropriate to catching rabbits. When a pack of wolves attacks a caribou, their technique is wholly appropriate to caribou and quite distinct from the technique for most other game. How could that be possible without the wolf’s ability to comprehend what a rabbit or caribou is (their form)? Dogs, to use one more example, respond to individual humans differently but also appropriately. Such is only possible because they comprehend the unique and individual forms of the various bodies they encounter.

Anonymous said...

It's not Fox. Fox wouldn't try to mount an argument, even a sloppy one, outside of a safe haven. He'd just complain and snark. People who make claims and arguments are open to potentially devastating reply.

As we're seeing here.

Whatever God-granted reason we are blessed with, the beasts too are endowed with capabilities which mirror our treasured intellect to a significant degree. Their behaviors are far too complex and inconsistent to dismiss as mere reactions – they are calculating as well.

Alan, please: define "intellect" here. What does the Thomist mean by it? What does the Thomist argue that animals lacking an intellect can nevertheless do?

Anonymous said...

And, for the record...

Our beastly companions upon this mortal coil plan for their future, plot and conspire against prey and predator. They collaborate with trusted associates, punish freeloaders, taunt rivals and cast insults.

No. In the main, they don't. And of the ones that are claimed to do so, the case is controversial to say the least.

It's not enough to keep saying 'They do this complex thing... they have to have an intellect as the Thomists define it!' That's not an argument, it's just a claim borne out of incredulity. It's not even flowing from metaphysical presuppositions, or at least if it is, they aren't any that you've outlined here.

Give us more. Give us SOMEthing. Passion for and emotional investment in 'animal brethren' isn't going to do the job here. Not even a scientific study is going to do the job, because this is not a scientific, but a metaphysical and philosophical, dispute.

Give us an argument.

Alan said...

Anon: While I am making a lot of loose assertions, most are well documented, others not so. I have no intention of trying to define intellect, but Scott threw out an example as I noted above, work with that. Read back through my posts – the more specific claims I was making regard: walking, navigating, hunting and differentiating between individuals.

Matt Sheean said...

"Not even a duck knows by instinct what to eat."

Eh, this strikes me as a pretty hasty analysis for what it's supposed to show. I've got a three-month-old daughter here who only recently ceased lurching after anything that rubbed against her cheek. I am reluctant to say that she previously "thought" each such sensation was the sign of a nearby nipple.

Simon said...

@John West: I think you're misunderstanding me. I was suggesting a different soul would attach to the copy, just as one presumably does to a baby. Within the confines of a soul not making any difference, that's indistinguishable from a copy of an identically-functioning machine. This is not compatible with Thomist thinking, as you've made clear.

DNW said...

"No. In the main, they don't. And of the ones that are claimed to do so, the case is controversial to say the least."

Somebody mention hunting earlier?

And speaking of which.

So, it's pretty obvious right, how a whitetail with a body of 140 lbs and a brain the size of two golf balls, plans and invests and projects and has all the virtues and powers and moral sensibilities of, say, the average politically progressive male in this culture.

I saw this behavior on display a couple of years ago. I may have mentioned it already.

I was sitting on a hilltop, facing south. And there across the valley, no more than a hundred yards or so, I noticed a modest 4-point head on the next crest over, peeking out occasionally from behind a tree.

Glassing, I eventually discovered that there were two deer there; both peacefully bedded down some yards apart, enjoying the warm afternoon sunlight.

Being the sensitive kind of man that I am, I delayed for a while, and considered what I ought to do.

But, as I had at least one still unfilled tag, I rested my rifle on my knee, took careful aim, and put a round through the neck of the buck, just below where it joined the skull.

Now, what was the reaction of its companion?

Obviously it was startled by the rifle report and the flop over of the other deer.

It jumped to its feet, it stepped toward the deer I had just killed, nosed around, stomped its foot, jerked its head, paused, and then slowly wandered off ... browsing winter buds as it went.

I actually felt pretty bad for a bit. I had already filled one tag and didn't need the additional.

But, then I started thinking about all the deer skulls you find on the forest floor in the spring ... before the mice get to them.

You see one there. It's bleached clean, so you kick it over, look down, and realize that eye sockets take up one heck of a lot of internal space, and then peering closely at the underside of the skull you say to yourself: "Where the hell does the brain go?" http://i55.photobucket.com/albums/g126/boston33redsox61/Website/Nature/IMG_8265.jpg

http://www.brainmuseum.org/specimens/artiodactyla/deer/sections/1340whtdeercelllg.jpg

Remember: a centimeter is 10 millimeters, and there are 25.4 millimeters to an inch.

Now, it's my understanding that recent discoveries have indicated that the incredible amount of information, or at least discriminatory sensitivity, possessed by only one neuron has scientists baffled. At least that's the news that showed up in my e-mail box the other day.

But if one rat neuron is capable of all that, then what qualitative difference must there be between a collection of them weighing a gram or two, and 3 lbs of them together?

At least that is the question we should probably consider before shrugging our shoulders, granting person-hood to rodents, and crying out: "None! Fallacy of composition!" Because a brain, as we use the term, is certainly about neurons in networks, if anything ...

Of course intellect only becomes useful for moral sorting or discrimination if a progressive believes he can lead you to believe that he has more of it. You know, more than some "gap-toothed hillbilly."

On the other hand, if that hillbilly turns out to have been brought up on Shakespeare, and was taught Greek and Latin by some maiden aunts before going off to Harvard on a scholarship, then, brain power doesn't count so much in assigning status to humans.

Then it is all about "love" or something.

And thinking, or having thoughts and intentions are an illusion anyway. Unless men have them. In which case so does everything else.

Sort of like how, in reverse, abortion is always sacred – unless some jerk wants to pay women to abort homosexual foetuses. Then it would be categorically wrong.

Now you will have to excuse me. My CNC chucker is having anxiety attacks as it tries to grab a tool out of an empty carousel.

Poor bastard. May have to put it down as an act of compassion. Or cut the power for a bit.

John West said...

I think you're misunderstanding me. I was suggesting a different soul would attach to the copy, just as one presumably does to a baby. Within the confines of a soul not making any difference, that's indistinguishable from a copy of an identically-functioning machine. This is not compatible with Thomist thinking, as you've made clear.

Understood. Have a good evening, Mr. Simon.

ccmxnc said...

I apologize for the total derail, but I have been having issues mulling over divine freedom given God's simplicity, as I am sure most of you are familiar with. Given the fact that God does not have passive potency, how can it be said that he could act in any other way than He does? Any ideas or recommended treatments? I have Dolezal's work, which takes a mysterian approach that I don't think ultimately addresses his critics. While we are on the topic, I wouldn't balk at any good treatments of the Eastern Orthodox take on Aquinas's simplicity either, but that is secondary to my first question.
Thanks in advance

John West said...

Edward Feser, "Davies on Divine Simplicity and Freedom"

Anonymous said...

God's Pure Act is his Infinite Power. That is to say that there is no distinction between the actual and the possible in God. That is to say that there is no limiting condition of God's absolute power. That is to say that God is the unconditioned condition of all possibilities or as Nicholas of Cusa refers to the Divine, He is 'Posse Est'.

In the end, the Unity of God's Act is another way of stating the Infinity of God's Power. God is Absolute on the one hand and Unconditioned on the other, etc. etc. etc.

The ultimate metaphysical principle is not meant to submit to discursive understanding, but rather to be the Final Term of all such reasoning.

Regular reader said...

Professor Feser, I wonder if you could at some time write a post presenting the Thomist position on the subject of this recent book:

http://books.google.com/books?id=BTQeAwAAQBAJ

"Beyond the Control of God? Six Views on The Problem of God and Abstract Objects". Edited by Paul Gould.

Daniel said...

@Regular reader,

I too would be interested in such a post. The position Ed and other Classical Theists endorse would be that of Theistic Activism, though they perhaps would not care for the terminology some people use to describe it, i.e. God necessarily creating Abstract Objects as Divine Ideas. Instead they would prefer to talk about the Ideas/Exemplars as necessarily grounded in the Divine Nature.

Regular reader said...

Good to hear that I'm not alone in this. Related to the subject and even more related to Professor Feser's areas of interest, I'm impressed by the usefulness of Benacerraf's objection to mathematical platonism to argue for the immateriality of the soul. Quoting Callard (2007) "The Conceivability of Platonism":

"Benacerraf [1983, p. 403] tells us that it is unintelligible how
we could have mathematical knowledge if the objects of that knowledge were abstract"

Well, it is unintelligible for materialists, but not for theists who hold the immateriality of the human soul.

Regarding this, in philosophy of mathematics I am a "theistic platonist", not an Aristotelian. More specifically, I subscribe to "plenitudinous" or "full-blooded" theistic platonism (Balaguer 1998, without the theistic part): God knows all possible consistent formal systems and man progressively discovers them. Some of those formal systems correspond to features of the physical universe; some of them do not.

The subject is also related to voluntarism. Does God arbitrarily decide which formal systems are consistent and which are not? (Clearly not.) But then, how does that not place a limitation on God's omnipotence? (I know it doesn't, but Professor Feser can articulate the "how" much better than I can.)

Alan said...

Matt: Yes, that analysis came to me as an insight, fully formed, upon hearing the anecdote I noted – following decades of working with a wide range of animals. I find that fairly typical of how thinking works in adults. Your daughter was reacting to instinct until three months of experience allowed her to develop the ‘thought’ that led her to override the instinct.

Up thread a bit, Scott stated: ‘The heart of the matter from a Thomist point of view is that to "think" of something is to receive its form into the intellect without actually becoming the object …’

This appears to be an idea in the mind of an adult with decades of experience that does not acknowledge the work that has gotten them to that point of understanding. This mind is not ‘receiving a form’, but (to switch from Thomist to common-speak): Recalling to conscious attention a mental model of that something that has been created in memory across a lifetime of experiences.

Regular reader said...

BTW, both Benacerraf's epistemological objection to mathematical platonism and the "plenitudinous" or "FB" flavor thereof apply both to classical object-platonism (abstract objects + rules) and to "ante rem" structuralism (abstract structures of abstract places + relations).

John West said...

Regular reader wrote: Good to hear that I'm not alone in this.

No, you're not alone in this (but I'm going to continue restraining myself from replying, in this thread, to avoid going way off topic).

Glenn said...

ccmxnc,

Given the fact that God does not have passive potency, how can it be said that he could act in any other way than He does?

Okay, I'll stick my neck out. (I'll be sticking my neck out twice, so keep that axe in waiting nice and sharp.)

1. The question asked is a general question, so it will receive a general response. And the general response is as follows:

To say that God could act in any other way than He does is to say that God could do 'this' rather than 'that'.

That being so, it is now asked:

What is the view in light of which God doing 'this' rather than 'that' is problematic?

If, as seem to be me to be the case, the only view in light of which God doing 'this' rather than 'that' is problematic is the view that everything God wills is absolutely necessary, then that view must needs stand on it having been established that nothing God wills is anything other than absolutely necessary.

But the contrary was established by St. Thomas in ST 1.19.3, i.e., in ST 1.19.3 St. Thomas established that not all things willed by God are absolutely necessary.

Therefore, there is nothing problematic about God doing 'this' rather than 'that', i.e., it can be said that God could act in some way other than He does simply and precisely because not everything He does must be done (in that way, at the time, or at all).

2. The question asked is a complex question, and thus needs to be divided.

That is, we should first ask whether God could act in any other way than He does before proceeding to the second question of how can it be said that He could (act in any way other than He does).

If the answer to the first question turns out to be 'yes', then the second question is moot.

If, however, the answer to the first question turns out to be 'no', then the second question might be answered by saying that saying that God could act in any other way than He does requires speaking metaphorically rather than literally (in order to avoid speaking falsely).

Glenn said...

Alan,

Given your recent responses to Matt and Scott...

Would it be fair to say that you agree with Aristotle's claim -- made thousands of years ago in his Nicomachean Ethics -- that years of experience are necessary for the development of practical wisdom?

If not, why not?

If so, then how might it follow from the fact that years of experiences recorded by the brain constitute a necessary condition for the development of practical wisdom that it simultaneously constitutes a sufficient cause of it?

Glenn said...

(s/b "...that those self-same years of experiences recorded by the brain simultaneously constitute a sufficient cause of it (i.e., of practical wisdom)?")

Alan said...

Glenn: I would say necessary, but not sufficient. Take the doe in the hunting anecdote recounted above by DNW: she had the time (years of experience), but not the right experiences to develop appropriate responses to protect herself from an armed predator. Then, particularly for humans, due to our hugely greater capacity to remember, our personal engagement – the interest and enthusiasm we bring to any experience – has a dramatic impact on what we actually learn or take from any encounter. We absolutely have the mental acuity to stay as mind-numbingly stupid as we desire. Free will works both ways.

Alan said...

I might add here that socialism is a powerful contributor to our (humans in general) belief that we can live a just wonderful life without actually doing anything.

Glenn said...

Wow, that one was bad. Real bad. So bad, in fact, that the only thing one can do is make humor out of the horror.

Dear DNW,

I'd like to borrow/rent your CNC chucker (which was "having anxiety attacks as it trie[d] to grab a tool out of an empty carousel"). Just for a short while, and only for a small experiment. I'd like to see what happens when it tackles a new task: parsing a phrase for which, blush, I am responsible: "...as seem to be me to be the case..." The phrase, and indeed the whole sentence that had the misfortune to include it, was blissfully given a thumbs up by Word 2010's grammar checker, and I'd like to see if your CNC chucker might respond in a more appropriate manner.

Thanks,
Glenn


Dear Glenn,

I can think of four good reasons why my immediate, instantaneous knee-jerk answer of "no" is the only sane, rational and correct response to your odd-ball request:

1. you must be kidding
2. the poor thing is still recovering from its earlier anxiety attack
3. it is not insured
4. even if it were insured, I wouldn't want the hassle of having to fill out the paperwork after you got done messing with it

You're welcome,
DNW

PS Now I'd like to ask you a question, partly to keep this somewhat on topic, and partly out of curiosity. Why did you rely so heavily on an unthinking instantiation of the Universally Thoughtless Turing Machine to check your work? You seem like the sort of person who knows better than that. Just sayin'.

Glenn said...

Alan,

Thanks for the response. Yes, necessary but not sufficient.

- - - - -

We absolutely have the mental acuity to stay as mind-numbingly stupid as we desire.

I think that both mental acuity and an unrelenting industriousness are needed; you mentioned the former, but not the latter. Oh, wait; you did imply the latter in your follow-up comment. ;)

Tom said...

Ladies and gentlemen, I really hate to derail this thread, but I've stumbled upon one of the greatest ironies of all time, featuring Dr. Feser himself.

Glenn said...

Alan,

1. Scott regarding something you wrote:
- - -
"[A] thinking brain evolved to be its host."

Whatever its hosting duties may be, the brain itself doesn't "think"; what thinks is the person, the human being, the rational animal, the intellectual substance. As Timocrates has pointed out, thought (intellect) "has an immaterial aspect about it that separates it not only from machines and animals but distinguishes it even from our power of imagination." The heart of the matter from a Thomist point of view is that to "think" of something is to receive its form into the intellect without actually becoming the object, and in order for that to be possible, the intellect must be immaterial.
- - -

2. You regarding what Scott wrote:
- - -
Up thread a bit, Scott stated: ‘The heart of the matter from a Thomist point of view is that to "think" of something is to receive its form into the intellect without actually becoming the object …’

This appears to be an idea in the mind of an adult with decades of experience that does not acknowledge the work that has gotten them to that point of understanding. This mind is not ‘receiving a form’, but (to switch from Thomist to common-speak): Recalling to conscious attention a mental model of that something that has been created in memory across a lifetime of experiences.
- - -

3. I would guess that there may have been a 'disconnect' between Scott's use of the term 'form' and -- at least as seems indicated by your response to him ("This mind is not 'receiving a form'") -- your understanding of his use of the term.

As best I can tell, the term 'form' was not being used in the sense of, e.g., something that might be seen with the physical eyes, or in the sense of some independent self-subsisting entity existing in a kind of Platonic realm, but in the sense of, let us say, that which is or constitutes the nature or essence of a thing.

It just doesn't seem likely that when we think of an object we either become that object or receive the object itself in our intellect. And it isn't entirely clear that it is unquestionably the case that there is nothing more going on when we think of an object than a blindingly fast retrieval and assembly of retained sense impressions into a cognitive effigy.

- - - - -

o [I]t is quite true that the mode of understanding, in one who understands, is not the same as the mode of a thing in existing: since the thing understood is immaterially in the one who understands, according to the mode of the intellect, and not materially, according to the mode of a material thing. -- ST 1.85.1.1

DNW said...

Glenn says,

"Dear DNW,

I'd like to borrow/rent your CNC chucker (which was "having anxiety attacks as it trie[d] to grab a tool out of an empty carousel"). Just for a short while, and only for a small experiment. I'd like to see what happens when it tackles a new task: parsing a phrase for which, blush, I am responsible: "...as seem to be me to be the case..." The phrase, and indeed the whole sentence that had the misfortune to include it, was blissfully given a thumbs up by Word 2010's grammar checker, and I'd like to see if your CNC chucker might respond in a more appropriate manner.

Thanks,
Glenn




Dear Glenn,

I would be happy to accommodate your request - if only I could.

The problem with my Chucker is that it can only tell me what I have already told it and therefore know myself. If I didn't pre-prepare it - programming, I think they call it - to trigger at certain points of feedback, it wouldn't notice any problem either.

I'll try allowing other people to tell it stuff, and see what it spits out, er, I mean, "thinks".

I just hope that as a result it doesn't futilely rapid-traverse its "tool" into a dead end at 600 inches per minute.

Wait, is this the sex post relating to proper aims; or was that the last? I guess with Turing as the instant case, the remark immediately above might be just as apt in any event.


Regards,

DNW

DNW said...

"PS ...I'd like to ask you a question ... Why did you rely so heavily on an unthinking instantiation of the Universally Thoughtless Turing Machine ..."


Because it is obviously unthinking, though an idiot might think it does, or is alive in some way.

It's a fancier version of Crude's Clockwork; what should be an evolutionary hybrid on AI's own terms, which in fact does actually exist and can be examined.

And, if we can't find the AI thinking soul equivalent there in Crude's clock, and it seems too, uh, Crude and harsh to suggest it might, then maybe we can find a glimmer of the AI thinking soul in something that does a lot more, mimics all kinds of human tool using activities, has feedback loops, and sophisticated math processors.

To quote Feser in part: "Could the machine be programmed in such a way that the interrogator could not determine from the conversation which is the human being and which the machine? Turing proposed this as a useful stand-in for the question “Can machines think?” And in his view, a “Yes” answer to the former question is as good as a “Yes” answer to the latter. "

But why allow Turing to stipulate "language" unless we wish to indulge him as he obscures any intermediate steps?

And so, as hard as I've looked so far, it doesn't appear that these tools - I hope they are not offended by my use of the term - really do think. They just go where you tell them to go, change direction on command or trigger, and run into walls and stop dead when you don't do your set-up job properly.

Which is rather different from the broom pusher who runs into the wall in the alleyway, and having no further instructions doesn't just stand there but goes off to lunch.

Which kind of tells us that real thinking is more complexly nested in a biological-ends package which makes the retaining of the "thought" in a man qualitatively different from whatever electromechanical stimulus response and storage mechanism we develop for our amusement or convenience.

Not to get too Heidegger-like here, but what we mean when we say the word "thought" as applied to a man is obviously different from what an electronic peg board, punch card, perforated tape, or other system entails when it outputs.

Is it just a matter of fooling people who are set up to be fooled?

Now, Turing died before NC machines were common, and before CNC was developed.

But why not try a lesser test, checking for a rudimentary ability to "think"? [Or are we insisting that mechanical thinking only emerges full-blown at some as-yet-undefined point of complexity?]:

An inspector sits in a room off of the shop floor.

Out on the floor which he cannot see, a machinist turns out an aluminum spool of X dimensions and finish on a conventional lathe.

On another electronically programmable machine, a similar spool is turned out.

The parts are given to the inspector to judge, and he cannot tell by tolerance and finish, which was put out by what.

Now, in limiting our little quasi-Turing test to this output instance only, should we not conclude that, as the parts were being produced, the NC lathe was thinking just as the conventional lathe operator was?

If not, on what real-world Turing Test grounds would we have to deny it?
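For what it's worth, the shop-floor test just described can be caricatured in a few lines of code (a minimal sketch; the nominal diameter, tolerances, and every name in it are my own invention, not anything from the thread): if the inspector's only data are the finished parts, then whenever both parts are in spec, nothing in his evidence distinguishes the thinking machinist from the unthinking lathe.

```python
import random

# Illustrative sketch of the "quasi-Turing test" above. All specs and
# names here are invented for the example.

NOMINAL = 2.000     # inches; assumed spec for the spool diameter
TOLERANCE = 0.005   # inches; assumed allowable deviation

def machinist_spool():
    """A spool turned by a human machinist on a conventional lathe."""
    return {"diameter": NOMINAL + random.uniform(-0.002, 0.002)}

def cnc_spool():
    """A spool turned out by a programmed CNC lathe."""
    return {"diameter": NOMINAL + random.uniform(-0.001, 0.001)}

def inspector_can_distinguish(part_a, part_b):
    """The inspector sees only the finished parts, never the process.
    If both parts meet spec, nothing in his evidence says which maker
    was 'thinking' and which was merely executing a program."""
    a_ok = abs(part_a["diameter"] - NOMINAL) <= TOLERANCE
    b_ok = abs(part_b["diameter"] - NOMINAL) <= TOLERANCE
    return not (a_ok and b_ok)

parts = [machinist_spool(), cnc_spool()]
random.shuffle(parts)  # blind the inspector to the parts' origins
print(inspector_can_distinguish(*parts))  # prints False: both in spec
```

The sketch's only point is that output-indistinguishability is silent about process: the check returns False on every run, yet that verdict settles nothing about whether either maker "thought" while producing its part.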

" ... to check your work?"

Because I tend to agree with this remark below myself - depending on just how the notion is conceived or developed:

" ... what thinks is the person, the human being, the rational animal, the intellectual substance."

Though I am probably conceiving of it in a somewhat more materialistic way than you are.

Now however, with my pausing before the unqualified immaterial intellect threshold, while still rejecting the crass games played with artifacts, might I get caught up in some kind of epiphenomenalism as regards human thought and consciousness?

Yeah, maybe. I'm still thinking about that.

Alan said...

Glenn, thanks. That is almost exactly what I was hoping you would say – I wanted an excuse to expand on my ideas.

One of my larger issues with this entire A-T perspective on life is that it is far too open to misinterpretation – particularly as demonstrated on this topic. I was not (as you note) contradicting Scott, but rather pointing out the ambiguities in the statement. Another issue I have is too much reliance on smoke and mirrors. As used on these threads, ‘immaterial aspect’ sounds too much like ‘don’t look behind the curtain’. As humans with the power of reason and intellect, we should be able to identify ‘aspects’ that have significance (material or not) and draw rational lines between man and beast. I don’t agree that you can rely on an ‘immaterial aspect’ that does nothing and still claim a rational argument.

That said, thought cannot be explained in a purely material way (if at all), but it still does a lot – it has a very comprehensible, explainable impact. Intellect similarly cannot be explained in a purely material way, yet it too does a lot. I also think we should separate the two. While I cannot define to my own satisfaction either thought or intellect, I will suggest a couple of lines in the sand: to comprehend form requires thought; to comprehend a triangle requires intellect.

Glenn said: ‘It just doesn't seem likely that when we think of an object we either become that object or receive the object itself in our intellect.’

Agreed. To me that comment was dismissing the homunculus-style arguments, which I think we can all agree are best dismissed.

Re: Point 3: There is no 'disconnect' between Scott's use of the term 'form' and my understanding of his use of the term. As I see it, the disconnect is over what we believe to be the significance of comprehending ‘that which is or constitutes the nature or essence of a thing.’

That was the whole point of my hunting example. If the word ‘comprehension’ is to mean anything, there must be some technique for telling ‘does comprehend’ from ‘does not’. The wolf cannot use his words to convince us he knows, but he can demonstrate a hunting style unique to and suited to hunting a rabbit. The only thing that makes that possible is the wolf’s comprehension of ‘that which is or constitutes the nature or essence of a rabbit.’ Most wolves know the form of a rabbit better than most humans do.

Glenn said: ‘… it isn't entirely clear that it is unquestionably the case that there is nothing more going on when we think of an object than a blindingly fast retrieval and assembly of retained sense impressions into a cognitive effigy.’

Well, no, which is why I chose the word ‘model’ and thought I had included enough discussion to make clear this was not a plush-toy representation. To reiterate: First, the wolf must know himself – his own form: his strengths, weaknesses, speed, endurance and agility, to name a few. Then he must know the form of the rabbit and the form of the habitat – the lay of the land, the texture and footing of the ground, the nature and locations of the obstacles. He must plan how he starts the chase, anticipate the moves of the rabbit, correct his course as he goes and update his approach as he closes. This is a very dynamic, multi-dimensional problem for him to solve – and solve quickly, or the rabbit gets away. Switching to my second example, the problem gets much worse hunting the caribou: if he anticipates the moves of the prey wrongly, the caribou can deliver a fatal injury. Plus, the attack must be coordinated among several wolves, each of whom must anticipate the actions of the other wolves as well as the prey. Each wolf is running a dynamic, multidimensional simulation in its mind which is synchronized with all the wolves in the attack. This business of ‘form’ that you like to throw around as a privileged word represents extreme complexity that we (and wolves) spend our lifetimes learning and working with.

Alan said...

Consider this – a dream is a simulation of a situation that may be close to a situation that you may face in real life. A mind that dreams is a mind that runs simulations. Birds and mammals appear to dream. The hunting and fleeing behavior of birds and mammals appear to anticipate a behavior in the foe. I don’t think my caribou scenario is a stretch at all, but stick with the rabbit if you don’t like it. Either case requires comprehension of form.

DNW said...



Apologies for: "an evolutionary hybrid "

Not sure why I used that term other than some preoccupation as I rewrote the sentence.

'A stepping stone toward ...' would express the idea more accurately: an emergent thinking device, not quite there, but plainly identifiable as such in principle.

Have a good weekend.

Glenn said...

Alan,

Thanks for expanding on your ideas.

In doing so, you have:

1. agreed that a human does not become the object it thinks of;

2. agreed that the object itself is not received in the intellect when a human thinks of it;

3. agreed that a human thinking of an object entails more than a retrieval and assembly of prior sense impressions into a cognitive effigy; and,

4. denied that an object's form is received in the intellect when a human thinks of the object. **

What, then, do you think happens when a human thinks of an object?

Glenn said...

DNW,

Thanks for the rejoinders.

Here's something I'm wondering (seriously): if thought and consciousness are nothing more than, or are reducible to, mere epiphenomena, would the targets of your delightfully trenchant critiques then get a 'free pass' for being the disordered lot that they are?

Glenn said...

(I don't mean to say that I'm wondering about something which is of great concern to me, only that I'm serious in saying that I wonder about that.)

Alan said...

Glenn: As suggested above, the foundation of thought, as I see it, is dreams – aka simulations, action scenarios, rehearsals or mental training. So to ‘think of an object’ would involve incorporating that object’s model or form into a dream. You may choose to explore that object by freezing the simulation and running a series of scenarios focused on that object, interacting with it in various ways.

DNW said...

Glenn said...

DNW,

Thanks for the rejoinders.

"Here's something I'm wondering (seriously): if thought and consciousness are nothing more than, or are reducible to, mere epiphenomena, would the targets of your delightfully trenchant critiques then get a 'free pass' for being the disordered lot that they are?

February 20, 2015 at 9:57 PM"


Strictly speaking, if I were willing to cross that threshold, yes: insofar as how we think of moral responsibility vis-a-vis properly ordered operations in the usual sense, and as including internal psychological phenomena as central to it all.

That would represent the classic idea of moral action within a framework which presumes and preserves the psychic and other unity of mankind, and does not allow for the emergence of competing moral species.

"Moral", then in the lesser sense wherein "disordered" would lose much of its universal meaning, would imply only the accepted customs of more or less natural allies as they deal with and suffer one another.

But that does not mean that "disordered" as it conditions "moral" would have no application anywhere or at all; just not the universal one.

This second and lesser framework, which "allow(s) for the emergence of competing moral species" would still permit a kind of objective disorder to be conceived, while reducing moral questions to the behaviors of natural subtypes within larger unnatural or conventional aggregates: behaviors, either noxious or beneficial to the well-being of X and its natural or like kind, versus behaviors beneficial or antithetical to the well-being of Y and its natural or like kind.

In this scenario the outright bad guy becomes not so much wrongheaded, as an existential enemy to the core.

And I think that this latter idea is actually the take most progressives have now adopted for all intents and purposes, save for one aspect that is basically rhetorical.

The aspect which they are for now frantic to promote rhetorically, while simultaneously edging away from the idea in practice, is the unity of the species taken on traditional terms. They do this by expanding the notion of tolerance beyond its carrying capacity concerning skin tone or ethnicity, to an "inclusiveness" which comprehends extreme and destructive behavioral aberrations within what are supposedly beneficial social arrangements.

In other words they are trying to change predicate horses, before the mass of the population notices them doing it.

For now, under the waning impulse of natural law, we officially have one humankind: X, in which the agent is conceived of as having free will and choice and so forth.

But that is not-so-subtly changing on the political scale.

This becomes clear, as we all have by now noticed, if you take the "we are all God's children and all the same behaviorally under the skin" claim structure advanced for civil rights in the 1950s and 60's, and compare it with the bases for staking claims to the rightness of homosexual unions now.

It's one thing to say that you should treat Johnny well, because it is unjust to assume that he is a catty, treacherous, and untrustworthy sissy boy, just on the basis of his hair color or skin tone.

It is logically quite another to say that in a meaningless universe with no intrinsic rules, you should accommodate Johnny to your own cost, because while he is all of those obnoxious things right down to his double helix, it is just something you have to put up with ... because ... it's the world someone or another envisions, or finds comforting.

So, yes: to "not their fault" on the one hand. And also, assuming the same premises: "so what if it is not their fault ... it buys them nothing anyway", on the other hand.

It's not their fault; yet they don't get a free pass.

Georgy Mancz said...

Apologies for commenting off-topic

@ DNW

It would seem that both of your comments under the previous post failed to appear in time, as I only just read them.

I'm not aware of an online text of Krylenko's work, nor of any substantial translation. I'm going to have to ask around the university (incidentally, we still have surviving pockets of 'Socialist legal awareness').

Glenn said...

Alan,

You equate, or at least synonymize, 'form' with 'model' ("to 'think of an object' would involve incorporating that object's model or form into a dream").

This, however, does not lend support to the earlier claim that you understand Scott's usage of the term 'form'.

Form, as Scott used the term, is both a something intrinsic to the object and a something whose existence is neither dependent upon nor a function of the perception of a conscious agent (such as humans or other animals), and also is neither constructed nor generated by that agent.

Form, as you have used the term, is not a something intrinsic to the object but instead an extrinsic representation of the object, the existence of which is both dependent upon and a function of the perception of a conscious agent, and also is either constructed or generated by that agent.

So, not only does there exist a 'disconnect' between Scott's usage of the term and your understanding of his usage of the term, the 'disconnect' which exists between the two is huge.

(It may be true that you do indeed understand his usage of the term, but nonetheless use the term in a way which is obviously different. Upon the removal of a single, unelaborated claim (that you do understand his usage of the term), however, your comments on the whole strongly suggest otherwise.)

It will be recalled that Scott did not merely say that to 'think' of something is to receive its form into the intellect, but that that is so from a Thomist point of view.

Now, you may disagree that to 'think' of something is to receive its form into the intellect, but you cannot credibly disagree that that is so from a Thomist point of view without providing some credible evidence that the Thomist point is otherwise than as was stated.

You have said that the adult mind recalls "to conscious attention a mental model of that something that has been created in memory across a lifetime of experiences", and that "to 'think of an object' would involve incorporating that object[']s model or form into a dream." A mental model is received into conscious attention in the former case, and an object's model or form (in your sense of the term) is received into a dream in the latter case.

From these two cases there may be abstracted a generic pattern, that of: "to 'think' is to receive one thing into another thing".

But that is the same generic pattern which may be abstracted from the Thomist point of view.

That is, although the Thomist point of view may be specifically stated as "to 'think' of something is to receive its form into the intellect", the Thomist point of view also may be generically stated as "to 'think' of something is to receive one thing into another thing".

It now may be said that though you disagree with the Thomist point of view (that to 'think' of something specifically is to receive its form into the intellect), not only are you not antagonistic, hostile, resistant or unwelcoming to the generic pattern which may be abstracted from that point of view, you actually depend on it. That is, not only do you not disagree that 'to 'think' of something is to receive one thing into another thing', you have made two statements whose value would be lost to you were that not the case.

A further instance of both your lack of resistance to the generic pattern and your actual embracing of it may be found in a third statement of yours -- to wit, that a person "may choose to explore that object by freezing the simulation and running a series of scenarios focused on that object, interacting with it in various ways." The running of a series of scenarios on the object, and other various interactions with it, entails the shuttling of ancillary things in and out of conscious attention, the 'dream', working memory, etc.

(cont)

Glenn said...

I hope you can see what is going on:

the Thomist point of view as an instance of the generic pattern is more metaphysical and less non-metaphysical, while your arguments as instances of the generic pattern are more non-metaphysical and less metaphysical.

And I hope you also can see that non-metaphysical arguments against a metaphysical position do not, as you have said to Daniel that they do, constitute an intellectual challenge but constitute an intellectual mistake.

Glenn said...

Alan,

Returning to:

So to ‘think of an object’ would involve incorporating that object[']s model or form into a dream. You may choose to explore that object by freezing the simulation and running a series of scenarios focused on that object, interacting with it in various ways.

There is little reason to doubt that one may choose to explore an object in the manner described.

For example, Nikola Tesla, after stating that he "turned seriously to invention" at about the age of seventeen, wrote in My Inventions:

"Then I observed to my delight that I could visualize with the greatest facility. I needed no models, drawings or experiments. I could picture them all as real in my mind. Thus I have been led unconsciously to evolve what I consider a new method of materializing inventive concepts and ideas, which is radically opposite to the purely experimental and is in my opinion ever so much more expeditious and efficient... My method is different. I do not rush into actual work. When I get an idea I start at once building it up in my imagination. I change the construction, make improvements and operate the device in my mind. It is absolutely immaterial to me whether I run my turbine in thought or test it in my shop. I even note if it is out of balance. There is no difference whatever, the results are the same. In this way I am able to rapidly develop and perfect a conception without touching anything. When I have gone so far as to embody in the invention every possible improvement I can think of and see no fault anywhere, I put into concrete form this final product of my brain."

On a Thomist account, each of imagination, reason and intellect is a cognitive power, although the immaterial intellect is above the lower, more organic-based levels of imagination and reason.

What you and Tesla write about -- the visualization, the running of scenarios, the performance of tests, the making of corrections or improvements, etc. -- primarily has to do with the imagination and reason.

Anonymous said...

Dr Kevin Scharp Wednesday, Feb. 25, 2015
Kevin Scharp, Philosophy
"Philosophy and Defective Concepts"

From familiar concepts like tall and table to exotic ones like gravity and genocide, concepts guide our lives and are the basis for how we represent the world. However, there is good reason to think that many of our most cherished concepts, like truth, freedom, knowledge, and rationality, are defective in the sense that the rules for using them are inconsistent. This defect leads those who possess these concepts into paradoxes and absurdities. Indeed, I argue that many of the central problems of contemporary philosophy should be thought of as having their source in philosophical concepts that are defective in this way. If that is right, then we should take a more active role in crafting and sculpting our conceptual repertoire. We need to explore various ways of replacing these defective concepts with ones that will still do the work we need them to do without leading us into contradictions.

Alan said...

Glenn: Thanks for the response. To your lesser point, I think that was just a slip of the pen: form, as I take it, represents the object, independent of any observer; model refers to our mental representation, our understanding of the object. I could have been more careful. To your larger point, I appear to be blatantly, deliberately (if unwittingly) guilty. I must consider the significance of that.

Alan said...

Glenn: All I am getting from your repeated delineation of cognitive powers is that a Thomist draws somewhat arbitrary lines between grades of thought, imagination being the most trivial. However, that still leaves, through neglect, the implication that by demonstrating a comprehension of form, the Thomist accepts that birds and beasts possess intellect -- a position I am not comfortable with.

Glenn said: Now, you may disagree that to 'think' of something is to receive its form into the intellect, but you cannot credibly disagree that that is so from a Thomist point of view without providing some credible evidence that the Thomist point is otherwise than as was stated.

I was not challenging what the Thomist point of view is, but pointing out the significance of that particular statement: intellect is denied to neither bird nor beast.

Glenn said...

Alan,

I was making my way down a very large hill in Denali National Park one dreary, overcast day, when I heard what sounded like the cry of an animal.

The sound came from about 9 o'clock (assuming I was facing 12 o'clock), and, looking in that direction, I saw a bird appear from behind a large mound. The mound was about 200 yards away, and at a much lower elevation. The bird headed towards what would be my 12 o'clock, then dipped and banked to the right. When the bird got somewhere between 10 and 11 o'clock, I had an eerie feeling, and thought, "Holy smokes, it's coming over to check me out."

That thought could have been the product of wishful thinking or maybe a sudden realization that, being in an unfamiliar environment and clueless as to how wildlife might react to the presence of a human, I was justified in being mildly concerned that the bird might attack with claws and beak.

Who knows.

At any rate, I had a camera with me, so quickly plotted what I thought the flight path of the bird would be, were it indeed 'coming over to check me out', picked out a point along that path, pointed the camera at that point, and began clicking the shutter button as quickly as I could.

From my point of view, I had not, with one exception, done anything that a non-human animal cannot do.

My attention was drawn to a sound, I saw an object that seemed to be associated with that sound, recognized that it was moving through the air, anticipated its course, and made a judgment (or several judgments) as to how it might be intercepted.

The one exception, of course, was that I could not intercept the moving object by running, flying or leaping high in the air, so had to pretend I could make competent use of the technological device I was carrying.

With a healthy dose of luck the attempted interception turned out to be rather successful, as may be seen here.

Now, I'll say that I'm one of those people to whom that result quite suspiciously looks like the form of a bird.

But I'll also say that I'm also one of those people who would say that that form isn't quite the form that is meant when a Thomist speaks of a form being received into an intellect.

So, if it is to be said that "by demonstrating a comprehension of form, the Thomist accepts that birds and beasts possess intellect", and if by 'demonstrating a comprehension of form' is meant, e.g., that an object is seen (and, perhaps, its movement anticipated or worked out), then I would have to politely disagree, as well as point out that it would be rather silly to think that Thomists suggest that it is something a human has in common with a non-human animal that serves to differentiate the two.

Alan said...

Glenn: Nice shot, but no, and no. The object is seen, and its form understood. Form uniquely differentiates objects; the wolf uniquely differentiates a wide range of objects and demonstrates a significant understanding of many of them. The issue in this thread is not differentiating human from non-human. The issue is differentiating the capabilities. How do you propose to differentiate the capabilities of a wolf from intellect? How do you differentiate the wolf's ability to understand rabbits from what a Thomist means by a form being received into an intellect?

DNW said...

"However, there is good reason to think that many of our most cherished concepts, like truth, freedom, knowledge, and rationality, are defective in the sense that the rules for using them are inconsistent."


That, with provisos, seems unobjectionable enough.

However, I am not sure why any given concept itself would necessarily be defective, if the problem lay in sloppy use or a lack of discrimination among senses.

I suppose you could misuse the term "tree" if you were ignorant enough, much as children amusingly do, not having a clear idea of what the concept means to adults who use it.

Keeping the political pot stirred here, and adverting to the mention of "freedom", we might take, as an example of a misused concept, the left or left-fascist concept of "liberty" as conceived by our own Chief Executive.

That would of course be "positive liberty": that is to say, an enabling environment which ensures one a smorgasbord of politically constructed social options and choices, guaranteed to ensure the partaker of a maximally satisfying experience of self-actualization.

Now of course, others formerly at liberty may have to be harnessed against their will to provide this experience. But all they have really lost is their old-fashioned negative liberty. That is to say, their supposed right to be left alone to their own devices while peaceable.

What they subjectively feel they have lost is, however, more than gained back by the exchange - at least when viewed through a properly abstract utilitarian, or even distributive justice, lens.

Some may, of course, try to argue that endlessly pulling at the oars under government mandate and penalty is a clear loss of liberty to them.

But those who truly feel the attraction of distributive justice, or the pull of utility, will quickly recognize that the complaint of some -- that they have been harnessed against their will to others and damned to a shared fate far more intrusive and unreasonable than the merely political -- represents nothing but a failure of their imagination and sympathy. They, these negative-liberty lovers, suffer from an inability to appreciate how their bondage, to use Garry Wills' loving term, is in fact their real self-actualization.

According to expert opinion, and so forth and so on ....

Glenn said...

Alan,

The object is seen, and its form understood.

You earlier had 'form' as a something which took a lifetime to learn, and now have it as a something which can be understood simply by seeing an object in which it is.

You also had 'form' as a term signifying 'extreme complexity', so I'd like to ask how a human might understand -- never mind how a wolf might 'understand' -- the extreme complexity of a thing merely by looking at it.

How do you differentiate the wolf’s ability to understand rabbits from what a Thomist means by a form being received into an intellect.

It isn't clear to me that wolves do actually 'understand' rabbits.

(If they did, why don't they 'realize' that it would be more efficacious to capture rabbits and let them breed rather than to constantly hunt them? Wouldn't a steady supply of readily available meals be more conducive to survival than intermittent meals obtainable only via the hardships of a hunt?)

Alan said...

Glenn: ‘The object is seen, and its form understood’ -- seeing triggers the recollection in an instant; the knowledge has been acquired over a lifetime. The process appears to me to be the same for man or beast, as I stated earlier.
I explained earlier how wolves demonstrate their understanding. And perhaps it is wisdom that keeps them from domesticating rabbits. See Jared Diamond (of Guns, Germs, and Steel fame) on the domestication of food by humans, ‘The Worst Mistake in the History of the Human Race’:
http://discovermagazine.com/1987/may/02-the-worst-mistake-in-the-history-of-the-human-race

Judging from both acquaintances and anecdotes across history, men seem to find hunting more a fulfillment than a hardship.

Glenn said...

Alan,

The process appears to me to be the same for man or beast, as I stated earlier.

Nonetheless, you believe that humans alone are capable of rational thought, and are uncomfortable with birds and beasts possessing intellect.

So, perhaps we can wind up this discussion by agreeing to each of three things:

1. At least most of the regulars here would agree that humans alone are capable of rational thought;

2. none of them has suggested that birds and beasts possess intellect; and,

3. no orthodox Thomist would suggest that intellect is possessed by either bird or beast.

AlK said...

Thanks again for yet another excellent essay on the meaning of intelligence. There is one thing I would like tackled directly, though.

Can the Singularity even happen? I know from the article that computers cannot think, but do they necessarily have to? All they have to do is modify their own source code to improve themselves at modifying their source code...

Wait a second.

Define "improve".*

*And no, I'm not British.
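AlK's "Define 'improve'" challenge can be made concrete. Here is a hypothetical toy sketch (not anything proposed in the post or this thread) of the core of any "self-improving" search loop: the word "improve" does no work until a human supplies an objective function, and the program can only optimize against that stipulated standard, never define or question it. The names `score` and `improve` are illustrative inventions.

```python
# Toy sketch: a "self-improving" search is only as good as the externally
# supplied objective. "Improve" is defined entirely by `score`, which a
# human chose; the loop cannot revise or justify that choice.

import random

def score(params):
    # The human-stipulated definition of "better": closer to 42 wins.
    return -abs(params - 42)

def improve(params, steps=1000, seed=0):
    # Simple hill climbing: keep a random perturbation only if `score`
    # calls it an improvement.
    rng = random.Random(seed)
    best = params
    for _ in range(steps):
        candidate = best + rng.uniform(-1, 1)
        if score(candidate) > score(best):  # "better" only relative to `score`
            best = candidate
    return best

result = improve(0.0)
print(result)  # with this seed, the search settles near 42
```

However many layers are stacked on top, including code that rewrites code, the regress ends at some objective the machine did not choose, which is just AlK's point restated.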

raapustus said...

Greetings from Finland. This is a great blog. ;)

Maybe someone (if Ed is busy) could help me fill in a Scholastic hole that was left glaring in the original post -- at least in my thinking:

Left to themselves, metal bits don’t display time, and stones don’t fly. And left to themselves, machines don’t converse. So, that we can make them converse no more shows that they are intelligent than throwing stones or making watches shows that stones have the power of flight or that bits of metal qua metal can tell time.

Well, left to "themselves", atoms don't photosynthesize, either. The power of photosynthesis, supposedly belonging to chlorophyll's substantial form, comes from information in DNA -- information that orders the atoms into molecules that, when properly combined, will create sugars and oxygen from water and carbon dioxide using light energy.

This organizing information comes originally from an outside source, the Mind that has a teleological purpose in view for these molecules.

I fail to see why a watch lacks the substantial form of telling time or a computer lacks the substantial form of conversing, if an outside mind with a teleological purpose in view has given the information and arranged the metal bits or electronic components in such a configuration that telling time or conversing is possible.

Why, from a Scholastic point of view, is it not an accidental form of atoms to photosynthesize, if, left to themselves (without God's teleology), they would do no such thing?

DVH said...

>Whether pyrite might be taken by someone to be gold and whether pyrite is in fact gold are just two different questions

Holy cow! Mr. Feser, you need to read more lesswrong.com

"Gold" is a category of observable properties. "Pyrite" is also a category of observable properties. Some of these overlap, hence the untrained eye can mistake one for the other.

However, enough investigation always tells the difference, because if no observable properties whatsoever differed, "gold" and "pyrite" would be synonymous categories, the same way "gold" and "aurum" are.

There is no "in fact". This is just the fallacy of thinking things have an essence, the fallacy of thinking things-as-such exist, when things are just categories for properties occurring together.

Really please read this: http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/

machinephilosophy said...

Recognition, labeling, etc. will be problems once humanoid robots are indistinguishable from human-borns and clinic-borns.

And then there's the problem of increasingly mechanized humans because of injuries or whatever, especially those who have command arrays as well as memory arrays interfaced to their brains. This technology is surely headed for deeper and more comprehensive proxy capabilities in or in place of the brain.

But there's a kind of self-reference problem as well. If a machine behaves in ways that require being described as intelligence, thinking, deliberation, reflection, or even being upset, confused, or in pain, then there's no justification for denying that the machine has consciousness, because that behavior is the only evidence we have for saying that other humans have consciousness.