At the recent Society of Catholic
Scientists conference,
Peter Koellner gave a lucid presentation on the relevance of Gödel’s
incompleteness results to the question of whether thought can be
mechanized. Naturally, he had something
to say about the Lucas-Penrose argument.
I believe that video of the conference talks will be posted online soon,
but let me briefly summarize the main themes of Koellner’s talk as I remember
them, so that the remarks I want to make about them will be intelligible.

Lucas and
Penrose argue that Gödel’s results show that thought cannot be mechanized in
the sense of being entirely captured by the algorithmic rules of a Turing
machine. It is sometimes supposed that
Gödel himself took his Incompleteness results to show this. But as Koellner pointed out, what Gödel
actually thought was that the Incompleteness results entail a disjunctive
proposition to the effect that

*either* thought cannot be mechanized *or* there are mathematical truths that outstrip human knowledge. Of course, the disjunction is not exclusive. It could be that both disjuncts are true. The point, though, is that Gödel claimed only that his results show that at least one of them must be true. They don’t by themselves tell us which.
Another way
to think about it is as follows. There
is (1) the realm of mathematical truth, (2) the realm of human thought, and (3)
the realm of matter. Gödel thought his
Incompleteness results show

*either* that the first is irreducible to the second *or* that the second is irreducible to the third (or, again, maybe both). But they don’t show which.
It is
important to emphasize that what is at issue here is what can be

*formally demonstrated*. Gödel did think that thought cannot be mechanized, but Koellner’s point is that Gödel did not claim that *that* could be formally demonstrated. He claimed only that the disjunctive statement could be. Was Gödel right about that much? Koellner thinks so. More precisely and cautiously, he thinks that *if* we confine ourselves to the question of what we can know by way of mathematical truth, and *if* we work with what he takes to be a plausible formalization of the notion of knowledge, *then* Gödel can be said to be correct that his disjunction can be formally demonstrated.
Lucas and
Penrose claim that the first disjunct of Gödel’s disjunction (to the effect
that human thought cannot be mechanized) can also be demonstrated, but Koellner
argues that their attempts to show that fail.

There’s more
to Koellner’s presentation than that, but this suffices for present
purposes. The issue I want to consider
is the idea of a formal demonstration of the proposition that human thought
cannot be mechanized. Let us grant for
the sake of argument that Lucas and Penrose fail to provide such a thing. The question I want to ask is:

*If* thought cannot be mechanized, should we *expect* to be able to provide a formal demonstration to that effect?
The whole
idea seems fishy to me, though it is difficult to point

*precisely* to the reason why. But blogs exist in part for the purpose of airing inchoate or half-baked ideas, so here goes.
For human
thought to be mechanized would entail that it could be entirely captured in a
formal system. So to give a formal
demonstration that thought cannot be mechanized would entail giving a formal
demonstration that thought cannot be entirely captured in a formal system. And
of course, it would be a human thinker who would be giving this
demonstration. It is the conjunction of
these elements – the idea of human thought giving a formal demonstration that
not everything about human thought can be captured in a formal system – that
seems fishy.

But the way
that I’ve just stated it is not precise enough to show that there really is any
inconsistency or incoherence here. For
there is nothing necessarily suspect about the idea of a formal result having
implications about the limits of formal results in general – that is what Gödel’s
Incompleteness theorems themselves do, after all. And human thinkers are the ones who discover
these implications. So what’s the
problem?

The problem would
be something like this. If you are going
to produce a formal result concerning

*all* human thought, it seems that you would first have to be able to *formalize* all of human thought, so that you would be able to say something about human thought *in general* in the language of your formal system. But in the case at hand, that means that you would have to be doing for all of human thought – namely, formalizing it – exactly what your purported formal demonstration is supposed to be showing cannot be done. In other words, a formal demonstration to the effect that human thought cannot be mechanized would presuppose that all human thought can be mechanized. In which case the idea is incoherent.
If this is
correct, then if human thought cannot be mechanized, we should expect that
we should

*not* be able to give a formal demonstration that it cannot be.
Notice that
this does

*not* mean that we should expect not to be able to give a *compelling philosophical argument* for the conclusion that thought cannot be mechanized. On the contrary, I think there *are* compelling philosophical arguments to that effect (for example, Kripke’s argument, which is something I talked about in my own SCS talk). The point is rather that we should expect not to be able to give for this conclusion a *formal demonstration* of the kind familiar from mathematics and formal logic. The limitation in question concerns only *that particular kind* of argument.
But again, I’m
just spitballing here.

Related posts:

Accept no imitations [on the Turing test]

From Aristotle to John Searle and Back Again: Formal Causes, Teleology, and Computation in Nature [a 2016 article from the journal *Nova et Vetera*]
I've heard the following take on the implications of Gödel: one could build a mechanized construct ultimately capable of humanlike thought ... but by the time it became so capable, its workings would necessarily have passed beyond anything that qualified as mechanistic. Even if some human engineer started the mechanical ball rolling, the enormous complexity of the end result would be formally unanalysable.

Whether built or begotten, such an intelligence would end up as mysterious to others, and to itself, as human intelligence already is.

Not necessarily. To make a formal demonstration that not all human thought can be formalised, it seems all one would need to do is simply *attempt* to formalise all human thought, and, upon seeing the attempt fail, conclude that not all human thought is formalisable. This would, though, be more akin to a proof by contradiction or exclusion than a positive review of all human thought.

I was thinking the same thing. Though I'd add that we need reason to think that all formalizations would fail and not *only* our own attempt.

But how are you determining that you've failed to formalize all human thought? If you mean that you might attempt to formalize thought but find that your formalization doesn't capture something you think it should, then you are not supplying a purely formal demonstration.

How about this?

If all human thought is formalizable, then there exists a system S such that it is that formalization. Let p be the proposition "for all q, if q is a human thought, then the formalization F(q) is in S". If q is equal to F(q), then it follows that for all q, if q is a human thought, then q is in S. Therefore, q and ~q are in S.

If q is not equal to F(q), then F(q) and F(~q) are in S. It follows that F(S) is in S. But if F(S) is in S, then S is in F(S). Therefore, either by antisymmetric property F(S) is in S implies S is not in F(S), or S is equal to F(S) and S is not equal to F(S).

Greetings Prof. Feser,

Although I didn't attend this conference and can't speak to what Gödel himself thought about these matters, I've spent a fair amount of time reflecting on the Lucas-Penrose argument.

As I see it this argument successfully establishes that human mathematical thought cannot be mechanized in the relevant sense so long as one is willing to grant the proposition that if human mathematical thought could be represented in terms of a formal system, then the principles underlying that system would be "knowably consistent" (following Penrose); in other words, we would recognize them as being logically consistent. Why think such a thing? Well, if one is inclined to think that the principles of modern mathematics approximate what such a formal system might be like, then the fact that the former is knowably consistent suggests that the latter would be as well.

But in any case the Lucas-Penrose argument does not run afoul of the problem you raise in this blog post since it turns on a claim about what human mathematical thought would be like assuming it could be formalized, which is not the same thing as presupposing that human mathematical thought could actually be formalized/mechanized.

Ben

I think that we need the Ofloinn's take on the video when it comes out.

Grodrigues also. I've seen him discuss Godel's Theorem here before, specifically regarding Lucas's take on its implications. Don't believe he was sold on it, if I remember right.

Do you actually have to produce a formal systematization of human thought to prove that no such thing exists? Wouldn't a non-constructive proof do the job?

@reighley:

Delete"Wouldn't a non-constructive proof do the job?"

Non-constructive proofs are just as "formal" as constructive ones, unless by "non-constructive" you have something specific, and quite different from its usual mathematical meaning, in mind.

It seems like Dr. Feser's argument depends on actually developing a complete formalization of thought. It seems like you could get by with an incomplete formalization. When he writes that "you would first have to be able to formalize all of human thought", the word "all" carries a lot of weight. But to prove the impossibility, it seems like formalizing only some of human thought would be enough if you could use it to derive a contradiction. The proof I imagine would look like:

(1) state the properties a formal system has (pretty well agreed upon)

(2) state, formally, at least one property that human thought must have. (frankly, I have no idea what properties human thought must have)

(3) assume the existence of a system with properties both (1) and (2)

(4) derive a contradiction.

The lemma (2) seems weaker than formalizing all of human thought. You need to create a formalism that applies to all of human thought, but that is obviously possible (we can think of a trivial formal system that applies to everything, human thought included). You do not need to create a formalism from which all properties of human thought can be derived, in order to prove that no such system exists. Which seems to me to be Dr. Feser's thesis here.
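The proof shape described above is a standard reductio. Schematically (the notation here is mine and purely illustrative, not the commenter's):

```latex
\begin{align*}
&\textbf{Assume:}\quad \exists S\,\big(\mathrm{Formal}(S)\ \wedge\ \mathrm{ThoughtProp}(S)\big)\\
&\textbf{Derive:}\quad \bot\\
&\textbf{Conclude:}\quad \neg\,\exists S\,\big(\mathrm{Formal}(S)\ \wedge\ \mathrm{ThoughtProp}(S)\big)
\end{align*}
```

where $\mathrm{Formal}(S)$ abbreviates the properties in (1) and $\mathrm{ThoughtProp}(S)$ the property in (2); as noted, the hard, unfilled step is stating $\mathrm{ThoughtProp}$ formally in the first place.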

I would agree that "thought cannot be mechanized in the sense of being entirely captured by the algorithmic rules of a Turing machine," but this doesn't mean we can't build robots that have minds fully like ours. We would just need to build a neural network system rather than an algorithmic system.

The difference is that an algorithm (or a formal demonstration) is a description of the system, whereas a neural network is the actual functioning system itself. We can still have functioning minds without exhaustively detailed rules describing our minds' functioning.

Re:

"but this doesn't mean we can't build robots that have minds fully like ours. We would just need to build a neural network system rather than an algorithmic system."

This is roughly John Searle's position. And it rests on the (I think correct) idea that the computationalist theory of mind is false.

Is exhaustively detailed rules really what we are looking for though? It seems like we all have somewhat different personalities, likely somewhat different minds. Which is to say : obviously we can't build a robot with a mind _fully_ like ours, since at best it could only be _fully_ like just one of ours. I feel like there is a category error being made here in the distinction between "thought in general" and "the behavior of a particular neural network".

DeleteI think that the point of mechanizing thought is that any material thing (neural network, Turing machine, etc.) is necessarily indeterminate. I am using the Philosophy of Mind definition of indeterminate. That is to say, any material thing cannot de determined to be representing either addition or quaddition (given by Kripke). Human minds can be determined in this regard (we have first hand experience of such). Therefore human minds are immaterial and cannot be mechanized by any material thing.

But artificial neural networks are usually implemented in software on computers, and can therefore be implemented on a Turing machine, since a Turing machine is a generalized model of a computer. So, it seems to me that if a neural network can implement a mind like ours, then so can a Turing machine.
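To make the point concrete, here is a toy sketch (the numbers and the two-unit layer are made up, and nothing here is a model of a mind): a neural network's forward pass is plain deterministic arithmetic, exactly the kind of step-by-step procedure a Turing machine can execute.

```python
# Toy illustration: one fully connected neural-network layer is just
# multiplications, additions, and a fixed nonlinearity -- deterministic
# arithmetic that any ordinary computer (hence any Turing machine) can run.
import math

def forward(inputs, weights, biases):
    """Compute one dense layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        s = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid squashes to (0, 1)
    return outputs

# Two inputs, two hidden units; every step is rule-governed arithmetic.
hidden = forward([0.5, -1.0], [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])
print(hidden)
```

Running the same inputs through the same weights always yields the same outputs, which is the sense in which the network is algorithmic even though it is not written as explicit rules about its subject matter.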

@g_pepper There are a couple of technical differences between brains, computers, and Turing machines. One is that Turing machines have access to infinite memory and computers do not (brains probably do not). Another is that brains probably have access to sources of random noise, whereas computer programs and Turing machines must simulate randomness with a pseudorandom number generator. Do either of these observations have an impact on a theory of the mind?

Delete"

DeleteDo either of these observations have an impact on a theory of the mind?"For the first, in theory yes, in practice, no. Humans use external storage. So we can make our tape as big as our technology and the universe allows.

Turing machines can simulate randomness (but, as von Neumann said, "Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.") However, peripherals can provide sources of true randomness, and Turing machines can incorporate peripherals. See The Toaster Enhanced Turing Machine.
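Von Neumann's "state of sin" is easy to exhibit in a few lines: a pseudorandom generator is perfectly deterministic arithmetic that merely looks random. This is a minimal textbook linear congruential generator (the constants are common published values), not any particular library's implementation.

```python
# A minimal linear congruential generator: deterministic arithmetic
# whose output merely *looks* random. Same seed, same sequence, always.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield pseudorandom floats in [0, 1) from a fixed recurrence."""
    state = seed
    while True:
        state = (a * state + c) % m  # the entire "randomness" is this line
        yield state / m

gen = lcg(42)
print([round(next(gen), 3) for _ in range(5)])
```

Reseeding with the same value reproduces the sequence exactly, which is precisely what distinguishes simulated randomness from a true physical noise source.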

Infinity plays an important theoretical role in both Godel's theorem and Kripke's plus/quus argument. If there are thoughts I can only have with pencil and paper in hand, are they still thoughts?

Sure. The tape is not the thought. The tape is just the storage.

Proving a negative like this is problematic anyway.

On a side note, what do you mean by mechanization? Surely all human thought is represented by processes in the brain, but not entirely contingent on the limitations of matter (neurons, connections and so forth)? That something has a qualitative aspect, or that the human mind has immaterial powers of abstraction does not preclude the possibility of thought being fully represented, but not fully contained or dependent upon matter?

If you mean thought cannot be fully represented in a system then again I do think it already has been. Language and logic and so forth all can offer systematizations of human thought.

I am open to the possibility that a substantial form can be derived from human actions to create a thinking 'machine' in the sense of hard AI. Whether such a thing could have abstract reason is obviously a more complex question. We can after all create 'new' animals and creatures already. It isn't inconceivable that some form of programmed 'wetware' could 'think' with a uniform telos.

"mechanized" means "reproducible". If a form is the result of a production, like the product is the result of a function, then this production can be "mechanized". So basically "mechanized" is one kind of repeatable process where the steps are always the same, so the forms even if the content of the forms can change. So the process a + b / 2 = c it's a 5 step process where each step is a form and every form has a content. The content of "a", "b" and "c" may change, the content of "/" and "2" doesn't change. This means that if I don't know which kind of form is "a" I cannot know what is the content of "a", I can see a content but I don't know... its form. Now the form of "a" it's the thought, it's a form of course but it cannot be "mechanized" because what is mechanizable could be the process and not the form that the process use. To suppose a "mechanized form" is to start a "regresso all'infinito" because that "mechanized form" is a process itself so that "mechaniezed form" uses other forms and so on. Sorry for the horrible english but for reasons unknown to me I cannot access that mechanized process named "google translator".

That is helpful.

DeleteBut is this to do with determinism or 'essential content versus symbolic representation'?

Mechanized does not mean "reproducible". The spinning cage that tumbles the numbered balls used to pick winning lottery numbers is a mechanism -- but the odds of getting the same result are virtually impossible.

Delete"The content of "a", "b" and "c" may change, the content of "/" and "2" doesn't change."

But that's only because you've pre-defined it to be that way. There is no distinction between data and code -- other than convention -- so it's quite possible to have self-modifying systems.

@AND both of them: "/" and "2" don't change, so it's "determinism" (@A: yep, the content of those two guys is predetermined), while "a", "b" and "c" are not. All 5 forms have to do with essential content "vs" representation; the fact is that the essential relation that binds the steps cannot be mechanized.

@A, in the spinning cage case everything is mechanized but the "result" of the ball. So the process is entirely reproducible, as is the "ball" that pops out of the cage, while the number on it is just predetermined. That means an old point: randomness is not a process.

Delete“... “/" and "2" doesn't change ...”

There is nothing that prevents them from changing. In homoiconic systems, there is no difference between code and data, so the code can change itself. It's no different, in principle, from what the “plastic” brain does.

“... randomness is not a process... “

If you say so, but randomness is an essential part of nature. Our theories of nature are built around describing that randomness (quantum mechanics), powerful computer algorithms exploit randomness; any theory of mind will have to include it (because changes to brain wiring include randomness).

@a the code changes itself only if it's coded to change itself; that means that there is something in the code that doesn't change. I haven't said that randomness doesn't exist, I've said it can't be a process. But from saying that it can't be a process it doesn't follow that it can't be used by a process; it just means that you can't have a randomless process.

Delete"@a the code change itself only if it's coded to change itself"

Sure. But a biological computer changes its code as new neural connections are made. Changes to wiring can be changes to code.

"...it just means that you can't have a randomless process."

Ok, but what does this have to do with the nature of thought? Brain wiring is both determined (by genetics) and random (by nature).

@a biological or not, it's coded; that means that there's a coder, and that means that you can't have a "regressum ad infinitum" of coders, which means that first there's a coder, not the code. What does randomness have to do with the nature of thought? Absolutely nothing: if it's thought, it's not random; there's a reason for that thought, or a nature if you like, but not a random one.

Delete"

Deletethat means that there's a coder"You just moved the goalposts.

"

What randomless has to do with the nature of thought? Absolutely nothing"A great deal, actually. The arrangement of the wiring in your brain is partly formed by the randomness of Nature, and the arrangement of the wiring in the brain is the program. Randomness also affects the electrical activity in the brain, since electrons behave quantumly.

@a I don't believe that I've "moved the goalposts"; rather, it seems to me that your argument begs the question. If the code, which is a form, is randomly generated, then there's no need for a coder; but the topic here is exactly whether forms can always be the result of a mechanized process, and whether a mechanized process can be totally random. The rational answers are: no and no. Because if forms are always the result of a mechanized process, that means that there is a series of predetermined steps; each step and all the connections between the steps are forms, so if each of them is the result of a mechanized process you have another series of steps, and so on ad libitum.


Delete"If the code, that is a form, is randomless generated..."You've moved the goalpost from what a form does with how a form is made.

A tree is a form. If I said, "a tree does this and that..." you wouldn't be arguing with how the tree was made.

A computer is a form. What computers do is explained by the theory of computation.

A brain is a form. One side claims that a brain can do things outside of our theory of computation; the other side claims it cannot.

So far, a brain has never done anything outside of our theory of computation. Those who try to use Gödel to say that it can, and has, don't understand computation in general and Gödel in particular.

"Because if forms are always the result of a mechanized process that means that there is a series of predeterminated steps..."

That's simply (and demonstrably) not true. New, functional, forms can arise from random steps.

"The rational answers are: no and no."

The problem with this particular argument is the mistaken idea that randomness is disconnected from order. It isn't. Yes, random events, in isolation, are unpredictable. But random events, taken en masse, follow a probability distribution. That's why if you drop balls down a Galton board, you'll get a shape that follows the "normal" distribution (aka a "bell curve"). For whatever reason, Nature imposes order on randomness.
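The Galton-board claim is easy to check in a few lines (a simulation sketch, with illustrative names): each ball makes a series of independent left/right bounces, yet the bin counts pile up into a bell shape.

```python
# Galton-board simulation: individually random bounces, collectively
# a binomial distribution that approximates the normal "bell curve".
import random
from collections import Counter

random.seed(0)          # fixed seed so the run is reproducible
ROWS, BALLS = 10, 10000

def drop_ball():
    # Each peg sends the ball left (0) or right (1) at random;
    # the final bin is just the count of rightward bounces.
    return sum(random.randint(0, 1) for _ in range(ROWS))

bins = Counter(drop_ball() for _ in range(BALLS))
for k in range(ROWS + 1):
    print(f"{k:2d} {'#' * (bins[k] // 100)}")
```

The printed histogram bulges in the middle bins and thins toward the edges: no single bounce is predictable, but the aggregate shape is.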

@anon 2018.06.28.8:50am

If the question is the possibility of a totally mechanized form, then it's obvious that we need to know whether all the forms are built. Since "built" implies a building process, it's obvious that besides the possibility named "all the forms are built" we need to know what a form does. Now, since I don't see any "moving the goalpost" in addressing the topic along those two lines, the reasons that I've proposed for the impossibility of "all the forms are built" still seem to me valid. Speaking of randomness, please note that personally I don't pose an absolute disconnection between order and chance; what I claim is that a process cannot be a casuality (chance), so a casuality cannot be a process. The reason for that is evident: process and chance are essentially incompatible; if there's a process there isn't chance, and vice versa. Now, from that claim it doesn't follow that in a process there can't be random events, because even if random events are essential for the result of the process they cannot essentially be the process: keep in mind that a process is not only made but is also running. Is it possible that a randomly running process exists?

So let me start from the back and move forward.

"is possible that exists a randomly running process?"

Why not? Given what we know about the quantum foundations of Nature, we can say that the universe is a randomly running process. Below the level of our everyday experience is a froth of random events.

"what I claim is that a process cannot be a casuality so a casuality cannot be a process,"

Ok, I guess. But you can't have a process without causality (otherwise the process wouldn't process), and a process can affect its environment (i.e. cause other things). Really, the only question is whether process/causality is like Escher's Drawing Hands, or whether it's like a domino effect. But I don't see how this is relevant to whether or not the human mind is described by the theory of computation, or whether the human mind goes beyond it.

"If the question is the possibility of a totally mechanized form then it's obvious that we need to know if all the forms are built."

Ok. That leads us back to the heart of this discussion. Do Gödel's theorems imply that the human mind can do things that are not possible under the lambda calculus? And the answer is, clearly, no. That is, if the human brain can do things outside of the lambda calculus, then Gödel doesn't help show that. Another line of argument is needed. The human brain can't decide the truth of "the liar paradox". Neither can the lambda calculus. There are problems that can't be solved with one set of information that can be solved by a different set. That's all Gödel formalized.

Why must we assume that the "realm of matter" must be formalizable?

Because that's our experience of matter. We experience matter with a form, we don't have any knowledge of a "pure" matter, of matter that is just matter.

DeleteThis comment has been removed by the author.

ReplyDeleteA Goedel sentence has a perfectly clear meaning - that a certain logical system can't produce a proof for it. Such sentences don't claim to be

Deletetrue- in fact, no theorem in a formal system can do that (see Tarski's theorem.)And the whole point of Goedel's first incompleteness theorem is that no logical system that includes Peano arithmetic can satisfy (2), unless it's inconsistent and thus can prove any statement whatsoever, even falsehoods. That's what mathematicians

meanby "incomplete".@Cogniblog:

Delete"All Gödel sentences are meaningless. Think about it: what does it possibly mean for a sentence S to be "true but unprovable" in a logical system where (1) logical implication implies logical deduction [The Deduction Theorem]"

The "true" in "true but unprovable" does not mean "true in every model" (or equivalenty, by Gödel's completeness theorem, presumably what you call the "deduction theorem", provable from the axioms) but true in a disquotational sense. If you were correct, Gödel would have proved, with his completeness and his incompleteness theorems, an inconsistency.

To my mind, Gödel's Theorem is related to the halting problem and the halting problem comes closer to proving that thought cannot be mechanized, though, perhaps not amounting to a formal proof. https://orthosphere.wordpress.com/2018/05/17/the-halting-problem/

Do you think a human mind can answer in all cases the question which the halting problem asks? Does this program halt given these inputs? I am not so sure.

If a human could solve the halting problem then that would be a proof-by-demonstration that thought isn't mechanizable. But you are no more able to solve the halting problem than a computer is.

My own two cents (after reading both Richard Cocks's and Floinn's blog posts on the subject) is that there are some interesting philosophical implications here.

a) In theology we can never know everything about God; there will always be something new to learn, hence Heaven will NEVER be boring (Lewis, I think, cottoned on to this at the end of the Last Battle)

b) The proofs of God's existence can only ever be approximate in the knowledge they can give us. I think that Aquinas experienced this in the vision after which he stopped writing the Summa (I think that Mr Cocks is too hard on the Angelic Doctor here).

Re: reighley and Anonymous: The halting problem effectively asks if there is an algorithm for finding all and only algorithms. The answer is no. Humans can test the validity and truth of algorithms, but an algorithm cannot - hence human thinking is not purely algorithmic and the halting problem does not apply to us. If the halting problem could be solved, then an algorithm could be found for solving all the outstanding problems in mathematics. There is no such algorithm. Nonetheless, we humans will continue to solve these outstanding problems. But a proper answer can be found in my not too long article mentioned above and in this one (both drawing heavily on Roger Penrose): https://orthosphere.wordpress.com/2018/05/19/godels-theorem/

Oh, and I like Just another Catholic’s comments - including about Aquinas :).

“The halting problem effectively asks if there is an algorithm for finding all and only algorithms”

Not quite. The halting problem asks if there is an algorithm which can tell if any algorithm will halt. And the answer is no. Furthermore, humans can’t escape this any more than a computer can. I can easily give you any number of algorithms where you can’t answer whether a given algorithm halts. I can even use the trick Gödel used in his proof.

Yes quite! An algorithm is an effective procedure - a step-by-step method for reaching a determinate result. An algorithm that does not halt is not an algorithm at all. An impossible halting machine and whether it halts or not was just Turing's way of answering David Hilbert's original question which is “Is there some mechanical procedure [an algorithm] for answering all mathematical problems, belonging to some broad, but well-defined class?” You're not quite following what I have written above. The halting problem proves that the outstanding problems of mathematics cannot be solved algorithmically. E.g., Goldbach's conjecture or Fermat's last theorem. The halting problem is not a proof that these problems will never be solved at all. Hence, mathematicians must go beyond mere algorithms in their thinking. Mathematical truth and validity is not determined algorithmically. Symbolic logic is about relationships between propositions. The soundness (truth) of valid arguments is not settled by logic, but by comparing it with the real world. Something comparable is going on with mathematical truth.

I should probably add for clarity that there is no one algorithm for finding all other algorithms. Fermat's last theorem might be solved via an algorithm, but the process of finding that algorithm won't be via an algorithm.

Can you anon? Present algorithm and input, settle this once and for all!

Delete"An algorithm that does not halt is not an algorithm at all."

Well, yes, by definition. But an algorithm is a subset of a computation -- and computations don't have to halt.

"The halting problem proves that the outstanding problems of mathematics cannot be solved algorithmically."

That's simply not true. If a search of a large state space finds the result, it halts and outputs the answer. In which case that particular problem was solved by an algorithm. But, until then, it keeps looking. Just like humans do.

reighley: "Can you anon? Present algorithm and input, settle this once and for all!"

I can present what I claimed I could present, namely, I can give you code where you can't answer whether or not it will halt:

function reighley () {
    if (reighley_says_that_this_function_will_halt())
        loop;
    else
        halt;
}

That's the "trick" Godel used. It isn't any more mysterious than that.
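For what it's worth, the diagonal trick can be sketched as runnable code (the names are illustrative, and the "actual behavior" is read off the contrarian's definition rather than executed, since the real construction would loop forever): whatever fixed prediction a candidate decider makes about its own contrarian, that prediction comes out wrong.

```python
# Sketch of the diagonal trick (illustrative names, not a real oracle).
# make_contrarian builds the analogue of function reighley(): a program
# that does the opposite of whatever the predictor says about it.

def make_contrarian(predictor):
    def contrarian():
        if predictor(contrarian):
            while True:   # predicted to halt, so loop forever
                pass
        # predicted to loop, so halt immediately
    return contrarian

def actually_halts(predictor):
    # What the contrarian in fact does, read off its definition:
    # it halts exactly when the predictor says it will not.
    return not predictor(make_contrarian(predictor))

# Any predictor with a fixed verdict is wrong about its own contrarian.
for predictor in (lambda prog: True, lambda prog: False):
    prediction = predictor(make_contrarian(predictor))
    print(prediction, actually_halts(predictor))  # the two always disagree
```

The construction doesn't need anything exotic: it only needs the contrarian to be able to consult the predictor about itself, which is exactly the self-reference Gödel's trick supplies.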

The problem is that some people believe that the function reighley_says...() cannot be implemented. Therefore reighley is not a Turing machine. Is the function reighley_says...() computable? This is the very question at hand.

Delete"

DeleteIs the function reighley_says...() computable? This is the very question at hand."Sure. Suppose you give an answer. Then, in theory, we could have recorded the flow of electrons in your brain, traced their paths through the neurons, and reverse engineered the computation.

Suppose you don't give an answer. Then we could do the same thing and see where you stop and/or go into a loop.

The point I was trying to make is that it is not at all clear that this reverse engineering procedure is possible. That's what the whole argument is about. You beg the question a bit by making that step.

There's no doubt that it's incredibly difficult in practice, even for systems simpler than the brain. I know, I've done it for many years. But being unable to do something in practice is no argument against being able to do it in theory, especially when the only difference is a more complex physical structure.

Too, all the function is really asking is "can I think what reighley thinks?" The only way that's not so is if you are able to think things that no other humans can think. That's doubtful, based on thousands of years of experience. So that would only be the case if you aren't willing to communicate your thoughts to others, or if you have a secret oracle that only you have access to. But if you claim to have a secret oracle, then all "what_reighley_thinks()" has to do is to return a random answer. If you always correctly predict the outcome, then you'll win all sorts of prizes.

People who have known me for years assure me that what_reighley_thinks() frequently returns random results.

I read your article, but I still think you have introduced the proposition that "humans can test the validity and truth of algorithms" without adequate demonstration. Some algorithms, of course, but the theorem requires ALL algorithms and this is a tall order.

Hi, reighley: neither machines nor people can determine in all instances what is or is not an effective procedure (algorithm). Goedelian propositions cannot be proved algorithmically - they are not decidable by the techniques permitted by the axiomatic system - and yet we humans can see their truth and thus the consistency of the system. In English, Goedelian propositions say "this statement is not provable in this axiomatic system." You and I can see that that is true. If I could prove it, then I would have proved that it is not provable, because that is what the proposition states. If I cannot prove it, then this agrees with the statement that I can't prove it. Hence, it is true regardless. But this truth is not decidable from within the system and is not provable by an effective procedure. The point under discussion is whether human thought is mechanizable. Answer - not all of it! The fact that we can go beyond machine "thinking" does not make us omniscient. There is still plenty we don't know.

I do not think your argument shows that human thought is not mechanizable. I think it shows that natural language is inconsistent as a formal system. For my part I hold that the human mind is actually finite, and so is not subject to Godel's theorem, because it is incomplete (any complete system would permit arbitrarily long sentences). A formal system must be either incomplete or inconsistent. Alas, we sinners are both. Indeed, the algorithm to determine if a thought process will halt is trivial.

This is completely wrong.

Delete"

"they are not decidable by the techniques permitted by the axiomatic system - and yet us humans can see their truth and thus the consistency of the system."

In the case where we can "see" the truth of a statement that cannot be proved within an axiomatic system, it's because we can prove it in a system with more axioms. See, for example, the MU puzzle.

"

"English Goedelian propositions say "this statement is not provable in this axiomatic system." You and I can that that is true."

That's exactly what you can't see. Something that isn't provable is neither true nor false. It's akin to the Liar paradox: "this statement is false". The statement can't be false, because then it would be true. It can't be true, because then it would be false. It's undecidable.

The Liar paradox shows how a consistent system contains undecidable statements. The MU puzzle shows how statements that can't be proved in one axiomatic system can be proved in a system with more axioms.
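For readers unfamiliar with it, the MU puzzle (from Hofstadter) asks whether "MU" can be derived from "MI" using the four MIU rewriting rules. It cannot, and the reason is only visible from outside the system. A sketch of the system, assuming the standard rule set; the bounded search merely illustrates what the invariant in the comment proves in general:

```python
from collections import deque

def miu_successors(s):
    """Generate every string derivable from s in one MIU-system step."""
    if s.endswith("I"):
        yield s + "U"                      # rule 1: xI -> xIU
    yield s + s[1:]                        # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            yield s[:i] + "U" + s[i+3:]    # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            yield s[:i] + s[i+2:]          # rule 4: UU -> (nothing)

def reachable(limit=10):
    """Breadth-first search of MIU theorems up to a length bound."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        s = queue.popleft()
        for t in miu_successors(s):
            if len(t) <= limit and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# "MU" never appears: the count of I's is 1 (mod 3) in every theorem,
# since the rules only double it or subtract 3, so it can never reach 0.
print("MU" in reachable())
```

The step from "search inside the rules" to "argue about the rules via the mod-3 invariant" is exactly the move from proving within a system to seeing a truth about the system from outside it.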

You haven't gone beyond machine thinking, except in practice. But that's because you're currently more complex than current machines. Do something that, in principle, a machine can't do. Solve the halting problem. Compute Busy Beaver numbers.

reighley: I am just stating the natural language version of the arithmetic.

Anonymous: Love it! "This is completely wrong." What a delightful manner you have. When you quote from my replies please don't garble them. Just copy and paste. Try reading Roger Penrose's "The Emperor's New Mind" or my article to see where you are going wrong. If you try to incorporate Goedelian propositions in a new larger axiomatic system you just generate more unprovable Goedelian propositions.

Anonymous: "this statement is not provable in this axiomatic system" is not akin to "this statement is false" - the former is true on any reading, the latter is undecidable on any reading.

Arithmetic allows a version of "this statement is unprovable", but it seems to me you also offered a proof of that very statement. At least you claimed to know that it was true.

DeleteRC: "

Delete"this statement is not provable in this axiomatic system" is not akin to "this statement is false" - the former is true on any reading,..."And that's wrong. The former is neither true nor false -- because you made a claim but offered no proof!

"This statement is not provable" requires proof. If you prove it, the statement is false. If you show you can't prove it, the statement is true. If you can't offer a proof either way, it's undecided. If you prove it cannot be proved then, like the liar paradox, it's undecidable.

Let "this statement" be "x**n + y**n != z**n" with the usual constraints on x, y, z, and n. Then an example of "this statement is not provable..." would be "Fermat's last theorem is not provable...". But Wiles managed it after 400 years.

For a statement to be demonstrated to be not provable, the statement has to be self-referential. And it has to be self-referential in such a way that any attempt to prove it gets stuck in a loop. Like the liar paradox.

Anonymous: If I prove it can't be proved, it's true, since that is what the statement says. It can't be proved in the axiomatic system which generates it, but I can "see" that it is true anyway, as Roger Penrose writes. Goedel's Theorem and the halting problem prove that legitimate thinking cannot be reduced to algorithmic certainty - there is an ineliminable role for intuition and informal thinking. Stanley Jaki writes: “The fact that the mind cannot derive a formal proof of the consistency of a formal system from the system itself is actually the very proof that human reasoning, if it is to exist at all, must resort in the last analysis to informal, self-reflecting, intuitive steps as well. This is precisely what a machine, being necessarily a purely formal system, cannot do, and this is why Gödel’s Theorem distinguishes in effect between self-conscious beings and inanimate objects.” As I have written before, the halting problem proves that there is no algorithmic way of solving the outstanding problems of mathematics - once and for all.

@Anonymous:

Delete"For a statement to be demonstrated to be not provable, the statement has to be self-referential. And it has to be self-referential in such a way such that any attempt to prove it gets stuck in a loop. Like the liar paradox."

This is false. The Continuum Hypothesis is independent of ZFC (i.e., neither it nor its denial follows from the axioms) and yet it is not a self-referential statement.

Gödel's incompleteness is typically proved by constructing self-referential statements akin to the liar paradox, so-called Gödelian sentences, but there are other proofs available that do *not* rely on such -- this is standard mathematical logic stuff.

WHYyyyy is there a catholic scientist conference. Please stop this segregation already; if there was a protestant one it would be declared unconstitutional and immoral and discriminatory and exclusive etc etc etc.

I do see god/bible as relevant in science but why not just Christian or religious??

Catholicism historically was not relevant to scientific progress, as it was the protestant motivations that raised the intelligence quotient of the common people and then from this an upper class etc. curve on scientific progress.

Otherwise a catholic civilization would still have us in the 14-1500's. Not super bad, but way behind and barely ahead of non christian civilizations.

Science is used today to attack God/genesis.

That's where all believers, scientifically interested, should unite in a common forceful movement.

please no segregation!

It's funny how you complain about segregation, suggest that all believers should unite in a common movement, and yet perpetuate myths about Catholicism "historically not being relevant to scientific progress" and even how "a Catholic civilization would still have us in the 14-1500s". Did you forget to take your medicine today or what? Do you want unity or division after all?

And why this random diatribe over a Catholic scientist conference? Why would a protestant one be deemed "unconstitutional"? The only reason there is a Catholic scientist conference is because there are lots of Catholic scientists who enjoy discussing the relationship between science and the Catholic faith; and the Catholic faith is rather more doctrinally specific (and also philosophically, having the known traditions of thomism, etc) so it is more comfortable for these scientists to discuss with their peers on these issues.

People can make a Christian scientist conference too. Or a Protestant one. Just like how there are Evangelical journals, as well as Catholic journals, etc. Sheesh.

Naw. It's not that way. It's segregation in a subject unrelated to religion. There never was or would be a protestant, or evangelical, science conference. Why?? Sheeesh. Plus it's absurd.

Religion has nothing to do with science except where some try to use science to deny God/Genesis. Very special cases.

Further, I never find 'Catholic' people very interested in anything religious, much less bringing it to science.

INSTEAD what it is IS Catholic scientists feeling the pressure from science to reject certain catholic/Christian doctrines. Like God/soul existence etc etc.

Well then unite with all Christian sciency types for a united defence against this problem and fix it or bring correct status etc.

No segregation; and it's suspicious they want nothing to do with bible believing protestants or any protestants.

North America is already too segregated, to the loss of the common people and particular peoples.

There is a time to bitch.

"Religion has nothing to do with science"

Enjoy the road to atheism.

Religion has nothing to do with science. Science is a man's idea, not God's.

Conclusions about God and Genesis are relevant and present, yet these are conclusions that, for Christians, are in place before scientific investigation.

When atheists say SEPARATE religion from science, they mean exclude religious conclusions BEFORE/during the investigation of nature called science. Very different.

In his superb book The Divided Brain & The Making of the Western World Iain McGilchrist describes the process by which the Western mind became increasingly mechanized. Or how the presumptuous left-brained Emissary usurped and diminished the intrinsically wordless Wisdom of the right-brained Master.

We now all "live" in a spirit-crushing tower of left-brained babel/babble - with no exceptions.

That's a great book now called "The Master and His Emissary." And you're quite right. Goedel and Turing, somewhat against their own wishes perhaps, have helped show the limitations of the left hemisphere.

Is this Luke Breuer by any chance?

@Ed Feser: "The problem would be something like this. If you are going to produce a formal result concerning all human thought, it seems that you would first have to be able to formalize all of human thought, so that you would be able to say something about human thought in general in the language of your formal system."

Perhaps you could prove formally that no formal system had property X, where X is a property that human thought manifestly has.

Your argument suggests, though, that you couldn't prove *formally* that human thought has property X. We would have to have some other basis for thinking that human thought has property X.

But formal systems can formally define properties without possessing those properties. (An example of such a property is the one in Gödel's result: No consistent formal system that contains a certain amount of arithmetic can prove its own consistency. But such a system *can* formally define the property of being "a consistent system that proves its own consistency".)

So there is no obvious obstacle to an argument showing formally that no formal system has property X. The formal problem would be in showing that humans do have property X.

Are you saying that human thought, minus the unformalizable portion, is that than which nothing greater can be thought?

Just a thought, kind of a tangent to the information-theoretic side of the mind-body problem, but a mind problem nonetheless.

In my discussions with certain transhumanists and atheists, I notice a common theme that tends to occur, this being the idea that you can transform a human into a robot by slowly replacing neurons one by one. Like Theseus' ship, the argument is that since one neuron cannot make a difference between personhood and non-personhood, replacing biological neurons with mechanical neurons (whether real or simulated) should produce no discernible effect on the subject's consciousness.

My take on it is that a computer and a biological brain operate on such fundamentally different principles that it would be foolish to claim that some aspect of personhood could be retained without accepting some form of dualism.

Take, for example, the idea of a lookup table. Everyone knows this is not a conscious being. However, if you build a lookup table that, for any combination of inputs and outputs, matches what a human would do, then it would be impossible to determine whether it's a conscious entity on the basis of its reactions to stimuli alone. Yet there is nothing physical about a lookup table that is even remotely analogous to a human brain's functioning. So it seems that if a person were transformed into a lookup table, he would have nothing physically in common with his former self, and it's pretty clear that he would be dead even though his outward behavior wouldn't change.

But concerning less obvious cases, like a simulation of a human, or a human with metal neurons that function using semiconductor physics rather than biochemistry, what would A-T metaphysicians say about the possibility of transforming a person into one of these kinds of beings?

It seems to me that one would not have to formalize all of human thought. This would, after all, be an argument by contradiction. One can imagine something having the following general form:

1) If thought could be mechanized, then this argument could not be understood by a human.

2) But it can.

3) So thought can't be mechanized.

Why would 1 hold? Well of course, that's where the genius would have to come in. But I'm just saying that, form-wise, this possibility seems like a defeater for your spitball. Or I may be missing something.

(I suppose, to my prior post, one could object that "understand" is not really a formal or even formalizable concept. That may be right. But perhaps "verified" or something... at which point, it does seem like a computer, too, could be taught to verify it, so maybe my argument just fails. Hmmm.)

I use Mook’s lookup table argument with all the computer science AI enthusiasts I meet; they always frown as if one isn’t playing fair and probably isn’t fit for polite society.

Neural nets in many commonly used forms - eg deep learning so called - are just approximations to the Mookian total lookup table. They exhibit limitations arising from the way they are “trained” and can be (hilariously) fooled. It may be there is a theorem lurking here that these systems are intrinsically so limited.

More profound designs are really dynamical systems with feedback, based on abstracting natural neural systems. These have remarkable behavior analogous to and imitative of the natural systems.

No matter how it’s done, a machine is just a material artifact, an artificial form. It doesn’t know anything as man does because that knowing requires an immaterial part. Any “Intelligence” the machine has is just the relict of the intelligence that built the machine. That person deliberately built something that would have certain material and formal properties. But not being capable of it, they could not add enough form to make a knower.

Gödel doesn’t say anything about knowing as such, only something about formal systems. These systems are limited mini-sublanguages cooked up so that certain truths, known in the intellect, including aspects of argumentation, can be written down in a way free from equivocation and that make natural argumentation representable by chains of expressions.

These systems, not unexpectedly, turn out to be useful for the purpose and things they were designed for. Somewhat unexpectedly, because there was an opposite prior enthusiasm, it turns out that grammatical or well formed expressions in the system exist for which there is no system rule-based “proof” or formal chain starting from the primary rules and ending in that statement. This is purely a mechanical kind of result. It turns out any system with a sufficient complexity has this funny property. But this says nothing about knowing, only something about yet another human tool construct.

"I use Mook’s lookup table argument with all the computer science AI enthusiasts I meet; they always frown as if one isn’t playing fair and probably isn’t fit for polite society. ... Neural nets in many commonly used forms - eg deep learning so called - are just approximations to the Mookian total lookup table."

I'm not sure what a "Mookian total lookup table" is. DuckDuckGo doesn't return any early matches.

Nevertheless, a lookup table is not a neural network, nor is it a Turing Machine, nor does it implement the Lambda Calculus. A lookup table is one category, a neural network is another, and the Turing Machine/Lambda Calculus are another. A neural network is a Turing Machine, but without an infinite tape. However, since humans use external storage, our tapes can be as large as the universe allows.

"... and can be (hilariously) fooled..."

So can humans.

"It doesn’t know anything as man does because that knowing requires an immaterial part."

That's the part you have to prove. If you understand the Lambda Calculus, it's clear that the only difference between man and today's machines is complexity of wiring.

> Mookian

Commenter Mook above. (Probably will be available in a week or so after the search crawlers finish reindexing the web.)

> lookup table is not a neural network

Any function (in the mathematical sense) can be viewed as a lookup table. Neural nets are machines that implement a function. A machine directly implementing a table lookup suitable for AI would have to have a pretty vast memory, so the neural net, with its memory consisting mostly of a small number of weights etc., is more efficient, but it is not really doing anything more.
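The function-versus-table point can be made concrete with a toy example (the names and the tiny domain are mine, purely for illustration):

```python
def parity(n):
    """The function computed on demand, by an algorithm."""
    return n % 2

# The same function over a finite domain, tabulated once in advance:
parity_table = {n: n % 2 for n in range(256)}

# On that domain the two are extensionally identical; they differ only
# in whether each value is calculated or merely fetched.
assert all(parity(n) == parity_table[n] for n in range(256))
```

Whether that difference in *how* the value is produced matters, or only *which* value is produced, is of course the very point being disputed in this thread.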

> So can humans.

Just tossed that in because of the tendency out there to over-hype these systems.

> That's the part you have to prove.

Summary sketch of argument:

One has to start by accepting that knowing is possible. We know what and also that things are. It’s what we experience, and denying knowing ends in the well known self-contradiction.

The knower and the thing as known have to be identical in the act of knowing. If they were different, the knower would not actually be knowing the thing, but something else or perhaps nothing. But this identity can’t be physical and material, because the knower would then be the physical thing and would in consequence no longer be able to know. So knowing must be immaterial.

It’s accessibly treated in Father Joseph Owens’s book “Cognition” and also his book “Elementary Christian Metaphysics”.

> understand the Lambda Calculus

It’s just a nice, even elegant, formalism for expressing reasonings about (recursive) functions. But it is just a tool of reasoning.

> the only difference between man and today's machines is complexity of wiring.

Complexity is not of the essence. One doesn’t even get more machine capability by simply making the machine more complex. Better designed and less complexity may provide more capability. It’s what species of complexity, or the form, that counts. But material forms can’t rise to knowing.

"Any function (in the mathematical sense) can be viewed as a lookup table."

Only insofar as it gives a result. But a lookup table does not calculate the value that is in the table. A Turing machine does. It is the calculation of the value that is the important part. So comparing a lookup table to a neural network is the fallacy of false equivalence.

"Just tossed that in because of the tendency out there to over-hype these systems"

There's a possible ambiguity as to which system is being over-hyped. Humans certainly fit that bill.

"So knowing must be immaterial."

Ok, let's assume I grant that. Where is the materiality in the lambda calculus? Remember, the lambda calculus is the "platonic form" of a Turing machine.

"But it [the λ calculus] is just a tool of reasoning."

It's more than that. It's the fundamental description of computation, i.e. what goes on in the human brain and in machines.

"One doesn’t even get more machine capability by simply making the machine more complex."

Oh, but we do get more capability out of more complexity. The computers of 2018 can do things that the computers of 1978 could not do (in practice -- not in theory). Today's recognizers based on neural nets weren't possible 40 years ago. A machine that could beat a Go grandmaster wasn't possible 40 years ago. Self-driving cars weren't possible 40 years ago.

Furthermore, what is always overlooked, is that the programming is the wiring. The more complex the wiring, the more complex the programming. And the human brain has more complex wiring than today's machines.

"But a lookup table does not calculate the value that is in the table. A Turing-machine does."

I think this distinction is really important. We must distinguish between the function as mathematical object, its presentation as a lambda-term or Turing machine or lookup table or whatever and the rule for reducing a lambda or running a Turing machine or looking up from the table.

"I think this distinction is really important."

Is it? Anonymous is not even making a coherent argument. He moves from the lambda calculus and Turing machines, which are two provably equivalent formal models of computation (they compute the same set of functions), making some pretty baffling -- to be polite -- statements along the way (e.g. "Remember, the lambda calculus is the "platonic form" of a Turing machine." What in the blazin' hell does that even mean? And there are more where that came from), to concrete machines and their "complexity", when whatever concrete implementation of a machine you come up with can only ever compute a finite set of total functions due to bounded memory, a material constraint that is not going away. Bounded memory machines are provably less powerful than a Turing machine and they can all be implemented by (finite) lookup tables. Unless the assertion really is either that the particular algorithm implementing a function or the concrete material implementation of said algorithm are actually important details, which of course is not only a complete absurdity, because it defeats the whole purpose of bringing the theory of computation into the picture, but is also completely unargued for.

"Is it?"

It certainly is.

"What in the blazin' hell does that even mean?"

A Turing machine is usually considered to be a physical device: a tape that moves, a print head that marks a square on the tape (or erases it), a "program counter" that selects different states in the machine's memory. The λ calculus, however, is simply symbol manipulation. It deals with ideas.

"And there are more where this came from"

All you have to do is ask.

"can only ever compute a finite set of total functions due to bounded memory, a material constraint that is not going away"

I said as much. But humans use external storage, so our "bounded memory" is as big as the universe, however big that is (and however small we can make our symbols).

"and they can all be implemented by (finite) lookup tables."

What puts the value in a particular index in a lookup table? That's the key question you have to deal with. A lookup table is not a Turing machine. Something has to put the value into each location in the lookup table, and something has to know where to look.

"Is it? Anonymous is not even making a coherent argument. "

Well, I didn't mean to imply that Anon's argument made sense in the end, only that he was making a distinction which is usually elided in this debate but which I would prefer to maintain.

Anon of course tends to elide other distinctions, and hand wave away certain facts. For example :

"A lookup table is not a Turing machine. Something has to put the value into each location in the lookup table, and something has to know where to look."

And something also has to program the Turing machine and give the entire abstraction a semantics. This is the point Wittgenstein is making when he begins Philosophical Investigations with the example of a lookup table. Of course the process can be represented by a lookup table, but how did we know how to look up from tables? Surely not another lookup table. The same point exactly holds for Turing machines. But the distinction between the lookup table as static data set and Turing machine as running mechanism is essentially the distinction between lookup table as representation vs. Turing machine as representation + machine semantics.

Anon imagines himself actually building a Turing machine and watching it run. Physics would then supply the last step in the chain of lookup tables. We do not (for some reason) have to ask how to obey the laws of physics.

@Anonymous:

Delete"It certainly is."

No it is not.

That was easy.

"A Turing machine is usually considered to be a physical device"

A Turing machine is a formal, mathematical object. It is not a physical object, not here, not on Mars, not anywhere in the universe. It is usually considered "a physical device" only by ignorant people. Go read a book, you do not know what you are talking about.

"All you have to do is ask."

No thanks, there is only so much that I can stand.

"What puts the value in a particular index in a lookup table?"

Quite obviously you have not read, much less understood, what I said.

A lookup table is an *implementation* of a function. There is a technique called memoization that trades time for space by caching the results of a function. In a language like Python, this is a single line (well, two, if you count the import statement). In a language like Haskell, which features immutable data structures by default and lazy evaluation, this is done automatically behind the scenes. This can even be turned into a *compilation* technique. In a language like C++ this is possible in principle (template metaprogramming is Turing-complete) but an exercise in masochism. In a homoiconic language like Scheme with hygienic macros this is a trivial exercise -- so now you have a function that *is* implemented as a lookup table by pre-computing the values in the compilation phase.
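The Python one-liner alluded to here is presumably functools.lru_cache, which memoizes a function, in effect growing a lookup table of its results as calls come in:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the one line (plus the import): cache every result
def fib(n):
    """Naive exponential recursion, made linear-time by the cache."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))  # instant; the uncached naive version would run for ages
```

After the first call, subsequent lookups of already-computed arguments are table fetches, not recomputations -- a small working illustration of the algorithm-versus-table distinction being argued over here.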

So to repeat myself, an egregious sin you forced upon me: "Unless the assertion really is either that the particular algorithm implementing a function or the concrete material implementation of said algorithm are actually important details, which of course is not only a complete absurdity, because it defeats the whole purpose of bringing the theory of computation into the picture, but is also completely unargued for."

@Anonymous:

I missed this:

"I said as much. But humans use external storage, so our "bounded memory" is as big as the universe, however big that is (and however small we can make our symbols)."

Since we are discussing the nature of the mind itself, the fact that we can have recourse to "external storage" is irrelevant. And whatever storage we have access to is still finite by fundamental physical constraints (unless some truly spectacular overturning of physics as we know it occurs). To name just one: since the speed of light puts a cap on the speed of signal transmission, no, we do not have access to the entire universe. And a second: we cannot make our symbols arbitrarily small, by which I mean that a quantum of information cannot occupy an arbitrarily small volume.

"Unless the assertion really is either that the particular algorithm implementing a function or the concrete material implementation of said algorithm are actually important details"

Might they be important details? A lot of philosophy of the mind arguments (Searle's Chinese Room and Kripke's plus/quus for example) seem to depend at least a little on the architecture of the machine in question.

Your point is well taken that if we do not abstract away implementation details then we will not be able to apply much of the theory of computation, but honestly I think we cross that bridge when we admit that our own minds are probably finite.

I don't think it is a total loss though. A lot of the methods (Godel numbering things, simulation of one machine by another, maybe even lambda reduction to a fixed point) might be of use on Minds as well as on Functions even if it turns out that those two classes are in no way isomorphic to one another.

reighley: "And something also has to program the Turing machine and give the entire abstraction a semantics."

Sure. Remember, the program is the wiring, and your wiring is a product of Nature.

grodrigues: "No, it is not. That was easy."

I think you want room 12A.

"It is usually considered "a physical device" only by ignorant people."

How does a non-physical thing transition from state to state? How does a non-physical thing put a symbol on a tape?

"so now you have a function that *is* implemented as a lookup table by pre-computing the values in the compilation phase."

"Pre-computing".

@Anonymous:

Delete"I think you want room 12A."

Sorry, but the joke is lost on me. I would imagine this is an americanism.

"How does a non-physical thing transition from state to state? How does a non-physical thing put a symbol on a tape?"

I said a Turing machine is a "formal, mathematical object". I did not speak anywhere of immateriality, non-physicality, etc. And when, in logic or computer science, they use terms like "tape", "head", etc., these terms are either used in an informal, suggestive way, or they have precise definitions. When Beilinson speaks about "perverse sheaves", only an idiot would wonder about the moral virtue of a sheaf.

Honestly, just take my suggestion, go read a book.

""Pre-computing"."

Well, I am not going to repeat, for what would be the third time, what I said. If you do not want, or cannot, read, there is not much I can do.

Dude, give up... you are projecting your brain operations on to the computer. All your arguments depend on this one sleight.

@grodrigues

Delete"Sorry, but the joke is lost on me. I would imagine this is an americanism."

I think the reference is to Monty Python's "The Argument Clinic". British.

https://www.youtube.com/watch?v=XNkjDuSVXiE

grodrigues: "Well, I am not going to repeat, for what would be the third time, what I said. If you do not want, or cannot, read, there is not much I can do."

You said, "Bounded memory machines are provably less powerful than a Turing machine (true) and they can all be implemented by (finite) lookup tables. (false)"

A lookup table cannot evaluate whether or not the parentheses in an arbitrary expression are balanced.
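To make the point concrete, here is a minimal Python sketch (the function name is invented for illustration): checking balance needs a counter that can grow with nesting depth, which a fixed, finite table of precomputed answers cannot supply for inputs of arbitrary length.

```python
def balanced(expr):
    """Return True if the parentheses in expr are balanced.

    The depth counter can grow without bound as nesting deepens,
    which is why a fixed, finite lookup table over arbitrarily
    long inputs cannot do this job.
    """
    depth = 0
    for ch in expr:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ")" with no matching "("
                return False
    return depth == 0
```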

"I said a Turing machine is a 'formal, mathematical object'. I did not speak anywhere of immateriality, non-physicality, etc."

What is the nature of a mathematical object?

@Anonymous:

"A lookup table cannot evaluate whether or not the parentheses in an arbitrary expression are balanced."

This is the difference between a full-blown parser and a regular expression parser. And the main difference between the two is that the second has bounded memory and cannot perform arbitrary recursion -- oh wait, that was *precisely* what I said.

And even leaving that aside, your parenthetical remark that "they can all be implemented by (finite) lookup tables" is false is itself mistaken. Either you are objecting to the "finite" in parentheses or to the universal quantifier opening the sentence. The "finite" is there because I spoke of bounded memory machines, so what you must be objecting to is the universal quantifier. A Turing machine, any one Turing machine, computes a function. And the abstract, set-theoretic definition of a function *just is* a lookup table -- minus the "finite" in parentheses.
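The set-theoretic point can be illustrated with a small Python sketch (the particular function and names are invented for the example): over a finite domain, a function is extensionally nothing more than its table of input/output pairs, built here by pre-computing.

```python
def f(x):
    """An arbitrary function on a bounded domain (0..255)."""
    return (x * x + 7) % 256

# Pre-compute the graph of f: the set of (input, output) pairs.
# Over this finite domain the dict *is* the function, extensionally.
TABLE = {x: f(x) for x in range(256)}

def f_via_table(x):
    """The same function, now implemented as a finite lookup table."""
    return TABLE[x]
```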

Go read a book, I am out of patience with your ignorance.

"What is the nature of a mathematical object?"

Good try at changing the subject.

@reighley:

"I think the reference is to Monty Python's 'The Argument Clinic'. British."

Ah, that makes sense. But now I would say the irony is lost on Anonymous too.

@reighley:

Forgot this:

"A lot of philosophy of the mind arguments (Searle's Chinese Room and Kripke's plus/quus for example) seem to depend at least a little on the architecture of the machine in question."

I do not see how this could be. The Chinese man is a stand-in for a universal Turing machine; it does not depend on any architectural details. Kripke's example arguably depends on the difference between finite and infinite memory, but that is a material constraint, not an architectural one. And as James Ross points out, even the finite qualifier is not really important for the indeterminacy of the physical.

grodrigues: "And the main difference between the two is that the second has bounded memory and cannot perform arbitrary recursion"

No, that's not the main difference at all. Both (can) have bounded memory. After all, in many systems, the stack is put at the top of memory and the heap at the bottom. The stack grows down, the heap grows up. Overflow results if they collide. The difference is that a lookup table isn't a stack. It reads, but it cannot write.

The whole point of this silly argument is the claim that

"Everyone knows this [a lookup table] is not a conscious being."

Comparing a human brain to a lookup table is clearly ludicrous. A human brain can tell if the parentheses balance in the expression "(((((((((((((((((1+3))))))))))))))))". A lookup table cannot.

"Good try at changing the subject."

Good try at evading the issue. You claimed

"I said a Turing machine is a 'formal, mathematical object'. I did not speak anywhere of immateriality, non-physicality, etc."

If mathematical objects are immaterial, then you certainly did. If mathematical objects aren't, then you didn't. So I'm simply asking you to clarify your position.

@Anonymous:

"No, that's not the main difference at all. Both (can) have bounded memory."

Yes, that is the main difference -- you should read more carefully -- and no, Turing machines do not have bounded memory, period. Every book on mathematical logic that I know of (and I know a few) defines them so. The Wikipedia entry on Turing machines begins its second paragraph: "The machine operates on an infinite memory tape divided into discrete cells." Off the top of my head, I can remember that Friedl's "Mastering Regular Expressions" has a proof that a regular expression parser cannot recognize arbitrarily nested pairs of balanced parentheses, but it *can* recognize pairs of balanced parentheses up to a given constant depth n, where n depends on the memory available, or on the number of states of the state machine if you want to frame it that way. This is a reason, or one reason, why in practice bounded memory is not really that much of a constraint: code in typical languages tends to be flat and shallow, and even in non-typical languages like Scheme or a concatenative language, where the nesting can go very deep, the memory available is more than enough to cope with it before the stack blows up.
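Friedl's observation about fixed-depth matching can be sketched in Python (the helper name is invented; this is an illustration, not Friedl's code): a regular expression can be generated that recognizes balanced parentheses up to a chosen depth n, but that same pattern fails one level deeper.

```python
import re

def paren_regex(n):
    """Build a regex matching text whose parens nest at most n deep."""
    inner = r"[^()]*"  # depth 0: no parentheses at all
    for _ in range(n):
        # wrap one more permitted level of nesting around the pattern
        inner = r"[^()]*(?:\(%s\)[^()]*)*" % inner
    return re.compile(r"^%s$" % inner)
```

A depth-2 pattern accepts "((1+3))", but the depth-1 pattern rejects it, which is exactly the bounded-memory limitation under discussion.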

Of course stack overflow happens, because no *concrete implementation* of a Turing machine can have unbounded memory (so strictly and narrowly speaking, it is not an implementation of a Turing machine). That you cannot even maintain the difference in your head between a purely mathematical object, which is after all an *abstraction*, and its concrete material implementation just shows the depths of your ignorance. Go read a book.

"The whole point of this silly argument is the claim that Everyone [a lookup table] knows this is not a conscious being."

I do not know what "silly argument" you are referring to; but then again, I do not think any naturalist worth listening to ever made such an obviously dumbass statement, so it is not like it needs refutation.

"Comparing a human brain to a lookup table is clearly ludicrous."

It is a comparison that follows logically from *your* claims, not mine. If you find it ludicrous, that probably says something about your position.

"If mathematical objects are immaterial, then you certainly did."

I never said, or even so much as suggested, that mathematical objects are immaterial. It is also completely irrelevant to what I actually said, because I have restricted myself to clarifying the logical entailments of *your* position, insofar as your position is even coherent, not in defending mine. So once again, nice try at deflection.

@grodrigues

"The Chinese man is a stand-in for a universal Turing machine; it does not depend on any architectural details."

I take the Chinese Room argument to be, essentially, that a man simulating a man who knows Chinese does not necessarily know Chinese. You could imagine a system in which the big stack of instructions in the Chinese Room were exercises in Chinese (caching the results of the computation in the finite store, you might say), and then afterwards the man in the room does understand Chinese and proceeds. The two systems compute the same function, Chinese, but of one we predicate "understanding" and of the other we don't. Since the Chinese Room is not in this sense a black box, I think it fair to say that its architectural details matter.

"Kripke's example arguably depends on the difference between finite and infinite memory, but that is a material constraint, not an architectural one."

Kripke's argument runs in the opposite direction. He is asking me to ignore the fact that I learned about addition in the first instance by induction. I only required a finite number of cases, and after that I add large numbers using paper and pencil. I am capable of only finitely many quus-like functions because of this, and in fact my working memory is very small. Quus and plus are actually different, and this is knowable and everybody knows it, but Kripke's point is that if you regard plus as simply an abstract function you cannot know it. Therefore we take our interior operations into account when we speak of "to know" (or so say I, not necessarily Kripke, who I think goes pretty badly awry in Wittgenstein on Rules and Private Language anyway).
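Kripke's quus is easy to state explicitly (a Python sketch of his definition, which uses the thresholds 57 and 5, and his own example 68 + 57): plus and quus agree on every sum in a finite history of computed cases, so that history alone cannot settle which function was meant.

```python
def plus(x, y):
    return x + y

def quus(x, y):
    """Kripke's deviant function: agrees with plus when both
    arguments are below 57, and returns 5 otherwise."""
    return x + y if x < 57 and y < 57 else 5

# Every case in a finite history of small computed sums agrees...
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))
# ...yet the two functions come apart outside that history.
assert plus(68, 57) == 125 and quus(68, 57) == 5
```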

This is really what I mean when I say "architectural" difference. Of two machines which compute the same function, different predicates may be applied. In such cases we would have to rely on either the internal construction of the machine, or its context in a larger system.

In all honesty I don't see the point of pointing to an algorithm of any kind to help you understand the Mind.

An algorithm is just a series of physical events to which WE attribute meaning.

Just think about a crazy coil going down 23 steps on a stair. Every time it hits one of the steps, it lights up, Billie Jean style. Now you would think nothing of it, if it wasn't for the letters of the alphabet showing up on each step in the correct alphabetical order!

You come to the conclusion that the stairs and the coil can recite the alphabet correctly... orrrr... you have set the system up to imitate that behaviour.

It is easier to see there is nothing much to computers when you think of them as MECHANICAL instead of ELECTRICAL/QUANTUM. The electrical is mysterious, hard to spot, and we only know if it is working through the screen that already uses all the symbols we want. The mechanical ones are slow and not as amazing, but they can do the exact same thing as long as you project the same symbols and functions onto them.

Anyone who has done a paper computer, which is THE EXACT same thing as using a lookup table, knows how a computer is meant to imitate what we humans do, and so is any Turing machine.

I meant to say a SLINKY...

Freaking coils, springs, spirals XD.

My browser isn't able to reply to posts properly, so I will have to start a new comment. But I have to say that I agree with grodrigues that bringing up computational complexity defeats the purpose of treating the mind as a computer.

My dissatisfaction with computationalism is the lack of a satisfactory mechanism by which arranging matter in a way that _approximates_ the functioning of an algorithm _creates_ some sort of immaterial soul that can feel things. If the mind is an algorithm that can be instantiated by something other than the brain, say a neural network, then how can we be sure that the algorithm is truly being implemented and not merely approximated to some degree?

If the function of the brain boils down to the wiring, then how does that differ from simply defining an algorithm by its results? What measure is there to judge by? Can we look at a set of neurons and say "yup, this instantiates the algorithm just fine," and then look at a different set of neurons which give exactly the same results, and say "nope, that one's not doing _real_ computation"? Is there some "information flow" by which we have to interpret what really amounts to 0's and 1's in NAND memory, yet attach meanings to these 0's and 1's as if they relate to an abstract algorithm? Yet it seems to me that consciousness is not something that is derived from extrinsic observers observing us. Consciousness is one of the only things in the world that are irrevocably intrinsic.

My point is that if there is some sort of "information flow" that is _truly_ going on regardless of external observation, that implies that information (or whatever it is that makes an algorithm _properly_ implemented, as opposed to merely having its inputs and outputs matched) is some sort of intrinsic substance distinct from an emergent epiphenomenon. Saying that consciousness is an illusion doesn't do anything to solve the problem of consciousness, though. We ought to explain why we have these subjective experiences all the time. The Dennett solution in the end seems to result in a Nagelian conundrum where the boundaries we've drawn for ourselves preclude us from ever reaching a satisfactory explanation of the world.

"But I have to say that I agree with grodrigues that bringing up computational complexity defeats the purpose of treating the mind as a computer."

How so? If the argument is that the human brain is a Turing machine, then the typical response is that humans can do things that computers can't do. So then we have to ask is the difference between minds and machines one of kind, or degree? By looking at physical structures, we can see that it's one of degree. Complexity of wiring is correlated with complexity of ability. The computers I worked with 40 years ago did not have the complexity of today's machines, and today's machines can do things that simply weren't possible back then.

"My dissatisfaction with computationalism is the lack of a satisfactory mechanism by which arranging matter in a way that _approximates_ the functioning of an algorithm _creates_ some sort of immaterial soul that can feel things."

You put a label ("soul") on something you don't understand and can't explain and are then dissatisfied with your system. For example, does this "immaterial soul" exist independently of the structure of your brain? Why, or why not? Does this "immaterial soul" depend on the complexity of the wiring of your brain? Why, or why not?

"how can we be sure that the algorithm is truly being implemented and not merely approximated to some degree?"

Maybe you can't. So what? Nature isn't required to satisfy your intuition of how she ought to work.

"If the function of the brain boils down to the wiring, then how does that differ from simply defining an algorithm by its results?"

It doesn't. I suspect, however, that you think that everyone has the same algorithm. But since our brain wiring is as individual as our fingerprints, our algorithms are as individual as our fingerprints.

"Can we look at a set of neurons and say 'yup, this instantiates the algorithm just fine'?"

No. You can't tell from a computer circuit what that circuit does. All you can do is look at what it does and then see if you can figure out how the circuit does it. But you simply can't tell if a gate is a NAND gate or an AND gate or some other gate, except by looking at what the entire system does (and if you tell me that you can, I'll show you what unwarranted assumptions you're making).

"Consciousness is one of the only things in the world that are irrevocably intrinsic."

Suppose that's true? So what? It's still tied to brain wiring. Sever certain wires and you're no longer conscious.

"My point is that if there is some sort of 'information flow' that is _truly_ going on regardless of external observation,"

"External" is superfluous. Observation can be internal as well as external.

"Saying that consciousness is an illusion doesn't do anything to satisfy the problem of consciousness though."

Just to note, I've never said that consciousness is an illusion.

"preclude us from ever reaching a satisfactory explanation of the world."

That assumes that a "satisfactory" explanation of the world exists. First, Nature isn't required to satisfy you. Second, "satisfaction" is subjective. Your neural net may be so configured that nothing will satisfy you.

"No. You can't tell from a computer circuit what that circuit does. All you can do is look at what it does and then see if you can figure out how the circuit does it."

I think your argument would be much clearer if you worked to resolve the contradiction in this pair of sentences. We cannot, apparently, tell from a computer circuit what that circuit does; but we can look at what it does. So the operation of "telling from it what it does" and "looking at what it does" must be different.

"I think your argument would be much clearer if you worked to resolve the contradiction in this pair of sentences."

You're absolutely right. Let me try again. Consider a binary "logic" gate. It operates on two distinct objects (it doesn't matter what they are), and produces one of the two objects as output. If you look at the behavior of a single gate, it is impossible to tell if it is an AND gate or a NAND gate; an OR gate or a NOR gate, etc. Furthermore, if you look at two gates that have the same behavior, it's still impossible to tell if they're the same gate (the gate could be used as a NAND gate in one place and an AND gate in another). We might say that they are, due to economies of scale of mass production, but you can't determine that by looking at them.

So I'll leave it as an exercise to figure out how to tell what a particular circuit does. (And your answer, BTW, should solve the "problem" of qualia).
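The exercise can be set up in code (an illustrative Python sketch; the device is imaginary): one and the same physical behavior reads as an AND gate under an active-high signaling convention and as an OR gate under the active-low convention, so the logic function is not fixed by the device alone.

```python
# An imaginary "physical" device: two voltage inputs, one voltage output.
# It outputs HIGH exactly when both inputs are HIGH.
def device(v1, v2):
    return "HIGH" if v1 == "HIGH" and v2 == "HIGH" else "LOW"

def read_as_logic(active_level):
    """Interpret the device as a boolean function, given which
    voltage level counts as logical True."""
    other = "LOW" if active_level == "HIGH" else "HIGH"
    to_volts = lambda bit: active_level if bit else other
    return lambda a, b: device(to_volts(a), to_volts(b)) == active_level

and_gate = read_as_logic("HIGH")  # active-high: the device reads as AND
or_gate = read_as_logic("LOW")    # active-low: the same device reads as OR
```

Nothing about the device changed between the two readings; only the labeling convention did.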

@Anonymous

I feel like you are trolling me with nominalism.

reighley: "I feel like you are trolling me with nominalism."

I'm not. In fact, I think that if you do the exercise I suggested, it will lead away from nominalism. Can you tell by looking at an object with two inputs and one output what logic function it implements? You can't. So, then, how do you get meaning out of a circuit or a neural net? Is meaning emergent or fundamental?

@Anonymous:

"If the argument is that the human brain is a Turing machine, then the typical response is that humans can do things that computers can't do. So then we have to ask is the difference between minds and machines one of kind, or degree?"

First point: the brain *cannot* be a Turing machine, because it has bounded memory. Period. But let us put that detail aside. Both a brain (viewed as a Turing machine) and any garden-variety computer you can buy at a store are *universal Turing machines*, that is, they themselves can simulate any Turing machine. Mathematically, there is no difference between any two universal Turing machines. So you bring up "complexity"; ok, define "complexity" for us, and I mean a precise definition, not your usual hand-waving baloney -- and then prove that human brains are indeed "more complex" than any existing computer. You will not be able to do it, but hey, prove me wrong, as it will be a humbling exercise for myself, which as a Catholic I can only welcome.

grodrigues: "First point, is that the brain *cannot* be a Turing machine because it has bounded memory. Period."

Sure. But humans use external storage, so our memory is (at least) as large as the universe lets it be. Some people might want to argue that the human mind isn't constrained by the physical constraints of the universe, but then they need to demonstrate that.

"So you bring up 'complexity'; ok, define 'complexity' for us."

See Circuit Complexity. Given the computation performed by neurons, we can estimate their complexity in terms of boolean circuits. One estimate (Superficial Analogies and Differences between the Human Brain and the Computer) says the "Human brain’s memory power is around 100 terra flops. (i,e,100 trillion calculations/sec). 100 trillion synapses hold the equivalent memory power around 100 million mega bytes."

So that enables a comparison of raw power. However, it still misses the point that organization is important, too. Differences between programs are differences in wiring. So not only are memory capacity and speed important, so too is how the wires are arranged.

"which as a Catholic I can only welcome."

The truth shall set you free. The one who actually raised Himself from the dead said that, IIRC.

@Anonymous:

"See Circuit Complexity. Given the computation performed by neurons, we can estimate their complexity in terms of boolean circuits. One estimate (Superficial Analogies and Differences between the Human Brain and the Computer) says the "Human brain’s memory power is around 100 terra flops. (i,e,100 trillion calculations/sec). 100 trillion synapses hold the equivalent memory power around 100 million mega bytes.""

As expected, more hand-wavy baloney.

"How so? If the argument is that the human brain is a Turing machine, then the typical response is that humans can do things that computers can't do. So then we have to ask is the difference between minds and machines one of kind, or degree? By looking at physical structures, we can see that it's one of degree. Complexity of wiring is correlated with complexity of ability. The computers I worked with 40 years ago did not have the complexity of today's machines, and today's machines can do things that simply weren't possible back then."

We have no strong reason to think that any sufficient complexity leads to consciousness, only that some level of complexity is required for consciousness. It's trivial to claim that conscious systems are complex systems, because the only evidence of a conscious system we have is highly complex. Doing so does not tell us how to build our own.

"You put a label ("soul") on something you don't understand and can't explain and are then dissatisfied with your system. For example, does this "immaterial soul" exist independently of the structure of your brain? Why, or why not? Does this "immaterial soul" depend on the complexity of the wiring of your brain? Why, or why not?"

Don't assume I am proposing an immaterial soul. I am saying that an algorithmic interpretation of the brain leads to an immaterial component of the algorithm that makes up the brain. If each algorithm is unique and you cannot transmit the same algorithm across different neural substrates, then the utility of an algorithmic interpretation of the brain disappears.

"No. You can't tell from a computer circuit what that circuit does. All you can do is look at what it does and then see if you can figure out how the circuit does it. But you simply can't tell if a gate is a NAND gate or an AND gate or some other gate, except by looking at what the entire system does (and if you tell me that you can, I'll show you what unwarranted assumptions you're making)."

A NAND gate and an AND gate can take many forms. Nothing is intrinsically a logic gate until you specify what voltage levels count as 1's, what voltage levels count as 0's, what timescales you are looking at, and so on, which is to say that logic gates are only extrinsically assigned. But that just goes to show that no 'algorithm,' in the sense we usually talk about algorithms, can be implemented in a physical system, only approximated, which to my mind casts some serious doubt on the idea that consciousness originates in the algorithm itself and not in something in the physical properties.

"Suppose that's true? So what? It's still tied to brain wiring. Sever certain wires and you're no longer conscious."

That's not a given. If consciousness originates inside neurons, then consciousness would end when our neurons die, not when the wiring is severed. I don't claim to know. But I'm not claiming it's settled science either, not when there is no empirically testable scientific definition of consciousness.

"We have no strong reason to think that any sufficient complexity leads to consciousness"

There you go with subjective adjectives, again. I think the structure of neural networks (which we know compute) and their complexity in the human brain is quite a strong argument. This argument is bolstered by increasing capability tracking increasing complexity in our computers.

"Doing so does not tell us how to build our own."

We can try to mimic it via continued algorithmic development. I'm not sure how successful that will be. We could also evolve it, just like Nature did.

"I am saying that an algorithmic interpretation of the brain leads to an immaterial component of the algorithm that makes up the brain."

Just FYI, I agree with you, although we may (or may not) agree on what the immaterial component entails.

"If each algorithm is unique and you cannot transmit the same algorithm across different neural substrates,..."

Why would you think algorithms can't be transmitted? Surely I could teach you Euclid's algorithm for finding the greatest common divisor of two numbers, or an algorithm for sorting. The problem isn't the transmission of an algorithm. WRT the human brain, the problem is knowing what the algorithm is.
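Euclid's algorithm, mentioned here as an example of a transmissible algorithm, is short enough to write out in full (a standard Python rendering):

```python
def euclid_gcd(a, b):
    """Greatest common divisor by Euclid's algorithm:
    repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    while b:
        a, b = b, a % b
    return a
```

The same few lines communicate the algorithm to any reader or machine, which is the point about transmissibility.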

"But that just goes to show that no 'algorithm,' in the sense we usually talk about algorithms, can be implemented in a physical, only approximated..."

How else is an algorithm implemented, if not in a physical way? We can think about the λ calculus using unphysical symbols and unphysical connections taking no time at all to generate, but I don't know how to communicate that except by physical things.

"If consciousness originates inside neurons then consciousness would end when our neurons die, not when the wiring is severed."

Do a thought experiment. Sever all of the neurons. What, then, is the difference between electrons not being able to flow across severed connections and not being able to flow at all?

"Do a thought experiment. Sever all of the neurons. What, then, is the difference between electrons not being able to flow across severed connections and not being able to flow at all?"

I don't think that certain theories of quantum neuroscience have much backing at the moment. But I can dig up more than just papers from Hameroff and Penrose on the subject (see "A New Spin on Neural Processing: Quantum Cognition," by Weingarten et al.). As a non-neuroscientist, I don't think my inclinations have any authority. But at the same time I don't want to restrict myself to creating a problem by attaching stringent definitions to things. My own journey through philosophy of mind has had a couple of those moments:

Considering materialism -> Creates the problem that subjective experience can't exist, requiring us to discount our own experiences

Considering epiphenomenalism -> Creates the problem of abstract concepts such as arithmetic and relativity not having any causal power, despite LIGO making a successful observation of relativity, and mathematical concepts clearly having some bearing on reality

Considering (naive) A-T hylomorphism -> Creates the problem that sapient AI seems impossible, despite the fact that there seems to be no qualitative difference between playing around with biomolecules and playing around with semiconductor circuits at a similarly small scale

I think that A-T can admit the theory that the wiring between neurons has irreducible causal powers. But I am attracted to theories of consciousness that place consciousness inside neurons because they fit a lot of our intuitions with regard to our experiences as subjective observers. Namely:

-It fits with the idea we don't lose our identity if the wiring between our neurons shifts radically, such as from youth to adulthood, or perhaps from a human to a posthuman consciousness.

-It fits with the intuition that _not all_ lookup tables approximating human outward behaviors are conscious because not all lookup tables would possess the special features inside neurons

-It fits with the idea that we can temporarily suspend our brain function, perhaps on very long timescales, without ceasing to exist or losing our identity.

That last scenario in particular makes me doubt that instantiating any algorithm can confer personhood (although that is tangential to the subject of consciousness), since an algorithm doesn't seem like it could be instantiated by an arrangement of matter that is not currently interacting, any more than it could be instantiated by the storage of said algorithm inside the memory of a machine which could conceivably run it, but isn't currently running it because the machine is offline.

My intuition about the above machine is that if an algorithm could confer personhood (I am not sure if this is your viewpoint), then the CPU alone cannot confer personhood, because it does not contain an algorithm, and the memory alone could not confer personhood, because it is not capable of executing the algorithm (a "P-algorithm"). But merely attaching the memory to the CPU does not confer personhood if there is, say, a single bit in memory that is always zero, and by a safety feature of the CPU it blocks off access to the P-algorithm unless it is set to 1.

It doesn't seem to me like combining the CPU with the memory would create a person, because there the P-algorithm is not being executed. But at the same time, if you were to flip that bit to 1 and attach the CPU to a solar power unit orbiting the sun, then if the Hypothesis is true (that an algorithm confers personhood) the CPU + memory would be performing computation. Now suppose that the clock speed slows down as the star providing solar power to the computer runs out of hydrogen to perform fusion and transitions to a white dwarf. (to be continued)

(continued)

Our computer is still performing computation, although the difference in time between each instance of computation is increasing exponentially. Eventually there will be a point where the next clock cycle will never occur. But the computer will still gain enough energy to partially run through a clock cycle, even if the algorithm is now frozen in time forever.

Assuming the computer was a person, is it still a person now that it is always getting closer to the next clock cycle, but will never succeed in running another clock cycle in the future without outside intervention?

Now assume that by indeterminate chance, a person shines a flashlight onto the solar panels of this computer 1000 trillion years later. Is the person that has just been recreated still the same person? Is the conscious experience continuous? If the conscious experience isn't continuous, then is it true that there was a threshold after which the clock cycles became too far apart for conscious experience to be continuous?

I'm afraid I'm not 100% sure what your views are with regards to computation and personhood, so I'd love to hear your opinion on whether or not a synchronous CPU can be a person. Other people are also welcome to comment.