tag:blogger.com,1999:blog-8954608646904080796.post8856275870024473481..comments2019-03-21T03:29:08.867-07:00Comments on Edward Feser: Gödel and the mechanization of thoughtEdward Feserhttp://www.blogger.com/profile/13643921537838616224noreply@blogger.comBlogger118125tag:blogger.com,1999:blog-8954608646904080796.post-85648673299927642392018-07-13T17:14:17.910-07:002018-07-13T17:14:17.910-07:00@grodrigues
"The chinese man is a stand-in fo...@grodrigues<br />"The chinese man is a stand-in for a universal Turing machine; it does not depend on any architectural details."<br /><br />I take the Chinese Room argument to be, essentially that a man simulating a man who knows Chinese does not necessarily know Chinese. You could imagine a system in which the big stack of instructions in the Chinese room were exercises in Chinese (caching the results of the computation in the finite store, you might say) and then after that man in the room does understand Chinese and proceeds. The two systems compute the same function, Chinese, but of one we predicate "understanding" and of the other we don't. Since the Chinese Room is not in this sense a black box I think it fair to say that its architectural details matter.<br /><br />"Kripke's example arguably depends on the difference between finite and infinite memory, but that is a material constraint, not an architectural one."<br /><br />Kripke's argument runs in the opposite direction. He is asking me to ignore the fact that I learned about addition in the first instance by induction. I only required a finite number of cases, and after that I add large number using paper and pencil. I am capable of only finitely many quus like functions because of this, and in fact my working memory is very small. Quus and plus are actually different, and this is knowable and everybody knows it, but Kripke's point is that if you regard plus as simply an abstract function you cannot know it. Therefore we take our interior operations into account when we speak of "to know" (or so say I, not necessarily Kripke, who I think goes pretty badly awry in Rules And Private Language anyway).<br /><br />This is really what I mean when I say "architectural" difference. Of two machines which compute the same function, different predicates may be applied. 
In such cases we would have to rely on either the internal construction of the machine, or its context in a larger system.reighleynoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-88959171965715886702018-07-13T16:01:52.625-07:002018-07-13T16:01:52.625-07:00@Anonymous:
"No, that's not the main dif...@Anonymous:<br /><br />"No, that's not the main difference at all. Both (can) have bounded memory."<br /><br />Yes, that is the main difference -- you should read more carefully -- and no, Turing machines do not have bounded memory, period. Every book on mathematical logic that I know of (and I know a few) defines them so. The entry on Turing machine on the wikipedia begins the second paragraph as "The machine operates on an infinite[4] memory tape divided into discrete cells.[5]" Off the top of my head, I can remember that Friedl's "Mastering Regular Expression" has a proof that a regular expression parser cannot recognize arbitrarily nested pairs of balanced parenthesis, but it *can* recognize pairs of balanced parenthesis up to a given constant depth n, where n depends on the memory available, or the number of states of the state machine if you want to frame it that way -- which is the reason, or a reason, why in practice bounded memory is not really that much of a constraint, since code in typical languages tends to be flat and shallow, and even in non-typical languages like Scheme or a concatenative language, where the nesting can go very deep, the memory available is more than enough to cope with it before the stack blows up.<br /><br />Of course stack overflow happens, because no *concrete implementation* of a Turing machine can have unbounded memory (so strictly and narrowly speaking, it is not an implementation of a Turing machine). That you cannot even maintain the difference in your head between a purely mathematical object, which is after all an *abstraction*, and its concrete material implementation just shows the depths of your ignorance. 
Go read a book.<br /><br />"The whole point of this silly argument is the claim that Everyone [a lookup table] knows this is not a conscious being."<br /><br />I do not know what "silly argument" you are referring to; but then again, I do not think any naturalist worth listening to ever made such an obviously dumbass statement, so it is not like it needs refutation.<br /><br />"Comparing a human brain to a lookup table is clearly ludicrous."<br /><br />It is a comparison that follows logically from *your* claims, not mine. If you find it ludicrous, that probably says something about your position.<br /><br />"If mathematical objects are immaterial, then you certainly did."<br /><br />I never said, or even so much as suggested, that mathematical objects are immaterial. It is also completely irrelevant to what I actually said, because I have restricted myself to clarifying the logical entailments of *your* position, insofar as your position is even coherent, not to defending mine. So once again, nice try at deflection.
"See Circuit Complexity. Given t...@Anonymous:<br /><br />"See Circuit Complexity. Given the computation performed by neurons, we can estimate their complexity in terms of boolean circuits. One estimate (Superficial Analogies and Differences between the Human Brain and the Computer) says the "Human brain’s memory power is around 100 terra flops. (i,e,100 trillion calculations/sec). 100 trillion synapses hold the equivalent memory power around 100 million mega bytes.""<br /><br />As expected, more hand-wavy baloney.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-56202512066980957672018-07-13T13:10:03.429-07:002018-07-13T13:10:03.429-07:00(continued)
Our computer is still performing com...(continued)<br /><br /><br />Our computer is still performing computation, although the difference in time between each instance of computation is increasing exponentially. Eventually there will be a point where the next clock cycle will never occur. But the computer will still gain enough energy to partially run through a clock cycle, even if the algorithm is now frozen in time forever.<br />Assuming the computer was a person, is it still a person now that it is always getting closer to the next clock cycle, but will never succeed in running another clock cycle in the future without outside intervention?<br /><br />Now assume that by indeterminate chance, a person shines a flashlight into the solar panels of this computer 1000 trillion years later. Is the person that has just been recreated still the same person? Is the conscious experience continuous? If the conscious experience isn't continuous, then is it true that there was a threshold after which the clock cycles became too far apart for conscious experience to be continuous?<br /><br />I'm afraid I'm not 100% sure what your views are with regards to computation and personhood, so I'd love to hear your opinion on whether or not a synchronous CPU can be a person. Other people are also welcome to comment.Mooknoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-82873682457901989772018-07-13T13:09:43.673-07:002018-07-13T13:09:43.673-07:00"Do a thought experiment. Sever all of the ne..."Do a thought experiment. Sever all of the neurons. What, then, is the difference between electrons not being able to flow across severed connections and not being able to flow at all?"<br /><br />I don't think that certain theories of quantum neuroscience have much backing at the moment. But I can dig up more than just papers from Hameroff and Penrose on the subject (See: A New Spin on Neural Processing: Quantum Cognition, by Weingarten et al.) 
As a non-neuroscientist I don't think my inclinations have any authority. But at the same time I don't want to restrict myself to creating a problem by attaching stringent definitions to things. My own journey through philosophy of mind has had a couple of those moments:<br /><br />Considering materialism -> Creates the problem that subjective experience can't exist, requiring us to discount our own experiences<br />Considering epiphenomenalism -> Creates the problem of abstract concepts such as arithmetic and relativity not having any causal power, despite LIGO making a successful observation of relativity, and mathematical concepts clearly having some bearing on reality<br />Considering (naive) A-T hylomorphism -> Creates the problem that sapient A.I. seems impossible, despite the fact that there seems to be no qualitative difference between playing around with biomolecules, and playing around with semiconductor circuits at a comparably small scale<br /><br />I think that A-T can admit the theory that the wiring between neurons has irreducible causal powers. But I am attracted to theories of consciousness that place consciousness inside neurons because they fit a lot of our intuitions with regards to our experiences as subjective observers. 
Namely:<br />-It fits with the idea that we don't lose our identity if the wiring between our neurons shifts radically, such as from youth to adulthood, or perhaps from a human to a posthuman consciousness.<br />-It fits with the intuition that _not all_ lookup tables approximating human outward behaviors are conscious, because not all lookup tables would possess the special features inside neurons<br />-It fits with the idea that we can temporarily suspend our brain function, perhaps on very long timescales, without ceasing to exist or losing our identity.<br /><br />That last scenario in particular makes me doubt that instantiating any algorithm can confer personhood (though this is tangential to the subject of consciousness), since an algorithm doesn't seem like it could be instantiated by an arrangement of matter that is not currently interacting, any more than it could be instantiated by the storage of said algorithm inside the memory of a machine which could conceivably run it, but isn't currently running it because the machine is offline.<br /><br />My intuition about the above machine is that if an algorithm could confer personhood (I am not sure if this is your viewpoint) then the CPU cannot confer personhood, because it alone does not contain an algorithm, and the memory alone could not confer personhood, because it is not capable of executing the algorithm (a "P-algorithm"). But merely attaching the memory to the CPU does not confer personhood if there is, say, a single bit in memory that is always zero, and by a safety feature of the CPU it blocks off access to the P-algorithm unless it is set to 1.<br />It doesn't seem to me like combining the CPU with the memory would create a person, because the P-algorithm is not being executed. But at the same time, if you were to flip that bit to 1 and attach the CPU to a solar power unit orbiting the sun, then if the Hypothesis is true (that an algorithm confers personhood) the CPU + memory would be performing computation. 
Now suppose that the clock speed slows down as the star providing solar power to the computer runs out of hydrogen to perform fusion and transitions to a white dwarf. (to be continued)<br />Mooknoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-19106673323780619332018-07-13T13:06:05.938-07:002018-07-13T13:06:05.938-07:00grodrigues: First point, is that the brain *cannot...<b>grodrigues</b>: <i>First point, is that the brain *cannot* be a Turing machine because it has bounded memory. Period.</i><br />Sure. But humans use external storage, so our memory is (at least) as large as the universe lets it be. Some people might want to argue that the human mind isn't constrained by the physical constraints of the universe but, then, they need to demonstrate that.<br /><br /><i>So you bring up "complexity"; ok, define "complexity" for us.</i><br /><br />See <a href="https://en.wikipedia.org/wiki/Circuit_complexity" rel="nofollow">Circuit Complexity</a>. Given the computation performed by neurons, we can estimate their complexity in terms of boolean circuits. One estimate (<a href="http://paper.ijcsns.org/07_book/201007/20100724.pdf" rel="nofollow">Superficial Analogies and Differences between the Human Brain and the Computer</a>) says the "Human brain’s memory power is around 100 terra flops. (i,e,100 trillion calculations/sec). 100 trillion synapses hold the equivalent memory power around 100 million mega bytes."<br /><br />So that enables a comparison of raw power. However, it still misses the point that organization is important, too. Differences between programs are differences in wiring. So not only are memory capacity and speed important, so too is how the wires are arranged.<br /><br /><i> which as a Catholic I can only welcome.</i><br />The truth shall set you free. 
The one who actually raised Himself from the dead said that, IIRC.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-4918186152841173872018-07-12T12:49:31.881-07:002018-07-12T12:49:31.881-07:00grodrigues: And the main difference between the tw...<b>grodrigues</b>: <i>And the main difference between the two is that the second has bounded memory and cannot perform arbitrary recursion</i><br />No, that's not the main difference at all. Both (can) have bounded memory. After all, in many systems, the stack is put at the top of memory and the heap at the bottom. The stack grows down, the heap grows up. Overflow results if they collide. The difference is that a lookup table isn't a stack. It reads, but it cannot write.<br /><br />The whole point of this silly argument is the claim that <i>Everyone [a lookup table] knows this is not a conscious being.</i> Comparing a human brain to a lookup table is clearly ludicrous. A human brain can tell if the parentheses balance in the expression "(((((((((((((((((1+3))))))))))))))))". A lookup table cannot.<br /><br /><i>Good try at changing the subject.</i><br />Good try at evading the issue. You claimed <i>I said a Turing machine is a "formal, mathematical object". I did not speak anywhere of immateriality, non-physicality, etc</i>. If mathematical objects are immaterial, then you certainly did. If mathematical objects aren't, then you didn't. So I'm simply asking you to clarify your position.<br />
Forgot this:
"A lot of philosoph...@reighley:<br /><br />Forgot this:<br /><br />"A lot of philosophy of the mind arguments (Searle's Chinese Room and Kripke's plus/quus for example) seem to depend at least a little on the architecture of the machine in question."<br /><br />I do not see how this could be. The chinese man is a stand-in for a universal Turing machine; it does not depend on any architectural details. Kripke's example arguably depends on the difference between finite and infinite memory, but that is a material constraint, not an architectural one. And as James Ross points out, even the finite qualifier is not really important for the indeterminacy of the physical.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-29015636358117792202018-07-13T06:23:02.350-07:002018-07-13T06:23:02.350-07:00@reighley:
"I think the reference is to Mont...@reighley:<br /><br />"I think the reference is to Monty Python's "The Argument Clinic". British."<br /><br />Ah, that makes sense. But now I would say the irony is lost on Anonymous too.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-53887958322319491882018-07-13T05:53:39.903-07:002018-07-13T05:53:39.903-07:00@Anonymous:
"If the argument is that the hum...@Anonymous:<br /><br />"If the argument is that the human brain is a Turing machine, then the typical response is that humans can do things that computers can't do. So then we have to ask is the difference between minds and machines one of kind, or degree?"<br /><br />First point, is that the brain *cannot* be a Turing machine because it has bounded memory. Period. But let us put aside detail. Both a brain (viewed as a Turing machine) and any garden variety computer you can buy at a store are *universal Turing machines*, that is, they themselves can simulate any Turing machine. Mathematically, there is no difference between any two universal Turing machines. So you bring up "complexity"; ok, define "complexity" for us, and I mean a precise definition, not your usual hand-waving baloney -- and then prove that human brains are indeed "more complex" than any existing computer. You will not be able to do it, but hey, prove me wrong as it will be a humbling exercise for myself which as a Catholic I can only welcome.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-88382354814887329162018-07-13T05:51:54.359-07:002018-07-13T05:51:54.359-07:00@Anonymous:
"A lookup table cannot evaluate ...@Anonymous:<br /><br />"A lookup table cannot evaluate whether or not the parenthesis in an arbitrary expression are balanced."<br /><br />This is the difference between a full blown parser and a regular expression parser. And the main difference between the two is that the second has bounded memory and cannot perform arbitrary recursion -- oh wait, that was *precisely* what I said.<br /><br />And even if we leave aside it is simply false that "and they can all be implemented by (finite) lookup tables. (false)" is false per your parenthethic remark. Either you are objecting to the finite in between parenthesis or to the universal quantifier opening the sentence. The finite is because I spoke of bounded memory machines so what you must be objecting to is the universal quantifier. A Turing machine, any one Turing machine, computes a function. And the abstract, set-theoretic definition of function, *just is* a look-up table -- minus the finite in between parenthesis.<br /><br />Go read a book, I am out of patience with your ignorance.<br /><br />"What is the nature of a mathematical object?"<br /><br />Good try at changing the subject.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-65428402074064843132018-07-12T19:17:49.857-07:002018-07-12T19:17:49.857-07:00grodrigues: Well, I am not going to repeat, for wh...<b>grodrigues</b>: <i>Well, I am not going to repeat, for what would be the third time, what I said. If you do not want, or cannot, read, there is not much I can do.</i><br />You said, "Bounded memory machines are provably less powerful than a Turing machine (true) and they can all be implemented by (finite) lookup tables. (false)"<br /><br />A lookup table cannot evaluate whether or not the parenthesis in an arbitrary expression are balanced.<br /><br /><i>I said a Turing machine is a "formal, mathematical object". 
I did not speak anywhere of immateriality, non-physicality, etc</i><br />What is the nature of a mathematical object?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-64162191278334185432018-07-12T18:51:57.147-07:002018-07-12T18:51:57.147-07:00reighley: I feel like you are trolling me with nom...<b>reighley</b>: <i>I feel like you are trolling me with nominalism.</i><br /><br />I'm not. In fact, I think that if you do the exercise I suggested, it will lead away from nominalism. Can you tell by looking at an object with two inputs and one output what logic function it implements? You can't. So, then, how do you get meaning out of a circuit or a neural net? Is meaning emergent or fundamental?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-13883060748720955652018-07-12T17:25:46.269-07:002018-07-12T17:25:46.269-07:00@grodrigues
"Sorry, but the joke is lost on ...@grodrigues<br /><br />"Sorry, but the joke is lost on me. I would imagine this is an americanism."<br /><br />I think the reference is to Monty Python's "The Argument Clinic". British.<br /><br />https://www.youtube.com/watch?v=XNkjDuSVXiEreighleynoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-59277161397578570672018-07-12T16:50:22.763-07:002018-07-12T16:50:22.763-07:00@Anonymous
I feel like you are trolling me with no...@Anonymous<br />I feel like you are trolling me with nominalism.reighleynoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-59499087841082637602018-07-12T16:01:18.927-07:002018-07-12T16:01:18.927-07:00Dude, give up... you are projecting your brain ope...Dude, give up... you are projecting your brain operations onto the computer. All your arguments depend on this one sleight.Eduardohttps://www.blogger.com/profile/13394763910547162114noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-1986134570839705642018-07-12T15:33:53.437-07:002018-07-12T15:33:53.437-07:00@Anonymous:
"I think you want room 12A."...@Anonymous:<br /><br />"I think you want room 12A."<br /><br />Sorry, but the joke is lost on me. I would imagine this is an americanism.<br /><br />"How does a non-physical thing transition from state to state? How does a non-physical thing put a symbol on a tape?"<br /><br />I said a Turing machine is a "formal, mathematical object". I did not speak anywhere of immateriality, non-physicality, etc. And when in a logic, or computer science, they use terms like "tape", "head", etc. these terms are either used in an informal, suggestive way, or they have precise definitions. When Beilinson speaks about "perverse sheaves", only an idiot would wonder about the moral virtue of a sheaf.<br /><br />Honestly, just take my suggestion, go read a book. <br /><br />""Pre-computing"."<br /><br />Well, I am not going to repeat, for what would be the third time, what I said. If you do not want, or cannot, read, there is not much I can do.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-65966772393078494802018-07-12T14:24:16.344-07:002018-07-12T14:24:16.344-07:00grodrigues: No, it is not. That was easy.
I think...<b>grodrigues</b>: <i>No, it is not. That was easy.</i><br />I think you want room 12A.<br /><br /><i>It is usually considered "a physical device" only be ignorant people.</i><br />How does a non-physical thing transition from state to state? How does a non-physical thing put a symbol on a tape?<br /><br /><i>so now you have a function that *is* implemented as a lookup table by pre-computing the values in the compilation phase.</i><br />"Pre-<b>computing</b>".<br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-59418732049618827322018-07-12T14:12:27.771-07:002018-07-12T14:12:27.771-07:00reighley: And something also has to program the Tu...<b>reighley</b>: <i>And something also has to program the Turing machine and give the entire abstraction a semantics.</i><br /><br />Sure. Remember, the program is the wiring and your wiring is a product of Nature.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-50112148145903656602018-07-12T14:06:58.990-07:002018-07-12T14:06:58.990-07:00We have no strong reason to think that any suffici...<i>We have no strong reason to think that any sufficient complexity leads to consciousness</i><br />There you go with subjective adjectives, again. I think the structure of neural networks (which we know compute) and their complexity in the human brain is quite a strong argument. This argument is bolstered by increasing capability tracking increasing complexity in our computers.<br /><br /><i> Doing so does not tell us how to build our own.</i><br />We can try to mimic it via continued algorithmic development. I'm not sure how successful that will be. 
We could also evolve it, just like Nature did.<br /><br /><i>I am saying that an algorithmic interpretation of the brain leads to an immaterial component of the algorithm that makes up the brain.</i><br />Just FYI, I agree with you, although we may (or may not) agree on what the immaterial component entails.<br /><br /><i>If each algorithm is unique and you cannot transmit the same algorithm across different neural substrates,...</i><br />Why would you think algorithms can't be transmitted? Surely I could teach you Euclid's algorithm for finding the greatest common divisor of two numbers, or an algorithm for sorting. The problem isn't the transmission of an algorithm. WRT the human brain, the problem is knowing what the algorithm is.<br /><br /><i>But that just goes to show that no 'algorithm,' in the sense we usually talk about algorithms, can be implemented in a physical, only approximated...</i><br />How else is an algorithm implemented, if not in a physical way? We can think about the λ calculus using unphysical symbols and unphysical connections, taking no time at all to generate, but I don't know how to communicate that except by physical things.<br /><br /><i>If consciousness originates inside neurons then consciousness would end when our neurons die, not when the wiring is severed.</i><br />Do a thought experiment. Sever all of the neurons. What, then, is the difference between electrons not being able to flow across severed connections and not being able to flow at all?<br /><br />Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-44643895538470473352018-07-12T13:47:15.799-07:002018-07-12T13:47:15.799-07:00I think your argument would be much clearer if you...<i>I think your argument would be much clearer if you worked to resolve the contradiction in this pair of sentences.</i><br />You're absolutely right. Let me try again. Consider a binary "logic" gate. 
It operates on two distinct objects (it doesn't matter what they are), and produces one of the two objects as output. If you look at the behavior of a single gate, it is impossible to tell if it is an AND gate or a NAND gate; an OR gate or a NOR gate, etc. Furthermore, if you look at two gates that have the same behavior, it's still impossible to tell if they're the same gate (the gate could be used as a NAND gate in one place and an AND gate in another). We might say that they are, due to economies of scale of mass production, but you can't determine that by looking at them.<br /><br />So I'll leave it as an exercise to figure out how to tell what a particular circuit does. (And your answer, BTW, should solve the "problem" of qualia).Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-41714914329235033582018-07-12T13:01:15.639-07:002018-07-12T13:01:15.639-07:00"Unless the assertion really is either that t..."Unless the assertion really is either that the particular algorithm implementing a function or the concrete material implementation of said algorithm are actually important details"<br /><br />Might they be important details? A lot of philosophy of the mind arguments (Searle's Chinese Room and Kripke's plus/quus for example) seem to depend at least a little on the architecture of the machine in question.<br /><br />Your point is well taken that if we do not abstract away implementation details then we will not be able to apply much of the theory of computation, but honestly I think we cross that bridge when we admit that our own minds are probably finite.<br /><br />I don't think it is a total loss though. 
A lot of the methods (Gödel numbering things, simulation of one machine by another, maybe even lambda reduction to a fixed point) might be of use on Minds as well as on Functions even if it turns out that those two classes are in no way isomorphic to one another.reighleynoreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-29144647695319023592018-07-12T12:51:40.032-07:002018-07-12T12:51:40.032-07:00@Anonymous:
I missed this:
"I said as much....@Anonymous:<br /><br />I missed this:<br /><br />"I said as much. But humans use external storage, so our "bounded memory" is as big as the universe, however big that is (and however small we can make our symbols)."<br /><br />Since we are discussing the nature of the mind itself, the fact that we can have recourse to "external storage" itself is irrelevant. And whatever storage we have access to it is still finite by fundamental physical constraints (unless some truly spectacular overturning of physics as we know it occurs). To name just one, since the light speed puts a cap on the speed of signal transmission, no we do not have access to the entire universe. To name the second: we cannot make our symbols arbitrarily small, by which I mean that a quantum of information cannot occupy an arbitrarily small volume.grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-91399205111524384802018-07-12T12:18:14.171-07:002018-07-12T12:18:14.171-07:00@Anonymous:
"It certainly is."
No it i...@Anonymous:<br /><br />"It certainly is."<br /><br />No it is not.<br /><br />That was easy.<br /><br />"A Turing machine is usually considered to be a physical device"<br /><br />A Turing machine is a formal, mathematical object. It is not a physical object, not here, not on Mars, not anywhere in the universe. It is usually considered "a physical device" only by ignorant people. Go read a book, you do not know what you are talking about.<br /><br />"All you have to do is ask."<br /><br />No thanks, there is only so much that I can stand.<br /><br />"What puts the value in a particular index in a lookup table?"<br /><br />Quite obviously you have not read, much less understood, what I said.<br /><br />A lookup table is an *implementation* of a function. There is a technique called memoization that trades space for time, by caching the results of a function. In a language like Python, this is a single line (well two, if you count the import statement). In a language like Haskell that features immutable data structures by default and lazy evaluation, this is done automatically behind the scenes. This can even be turned into a *compilation* technique. In a language like C++ this is possible in principle (the template language is Turing-complete) but an exercise in masochism. 
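[Editorial illustration] The "single line (well two, if you count the import statement)" in Python is presumably a reference to functools.lru_cache; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the one line (plus the import)
def fib(n):
    # each distinct argument is computed once and then served from the
    # cache: the function's graph is materialized as a lookup table
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(100)  # fast: only a linear number of cache misses
assert fib.cache_info().misses == 101  # one cached entry per n in 0..100
```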
In a homoiconic language like Scheme with hygienic macros this is a trivial exercise -- so now you have a function that *is* implemented as a lookup table by pre-computing the values in the compilation phase.<br /><br /> So to repeat myself, an egregious sin you forced upon me, "Unless the assertion really is either that the particular algorithm implementing a function or the concrete material implementation of said algorithm are actually important details, which of course is not only a complete absurdity because it defeats the whole purpose of bringing the theory of computation into the picture, is completely unargued for."grodrigueshttps://www.blogger.com/profile/12366931909873380710noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-16587927525890275982018-07-12T10:52:34.996-07:002018-07-12T10:52:34.996-07:00"How so? If the argument is that the human br..."How so? If the argument is that the human brain is a Turing machine, then the typical response is that humans can do things that computers can't do. So then we have to ask is the difference between minds and machines one of kind, or degree? By looking at physical structures, we can see that it's one of degree. Complexity of wiring is correlated with complexity of ability. The computers I worked with 40 years ago did not have the complexity of today's machines, and today's machines can do things that simply weren't possible back then."<br /><br />We have no strong reason to think that any sufficient complexity leads to consciousness, only that some level of complexity is required for consciousness. It's trivial to claim that conscious systems are complex systems because the only evidence of a conscious system we have is highly complex. Doing so does not tell us how to build our own.<br /><br />"You put a label ("soul") on something you don't understand and can't explain and are then dissatisfied with your system. 
For example, does this "immaterial soul" exist independently of the structure of your brain? Why, or why not? Does this "immaterial soul" depend on the complexity of the wiring of your brain? Why, or why not?"<br /><br />Don't assume I am proposing an immaterial soul. I am saying that an algorithmic interpretation of the brain leads to an immaterial component of the algorithm that makes up the brain. If each algorithm is unique and you cannot transmit the same algorithm across different neural substrates, then the utility of an algorithmic interpretation of the brain disappears.<br /><br />"No. You can't tell from a computer circuit what that circuit does. All you can do is look at what it does and then see if you can figure out how the circuit does it. But you simply can't tell if a gate is a NAND gate or an AND gate or some other gate, except by looking at what the entire system does (and if you tell me that you can, I'll show you what unwarranted assumptions you're making)."<br /><br />A NAND gate and an AND gate can take many forms. Nothing is intrinsically a logic gate until you quantify what voltage levels are intrinsically 1's, what voltage levels are intrinsically 0's, what timescales you are looking at, and so on, which is to say that logic gates are only intrinsically assigned. But that just goes to show that no 'algorithm,' in the sense we usually talk about algorithms, can be implemented in a physical, only approximated, which to my mind casts some serious doubt on the idea that consciousness originates in the algorithm itself and not something in the physical properties.<br /><br />"Suppose that's true? So what? It's still tied to brain wiring. Sever certain wires and you're no longer conscious."<br /><br />That's not a given. If consciousness originates inside neurons then consciousness would end when our neurons die, not when the wiring is severed. I don't claim to know. 
But I'm not claiming it's settled science either, not when there is no empirically testable scientific definition of consciousness.Mooknoreply@blogger.com
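[Editorial illustration] The gate point discussed in this thread, that the same physical behaviour reads as different logic depending on how voltage levels are assigned to bits, can be sketched in Python; the active-high/active-low conventions are an illustrative choice of mine:

```python
# One physical device, described only by its voltage behaviour:
# the output is HIGH (5V) exactly when both inputs are HIGH.
HIGH, LOW = 5.0, 0.0

def device(v1, v2):
    return HIGH if (v1 == HIGH and v2 == HIGH) else LOW

def read_active_high(v):  # convention 1: 5V means logical 1
    return 1 if v == HIGH else 0

def read_active_low(v):   # convention 2: 0V means logical 1
    return 1 if v == LOW else 0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    v_out = device(HIGH if a else LOW, HIGH if b else LOW)
    assert read_active_high(v_out) == (a & b)     # read as an AND gate
    assert read_active_low(v_out) == 1 - (a & b)  # read as a NAND gate
```

Nothing about the device itself picks one reading over the other; the gate's identity comes from the surrounding signaling convention.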