Comments on Edward Feser: "The particle collection that fancied itself a physicist"

Ryan Clark (2023-06-03 16:33):

>>>That's a bizarre statement. If you aren't aware of them, how do you know they exist?<<<

Basic, fundamental logic?

How could such points logically not exist... without some convoluted, seemingly unwieldy "logical" arguments?

Ryan Clark (2023-06-03 14:24):

>>>After all, as Greene himself happily acknowledges, there are no laws that allow us rigorously to predict the behavior of systems conceived of as dogs, cats, basketballs, dollar bills, human beings, etc.<<<

To be fair, don't Greene and thinkers like him just mean that it would be too labor-intensive to deal with these complex systems (i.e., humans and other biological systems) in terms of fundamental physics? Don't get me wrong, I agree with the overall point you're trying to make about conscious experience, but I just think this particular point could be misconstrued...

Anonymous (2020-10-05 16:07):

A "point" is not a real physical object. It is just as abstract as the concept of "infinity". In mathematics we have infinite (and infinitely small, like a point) objects because of the axiom of infinity. But an axiom is a statement that is just assumed to be true; it is not proved in any way. So you can't use it to prove something on a physical level.
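The axiom the commenter is referring to is the axiom of infinity of ZF set theory. In one standard formulation it simply posits, rather than derives, the existence of an inductive set, i.e. a set containing the empty set and closed under successor:

```latex
\exists I \, \bigl( \varnothing \in I \;\land\; \forall x \, ( x \in I \rightarrow x \cup \{x\} \in I ) \bigr)
```

Note that it asserts existence outright; nothing is derived, which is the commenter's point about axioms being assumed rather than proved.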
Anonymous (2020-09-28 16:09):

@JoeD

Why is God immaterial, etc.?

Because circularity is not an explanation except in a circular mind.
Anonymous (2020-09-28 15:32):

@JoeD

Why is God not material, etc.?

Because material things are subject to the physical forces of the material world that change material things. The First Cause cannot be changed by anything, or it wouldn't be the first cause. But without a first cause you have your infamous regress, which explains nothing; or you sacrifice causation and the fundamental principle of science: that things that happen are caused, and they are what we call effects.

You can see that we have a universe. What caused it?

Materialist responses cannot answer this. They are all a form of running away in accordance with wishful thinking, a prejudice that makes it just circular reasoning, fundamentally unsupported, just like, I suppose, the universe that they imagine and desperately hope exists without a founding principle. I call this "the imaginary materialist universe".

Sorry to get sarcastic, but you'll probably have the last word.

Shaun (2020-09-22 11:49):

Maybe we can formulate Greene's reductionism as an inference to the best explanation. Using the map and territory analogy: if we have a map that is consistently able to describe the territory accurately, then perhaps the best explanation is that the territory really is just what is described on the map. Of course one can always say that there may be more to the territory than what is described by the map, so you haven't proven reductionism 100% deductively, but the reductionist has still made an argument for his position.

A further point. You say that "Though the details of the story have changed over the centuries, what has persisted to the present day is a tendency to treat so-called secondary qualities as merely the qualia of our conscious experience of the material world, rather than anything to be found in the material world itself. They are simply not the kind of thing that can be captured by the purely quantitative, mathematical language to which physics confines its description of matter. And the problem is that this conception of matter entails a kind of dualism. For if these qualities do not exist in the material world, then they must not exist in the brain, which is part of the material world. Yet if they do exist in the mind, then the mind must not be identical with the brain or with any other part of the material world."

This doesn't necessarily follow. The qualia were identified as existing not in the object itself, but rather in the mind of the person experiencing the object. Now maybe the mind can be explained in physicalist terms of brain states and such, so that qualia can be said to exist in the material world, just not in the external material world. To put it another way: yes, atoms are all that exist, but the color red does not exist in the atoms of the red-colored object, but rather in the atoms of our brain.
Houdini (2020-09-21 18:25):

Many pop scientists fancy themselves reductionists while covertly smuggling in Aristotelian assumptions.

wrf3 (2020-09-10 06:16):

Talmid: "Just a piece of advice: change your language a bit; you come across as very arrogant."

I'm sure it looks that way to you. But, remember, you're the one who won't engage with the counter-arguments as to why Searle is wrong.

"When you come here acting as if an argument as widely discussed and accepted as the Chinese Room is just obviously wrong, it is hard to take you seriously."

Agreed by whom? Have you dealt with John McCarthy's objections? You know, the John McCarthy who is a legend in artificial intelligence, as opposed to Searle, who isn't?

Do you not see how hard it is to take seriously people who think Searle's argument has merit? It's snake oil. It's easily shown to be snake oil, but you won't engage with the counter-argument.

wrf3 (2020-09-10 05:59):

Talmid: "Again, I think that you fail to see Searle's point."

I think you fail to see that I really do understand Searle's point, having gone through his argument line by line, and thoroughly disagree with his conclusion because I see where he makes mistakes.

And I've provided links that show where Searle's argument is wrong, and why it's wrong, and you haven't engaged with them. It's not enough for you to claim "you don't understand Searle" when you haven't dealt with those objections. Show me where my objections to Searle's argument are wrong, and then I'll believe you when you claim I don't understand Searle.

"Searle shows that the man in the room can follow rules to manipulate symbols easily while never associating them with concepts or knowing what he is doing..."

Yes, that's Searle's argument: "By following these rules, a computer can translate Chinese without understanding Chinese." Yes, that's a true statement. We have such systems today. I know how they work. In fact, I've built a commercial system that does that very thing on a very small scale. But Searle then concludes, "therefore, we cannot make a machine that translates and understands Chinese." Why is this the wrong conclusion?

Because Searle doesn't understand the theory behind the rules! Searle is saying, "We can build a machine using one set of rules that can translate Chinese but not understand Chinese; therefore we can never build a machine that can translate and understand Chinese using a different set of rules."

Searle tries to address this point in his dialog in five parts on page 11, and completely falls flat on his face in question number 5. He does not understand that the lambda calculus has both syntax and semantics.

"Searle's point is that this is exactly what a computer does..."

Good grief, the clearly demonstrable problem with Searle's position is that he doesn't understand the rules. He's like the White Witch in The Chronicles of Narnia, who is oblivious to "a deeper magic." He doesn't understand the difference between what they do do and what they can do. He doesn't understand the difference between what the theory allows and what our current state of engineering can produce.
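To make the disputed "rule-following" concrete, here is a minimal sketch in Python, with an invented two-entry rulebook, of the pure symbol manipulation both commenters are describing. Nothing in it associates a symbol with a concept; it only matches shapes.

```python
# A toy "Chinese room": translation by rule-following alone.
# The rulebook is an invented stand-in for Searle's baskets of symbols;
# nothing in this program associates any symbol with a concept.
RULEBOOK = {
    ("gou",): "dog",                 # hypothetical rule: symbols in -> symbols out
    ("mao",): "cat",
    ("gou", "chi"): "the dog eats",
}

def follow_rules(symbols):
    """Return whatever the rulebook dictates, or a 'no rule' marker.

    The function never inspects what a symbol *means*; it only matches
    shapes, exactly like the man in the room matching squiggles."""
    return RULEBOOK.get(tuple(symbols), "??")

print(follow_rules(["gou"]))         # -> dog
print(follow_rules(["gou", "chi"]))  # -> the dog eats
```

Whether some richer set of rules could also amount to understanding is precisely what wrf3 and Talmid disagree about; the sketch only illustrates the part they agree on.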
"this is going nowhere "
Because...Talmid,<br />"this is going nowhere "<br />Because you just keep repeating the same non-sequitur no matter how carefully, how completely, and how explicitly your logical fallacy is explained to you.<br /><br />Yes, a computer today running software today does not have consciousness.<br /><br />It is a non-sequitur to assert, therefore, that no man made device can ever achieve consciousness.<br /><br />It does not follow. Your assertion does not follow.<br /><br />David Chalmers proposed a thought experiment much like this.<br /><br />Suppose we could build a technological device that precisely modeled all the electrochemical transfer functions of a single brain cell.<br /><br />Then, suppose that device could be implanted in your brain to replace just one single cell.<br /><br />Would you still be you?<br /><br />How about two such devices? Would you still be you?<br /><br />How about 10, 1000, or 1000000000 such devices, one at a time in such a way that the implantation was transparent and seamless to your existing brain functions. Would you still be you?<br /><br />How about if this one by one replacement continued until every brain cell had been replaced? Would you still be you?<br /><br />Clearly, yes, your repeated mantra of non-sequitur notwithstanding.<br />StardustyPsychehttps://www.blogger.com/profile/12493629973262220492noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-66808689304423651992020-09-09T19:32:31.534-07:002020-09-09T19:32:31.534-07:00But i think we should stop here, this is going now...But i think we should stop here, this is going nowhere and we both have more important things to do. <br /><br />Just a advice: change you language a bit, you come across as very arrogant. When you come here acting like a argument so discussed and agreed as the Chinese Room is just obviously wrong is hard to take you serious.Talmidhttps://www.blogger.com/profile/04267925670235640337noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-3673759896337764232020-09-09T19:24:12.879-07:002020-09-09T19:24:12.879-07:00@wrf3
Talmid (2020-09-09 19:24):

@wrf3

Again, I think that you fail to see Searle's point. Our minds get sense data and abstract concepts from it; from several experiences of dogs we get "dogness", for instance. The word "dog" is a symbol we use for that concept; it only means anything in our talk because we both associate it with a concept. Take us away, and the association between the concept and the word ceases.

Searle shows that the man in the room can follow rules to manipulate symbols easily while never associating them with concepts or knowing what he is doing. What he does and what someone fluent in Chinese does are different things, even if their behavior looks similar to someone who tries to communicate with them in writing.

Searle's point is that this is exactly what a computer does: it acts in a way that we take as following rules to manipulate symbols while never associating them with abstract concepts. That is it. A computer does not know "dogness", does not know language; it just acts in a way that we take as meaningful.

wrf3 (2020-09-08 00:29):

Talmid: "Searle shows that the man in the room and computers both translate Chinese while having no actual idea of the meaning of any word, or even of what they are doing."

Sure. Of course we can build a system that translates language without understanding the language. But that doesn't mean that we cannot build a system that translates language and understands the language. The conclusion does not follow. Searle tries to argue that it does, but his reasons are demonstrably wrong. I've given you several links as to why.

"I mean, does anyone believe that, say, the iPhone's Siri comprehends the meaning of the terms it 'uses'?"

Not yet. It's just a matter of time. Also, there are machines that do comprehend limited vocabularies. Certainly better than, say, my dog does.

"Is it not obvious that what it does is just not what a mind does?"

No, it isn't obvious. You're looking at the behavior of a comparatively simple object, comparing it to the behavior of an extremely complex object, and saying, "See? Isn't it obvious that it's a difference of kind and not degree?"

Talmid (2020-09-07 20:06):

@wrf3

Searle's argument is way more profound than that. Searle shows that the man in the room and computers both translate Chinese while having no actual idea of the meaning of any word, or even of what they are doing. This pretty much means that Turing was wrong: a computer can pass his test, sure, but that by itself will not mean anything.

The man can translate Chinese very well, sure, but the activity he performs is not what a speaker of Chinese does; same thing with computers. The method of symbol manipulation that computers use can trick people, but it is not what we do. That is Searle's point.

I mean, does anyone believe that, say, the iPhone's Siri comprehends the meaning of the terms it "uses"? That it has abstract concepts like ours? Is it not obvious that what it does is just not what a mind does?

wrf3 (2020-09-07 07:48):

Talmid: More on Feser's Empiricism versus Aristotelianism (http://edwardfeser.blogspot.com/2009/06/empiricism-versus-aristotelianism.html):

"Since Wittgenstein, contemporary philosophers have been pretty well inoculated against the error of supposing that concepts could possibly be reduced to images of any sort."

Sure. Images are not descriptions. We may (and often do) associate primitive images with primitive descriptions but, for most people, our ability to have complex descriptions outruns our ability to have complex internal visualizations.

But, so what? Descriptions are computations over large networks of meaning (i.e., associations of symbols with values). So far, this doesn't help your objection.

wrf3 (2020-09-07 07:39):

Talmid: Regarding Feser's Empiricism versus Aristotelianism (http://edwardfeser.blogspot.com/2009/06/empiricism-versus-aristotelianism.html), Feser writes:

"Both empiricists and Aristotelians hold that 'nothing is in the intellect that was not first in the senses,' as the hoary Aristotelian dictum puts it."

How does an Aristotelian classify self-reflection? Is that intellect, sense, both, or something else altogether? If something else, what?

wrf3 (2020-09-07 04:05):

Talmid: "Searle's idea of 'meaning' is probably close to my own, since the point of the Chinese Room seems to be that you can easily follow rules correctly to manipulate symbols while being incapable of associating them with abstract concepts."

It just astounds me that people can think that Searle's argument is right. Put simply, Searle is saying, "We can build a system that translates Chinese but doesn't understand Chinese. Therefore, we cannot build a system that translates Chinese and understands Chinese." The conclusion simply doesn't follow. Furthermore, it is clear that Searle doesn't understand computation (https://stablecross.com/files/Searle_Chinese_Room.html). Neither does Feser (https://stablecross.com/files/feser_searle.html).

For the longest time I was baffled why so many supposedly smart people bought Searle's argument when it was clearly a non sequitur. Then, years later, I came across John McCarthy's take (http://jmc.stanford.edu/articles/chinese.html). McCarthy is a true legend in computer science in general and artificial intelligence in particular. That he saw the same problem I did was reassuring.

I'll look at the link to Feser's article later.

Talmid (2020-09-06 19:47):

@wrf3

I guess that by "meaning" I mean something like an association between an abstract concept and a symbol. I think that this old post from Feser that I showed to Stardusty before will help you understand what exactly an abstract concept is, since some people confuse them with other things: http://edwardfeser.blogspot.com/2009/06/empiricism-versus-aristotelianism.html?m=1

Searle's idea of "meaning" is probably close to my own, since the point of the Chinese Room seems to be that you can easily follow rules correctly to manipulate symbols while being incapable of associating them with abstract concepts.

wrf3 (2020-09-05 07:58):

Talmid wrote: "Your claim being demonstrable or not does not mean that you should just take the conclusion of your argument for granted."

The whole point of an argument is to demonstrate the conclusion, stepping from beginning to end with no leaps. A construction is such an argument. If you're going to deny the conclusion, then you have to show a misstep. Mere denial simply isn't enough.

"our ideas of what meaning formation means seem different."

That may be. I've shown you what meaning formation is, from the standpoint of computational theory. If you don't agree with it (but if you use a dictionary, you must!), then what is your definition? Make it available for examination!

"Would you say that meaning formation is..."

I've already said what it is. Meaning is the association of a symbol with a value. The key insight is that a "value" is taken from the same "alphabet" as the symbols. And the alphabet for symbols can be anything -- atoms will do. So meaning -> (symbol value) -> (symbol symbol) -> (atom atom), or however it happens to be implemented. It's behavior, not implementation.
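A minimal sketch, assuming nothing beyond Python dictionaries, of the (symbol value) / (symbol symbol) picture wrf3 describes here: every "value" is drawn from the same alphabet as the symbols, so unpacking a meaning just chases symbol-to-symbol links. The vocabulary is invented for illustration.

```python
# Hypothetical miniature "network of meaning": every value on the
# right-hand side is itself a symbol from the same alphabet.
meanings = {
    "dog":    ("animal", "barks"),   # (symbol symbol) pairs all the way down
    "animal": ("organism",),
    "barks":  ("makes-sound",),
}

def unfold(symbol, depth=2):
    """Expand a symbol into the symbols it is associated with, to a
    fixed depth; 'meaning' here is nothing but these links."""
    if depth == 0 or symbol not in meanings:
        return symbol
    return {symbol: [unfold(s, depth - 1) for s in meanings[symbol]]}

print(unfold("dog"))
# {'dog': [{'animal': ['organism']}, {'barks': ['makes-sound']}]}
```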
"It isn't begging the question if ...@wrf3<br /><br />"It isn't begging the question if the claim is demonstrable."<br /><br />Yep, it is. Your claim being demonstrable or not does not means that you should just take the conclusion of your argument for granted. But this is just waste of time, so i will not pursue it further.<br /><br />And i think that we actually founded the difficult! our ideas of what meaning formation means seens different.<br /><br />Would you say that meaning formation is just to have a symbol having a sort of casual relation with values or something like that?Talmidhttps://www.blogger.com/profile/04267925670235640337noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-43256721881640722122020-09-05T03:09:28.199-07:002020-09-05T03:09:28.199-07:00Talmid: You are begging the question here.
It isn...<b>Talmid</b>: <i> You are begging the question here.</i><br />It isn't begging the question if the claim is demonstrable. Meaning formation is described by the lambda calculus and occurs any time a symbol is associated with a value.<br /><br /><i>I think that we are wasting a lot of time discussing useless stuff...</i><br />Pointing out errors isn't useless. At least, I hope it isn't. Feser's theory of mind is demonstrably wrong.<br /><br /><i>Are associations between simbols and what they refer objective or not?</i><br />In theory, yes. In practice, it may be hard to see it. Simple meaning is just a (symbol value) pair. Complex meaning is formed by associating a symbol with many (symbol value) pairs. What needs to be understood is that the values -- what the symbol means -- <i>is composed of the symbols</i>. So meaning, inside the system, is a (symbol symbol) pair. In practice, in a bafflingly complex system, it can be hard to follow the behavior of the system to know when a symbol is being used <i>as a symbol</i> and when it is being used <i>as a value</i>.<br /><br /><i>2. Are associations between thoughts and what is thought objective or only seems to be?</i><br />The (symbol symbol) association is objective but, as above, may be difficult for a third-party to see. Taking apart a physical brain is hard, and you won't get it back together again.wrf3https://www.blogger.com/profile/04657932934353372526noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-4835073182622289622020-09-04T20:12:13.165-07:002020-09-04T20:12:13.165-07:00@wrf3
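wrf3 appeals to the lambda calculus, where application is exactly the act of associating a symbol with a value. Here is a toy interpreter in Python for illustration; the tuple encoding is invented, and this is a sketch of the idea, not anyone's actual system:

```python
def interp(expr, env):
    """Interpret a tiny lambda language given as nested tuples:
    a str is a symbol, ("lam", x, body) an abstraction,
    (f, arg) an application; anything else is a literal."""
    if isinstance(expr, str):        # a symbol means whatever it is bound to
        return env[expr]
    if not isinstance(expr, tuple):  # literal value
        return expr
    if expr[0] == "lam":             # abstraction: close over the environment
        _, x, body = expr
        return lambda v: interp(body, {**env, x: v})
    f, arg = expr                    # application: create the (symbol value) pair
    return interp(f, env)(interp(arg, env))

# ((lam x. x) 42): applying the abstraction binds the symbol "x" to 42,
# and that binding is all the "meaning" x has inside the body.
print(interp((("lam", "x", "x"), 42), {}))  # -> 42
```

The environment extension {**env, x: v} performed at each application is the "(symbol value) pair" of the surrounding discussion.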
"The problem with your position is the...@wrf3<br /><br />"The problem with your position is the fact that meaning -- the association of things -- is an algorithmic, mechanical process. It doesn't matter if that process is implemented in carbon or silicon."<br /><br />You are begging the question here. What we are discussing is exactly if the creation of meaning is a mechanical process that can be made by a computer, no? <br /><br />I think that we are wasting a lot of time discussing useless stuff when our disagreement seems more on the nature of the formation of meaning and all that. Instead of answering your points, i just ask two questions, okay?<br /><br />1. Are associations between simbols and what they refer objective or not?<br /><br />2. Are associations between thoughts and what is thought objective or only seems to be?<br /><br /><br />By "objective" i mean that they are not a matter of opinion, like the fact that two plus two equals four or that things exist. <br /><br /><br />I know that it looks like i just can't answer you, but i legitimely think that we are kinda talking past each other, so i think that i should try something diferent to see if it works. If you answer, i will try to respond to questions too.Talmidhttps://www.blogger.com/profile/04267925670235640337noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-59346786337132396942020-09-04T07:53:04.884-07:002020-09-04T07:53:04.884-07:00Talmid: I don't see the problem. What i'am...<b>Talmid</b>: <i>I don't see the problem. What i'am saying is that meaning is made up BY US, not by machines.</i><br />The problem with your position is the fact that meaning -- the association of things -- is an algorithmic, mechanical process. It doesn't matter if that process is implemented in carbon or silicon.<br /><i>I would say that your method only makes sense if are looking to beings that CAN be inteligent, like a alien life form or something.</i><br />How do you know something <b>can</b> be intelligent? Do you think that only objects made of carbon can be intelligent?<br /><i>... we could just be wrong about the being behavior requiring rationality.</i><br />In two different recent discussions with two different atheists, both of them said that truth doesn't guide behavior, emotion does. Are they rational? Are they intelligent? How do you determine rational behavior from irrational behavior?<br /><i>... a mind is qualitatively different from a computer ...</i><br />You can repeat this all day long. Asserting it doesn't make it so. And your assertion flies in the face of neurobiology and computational theory.<br /><i> Try to go the by eliminativist way and deny our capacity for do that is just complete crazyness.</i><br />The "eliminativist" way doesn't deny our ability to do that. What it does say is that the wiring is the program -- whether it's the carbon wiring of neurons in the brain or silicon wiring in a Horta. It's the complexity of the wiring that gives rise to the complexity of behavior.<br /><i>... Feser and Searle arguments seems solid to me.</i><br />On the contrary, they are demonstrably wrong, e.g. <a href="http://jmc.stanford.edu/articles/chinese.html" rel="nofollow">here</a>, <a href="https://stablecross.com/files/Searle_Chinese_Room.html" rel="nofollow">here</a>, <a href="https://stablecross.com/files/feser_searle.html" rel="nofollow">here</a>.<br /><i>By "mind-independent" i mean that the relation does not depend on a unrelated observer.</i><br />Sure. 
It depends on the related observer -- you.<br />So now you have a bunch of observers (if you're not a solipsist) who are observing a common nature (allegedly) and forming associations in their brains (because that's what brain wiring does. Whether carbon or silicon). So, yes, brains that have connections to output devices can communicate their internal connections to other observers, who then map those communications to their own internal mappings. And, thus, meaning is shared.<br /><br /><br /><br /><br /><br /><br />wrf3https://www.blogger.com/profile/04657932934353372526noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-61577383088388669382020-09-03T20:32:31.186-07:002020-09-03T20:32:31.186-07:00"if you put him away them the relation goes a..."if you put him away them the relation goes away too"<br /><br />Talmidhttps://www.blogger.com/profile/04267925670235640337noreply@blogger.comtag:blogger.com,1999:blog-8954608646904080796.post-56560025602191476062020-09-03T20:26:58.245-07:002020-09-03T20:26:58.245-07:00@wrf3
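wrf3's closing picture of shared meaning can be sketched too. In this toy model (all names and mappings invented), each observer holds private symbol-to-symbol associations and only tokens travel between them; "shared meaning" is the overlap where a received token hooks into the receiver's own web:

```python
# Toy model of "shared meaning" between two observers (names invented):
# each observer has private internal wiring; they exchange only tokens.
alice = {"snow": {"cold", "white"}, "cold": {"low-temperature"}}
bob   = {"snow": {"white", "wet"},  "wet":  {"water"}}

def communicate(token, sender, receiver):
    """Sender emits a token; receiver maps it onto its own associations.
    The 'shared' meaning is the overlap of their private mappings."""
    if token not in sender or token not in receiver:
        return set()
    return sender[token] & receiver[token]

print(communicate("snow", alice, bob))  # {'white'} -- the shared part
```

Nothing but the token is transmitted; the overlap is what lets each side treat the other as meaning the same thing.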
"But you just agreed with me that mean...@wrf3<br /><br />"But you just agreed with me that meaning "is simply arbitrary associations between things". You can't have it both ways."<br /><br />I don't see the problem. What i'am saying is that meaning is made up BY US, not by machines. <br /><br />"The comic equivocates by using shallow explorations of complex behaviors. If you have some way of determining intelligence other than by observing behavior and comparing it to your internal web of meaning, what is it?"<br /><br />I would say that your method only makes sense if are looking to beings that CAN be inteligent, like a alien life form or something. We know that at least a life form is inteligent, so if we see a life form that acts on a way that looks rational we can think that he probably is smart. I say probably because, as the comic portrays, we could just be wrong about the being behavior requiring rationality. <br /><br />I repeat the problem: a mind is qualitatively diferent from a computer and a AI is just a more complex computer, so it just won't suddently become inteligent. Sure, you deny that a mind and a computer are qualitatively diferent, but i think you agree that your method can fail, so a computer passing in it does not trouble me. <br /><br />And why should i suggest a method? The method still has problems having or not having oponents. <br /><br />"You write this as if you know what the "substance" of intelligence is. What is the "substance" of my dog's intelligence? What is the substance of your intelligence? Is there any difference other than basic brain wiring? If so, what? How do you know?"<br /><br />I don't think that dogs are rational/inteligent, but i could not care less about discussing that, so, i will not. <br /><br />By "inteligence" i mean the capacity to understand abstract concepts(like man and mortal), to put together these concepts and make judgments(like that "men are mortal") and reason from these judgments to conclusions(like picking "all mans are mortal" and "Socrates is a man" to "Socrates is a mortal"). Try to go the by eliminativist way and deny our capacity for do that is just complete crazyness.<br /><br />And yea, i would say that our inteligence is not part of the brain, even if "she" needs it to work. I know by philosophical argumentation, really, Feser and Searle arguments seems solid to me.<br /><br />"I don't understand how you can claim that a relation between a "thought about snow" and "actual snow" is mind independent. Thoughts about things can't be independent from mind, and whatever actual snow is, it isn't perceived independently of mind.<br /><br />Did I misunderstand the question?"<br /><br />You did, but this is by my fault and not yours, sorry, that was a pretty dumb error. Why i did choose "mind-independent" when talking about MINDS is beyond me, haha.<br /><br /> By "mind-independent" i mean that the relation does not depend on a unrelated observer. The relation between the word "snow" and snow(you know, the real thing, if you are american you probably saw it before) depends on a third being, a observer, if you that he away them the relation goes away too. This is not the case between a mind and what "she" thinks about, there is no need to have a unrelated observer, the mind is actually related with the thing thinked. This can't be denied by at least three reasons:<br /><br />1. We know it happens by, you known, seeing our thoughts.<br /><br />2. This idea is implicity on the concept of a observer.<br /><br />3. 
The idea of trying to comunicate a idea(or trying anything) presuposes that minds can understand things, not just look like they can.Talmidhttps://www.blogger.com/profile/04267925670235640337noreply@blogger.com