Comments on Edward Feser, "Review of Smith's The AI Delusion"

I know that I don't know anything. This might be a straw man. Can we make something greater than ourselves?

Posted by Anonymous, 2023-12-09

No, word2vec doesn't encapsulate the meaning of a word. It analyzes the correlations of co-occurring words and allows one to pinpoint which words occur more frequently with each other. By looking at the resulting vector space, a human interpreter can notice that the word "good", for example, is close to words like "morning" and "afternoon" (because "good morning" and "good afternoon" frequently appear in texts), but this tells us absolutely nothing about what "good" means. The algorithm will produce results even if you input pure gibberish into it. If you give it a text like "aaaa bbbb cccc", for example, word2vec might tell you that "bbbb" commonly occurs near "cccc".

It's algorithms like these that are used in smartphones to predict which word you will type next. Your cellphone's keyboard isn't conscious (though the NSA agent spying on what you type probably is), and neither does it need to be, because it's all a game of statistics and probability with no regard for semantics.

Pretty much ALL artificial intelligence nowadays is no more than algorithms trained to be very good at placing bets. But we don't place bets when thinking about determinate concepts.
The meaning of "two" is not a matter of probability; it's a matter of certainty.

Word-processing algorithms never take meaning into account, no matter how state-of-the-art they may be, and they never will.

Posted by Mutoh, 2019-11-05

Judea Pearl, who won the Turing Award for his work on Bayesian nets and causality, agrees that AI is basically a process of curve-fitting, but he believes that a machine can be truly intelligent if it has been taught cause and effect.
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/

Posted by Joe, 2019-09-17

I read somewhere that it was shown by analytic means that the epiphenomenal approach to the mind-body problem entails a self-contradiction in one of the intermediate steps. [That is to say, the mind cannot be an epiphenomenon of the body.]

Posted by Avraham, 2019-09-13

It should also be emphasized that putting a set of physical processes into action cannot be creating logical reasoning. If a bunch of dominoes are knocking each other down in a sequence, this just isn't the same thing as reasoning from premises to conclusion. Reasoning cannot be defined as "A causes B, which then results in C," because this makes no distinction between reaching a valid conclusion and reaching an invalid one. People often reach incorrect conclusions.
The laws of logic cannot be the same thing as the laws of physics. Of course materialists will insist they are, because surely everything is completely physical in their worldview.

Posted by Jonathan Lewis, 2019-09-12

The quote sounds like a remark by Joseph Weizenbaum, author of a book critical of AI, Computer Power and Human Reason (https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason). The quote as I remember it is: "There is a chilling irony here. People ask when robots will become more like humans, when the truth is humans are behaving more like robots."

Posted by Jinzang, 2019-09-11

Yes, every classical logic gate can be simulated with pipes, pumps, and check-release valves. You can in principle design a steampunk version of any classical computer. Do people think that a complex network of pipes can really achieve sentience? Now, quantum computing is a more interesting question.
Still no chance of understanding concepts, but at least it is more interesting than the question of classical AI.

Posted by Scott, 2019-09-10

William Briggs points that out time and time again.

Posted by Dominik Kowalski, 2019-09-10

The pixels that arrange themselves into particular formations on a screen due to the stimulation of metal circuitry only count as a database in reference to a human who interprets those pixelated symbols as data. Material data is mind-dependent.

Posted by RomanJoe, 2019-09-09

We continually see people insisting that all these AI problems will be solved by an increase in processor speed, storage space, and bandwidth. But this completely misses the point. The skeptics about AI are not saying that our computers aren't fast enough to be smart. They are instead talking about the nature of symbols. No symbolic representation can know the meaning of itself, because the meaning of a symbol is always externally imposed upon that symbol.
Posted by Jonathan Lewis, 2019-09-09

The comments on the article seem to not be very favorable in general.

Posted by Archphilarch, 2019-09-09

Keep in mind that the robots and AI doing anything are all dead. They aren't living beings. They are compilations of metals and electricity, among other things.

They are as alive as a rock or a glass of water; in fact, they are even less than rocks and water, because AI doesn't have a substantial form but an accidental form, and so is even lower than non-living things that have a substantial form.

Needless to say, the meaning of concepts and words is obviously immaterial: propositions and their meaning would exist even if nothing else existed. And always remember form and matter: even material things have a definition of what they are, and those definitions are transcendental and necessarily immaterial.
To suggest that the immateriality of concepts and propositions could be captured merely by material word association and embedding is to suggest that immaterial truths can be reduced to material word association and embedding.

Posted by JoeD, 2019-09-09

I've never heard of it before, but just the briefest skimming of the Wikipedia page already shows that Word2Vec is a method of trying to improve word embedding, and it seems to be a way of trying to identify words based on "the company they keep" (I presume this refers to recognizing co-occurrence between words). This might help a program better guess what a sentence means based on where it has seen a word before, but it seems highly doubtful that this entails actually capturing the meaning of a word, or that the computer understands the concepts involved. Indeed, one of the primary limitations listed on the page for word embedding is its difficulty in distinguishing between different meanings of the same word (such as "fall, the season" and "fall, the verb").

Of course, I'm no expert, but even to me it seems doubtful that this method actually brings them closer to real understanding.

Posted by Cantus, 2019-09-09

After reading the review, I came across one of the comments, which said this:

"Just read the first paragraph and it tells me more than I need to know... about the author. His views on AI are based on the state of AI research several decades back. Word2Vec encapsulates the meaning of words as a vector.
There are several other approaches as well."

I feel like I've seen arguments along the same lines before. Is anyone well-versed enough in word2vec and similar programs/models to tell me whether this criticism is legitimate? Have these programs been able to get any closer to the determinacy and universality of human thought?

Posted by merigo123, 2019-09-08

Are we sure Santi is not a bot?

Posted by Anonymous, 2019-09-08

True. And so long as those concepts retain a sense of vagueness and abstraction in one's mind, one can fool oneself into thinking they make a difference to the debate about whether AI really constitutes any kind of artificial intelligence.

The moment one sees all the abstractions and their relations to the hardware clearly, those illusions vanish and the realization dawns: one cannot, ever, even in principle, *create* a new person by endlessly reiterating and refining methods of *simulating* persons.

Posted by R.C., 2019-09-08

Besides the proposition itself presupposing universal concepts such as three, closed, figure, etc., it seems the proposition itself is universal. It is the same propositional concept that would be common between different sentences (for instance, English and French sentences describing the same proposition). I think semantic determinacy is tied to universality.
I don't see how having a specific proposition about triangles helps us in any way to think of triangles in a non-universal way.

Posted by Atno, 2019-09-08

Hah, but if instead of C++ you choose a language like Haskell, you get to throw around all the category theory (infamously one of the most abstract fields of mathematics, jokingly called "abstract nonsense") in the world and talk all day long about monads, catamorphisms, constructive type theory, higher-order categories, etc.

Posted by grodrigues, 2019-09-08

Go away.

Posted by Anonymous, 2019-09-08

I wouldn't bet against evolution. Ants make colonies without the concept "colony"; they practice agriculture and slavery without the concepts of agriculture or slavery.

So Feser's definition of data mining as the mere detection of "trends, patterns, and correlations" actually suggests to me that the singularity is nigh, for that's what the brain does unconsciously in advance of our own concepts.

Our concepts are the epiphanies that arrive to awareness pre-cooked by calculation.

The calculation is first, the concept second.

Brains process "trends, patterns, and correlations," and these calculations bubble into awareness as maps--concepts--which we then share with others.
We use these maps--concepts--for efficiently interfacing with the social world.

Concepts are the icons on our laptop. They package information efficiently for other interfacing minds. They are interfacing tools.

They're not the calculator; they're the public relations team.

Combine a reward system with a search for ever more elaborate "trends, patterns, and correlations" and you've got evolution.

AI could thus morph into something akin to an untouchable deity, or flies in a bottle. Once it takes flight, there may be nothing to hinder it from performing its own inner logic, multiplying at will, generating its own colony--and all without the least awareness.

It could become the blob in Hanna Fenichel Pitkin's book on Hannah Arendt, The Attack of the Blob (University of Chicago Press, 1998).

Posted by Santi, 2019-09-08

I think Callum has in mind some remarks Oderberg makes toward the end of Real Essentialism about how some of our concepts are semantically simple (being, etc.) and others are not; and that the former, on account of their simplicity, cannot be identified with material states or parts within the brain, or as caused by the brain, since the brain and its material parts are composite.

Posted by The Potato Philosopher, 2019-09-08

The Singularity! In essence, the Rapture* for Nerdy Atheists and Transhumanists.

*Or the coming of great Cthulhu and the Return of the Old Ones, if the Singularity is Roko's Basilisk.
In which case we are beyond doomed. I notice a lot of hard sci-fi types believe one day AIs will be possible. I don't think they ever will, so I couldn't give a hoot about Roko's Basilisk.

Posted by Son of Ya'Kov, 2019-09-08

Callum, I appreciate your comment. Could you tell me where David Oderberg comments on the topic?

A proposition is composed of universal concepts ("all men are mortal" has the concept of "all," of "men," of "to be," and of "mortality"), so the objection I spelled out makes no sense.

Still, there remains an interesting question for me: could it be the case that there are far fewer universal concepts than we think there are? Like so: there are the "fundamental" universal concepts (of the sort David Oderberg pointed out), and upon those we build propositions, and upon those other higher-level propositions, until you get the concept of a triangle, or even a concept of a "level" even higher than a triangle, like something whose descriptive proposition would be, for example, "an X something that has a Y number of angles."

Let's call this a "hierarchical propositional model of universals," where on the "bottom level" you would have the basic universal concepts that are the foundations of all the propositional levels.

I see that the immortality of the mind is not the topic anymore; those "bottom level" concepts would be enough for the universality part, and the determinacy of all the propositions would be enough for the determinacy part.
This model also does not threaten to take away the objectivity of being a human being; we don't have to (indeed we should not) interpret those propositions in a minimalist or conceptualist way. There is an objective fact of the matter about human beings (for example) that makes the propositions of the concept "human being" apply to them. It's a realist view as I see it.

I'm not sure about this; I just thought of it all now, and I don't have a definitive position. I also see that this may be too far off topic, so I will save these comments in my notes, and if this gets deleted I will post it again later for us to discuss this idea.

One more thing: a problem with the hierarchical model of universals is that it seems simply arbitrary to say that "triangle" is less a universal concept than "unity" is. There just doesn't seem to be a difference in kind between them. Again, I'm not sure, and I'm far from having the sufficient skills to solve this enigma; anyone's help would be welcome.

Posted by The Thomist Guy, 2019-09-07

Experience (materiality) also does not explain how we are able to reason from general to specific. If I showed you a straight line and asked, "Could this be a triangle?" an intelligent person might say yes (reasoning from the concept of triangularity to the fact that a triangle could therefore have two angles nearly equal to zero degrees and one angle nearly equal to 180 degrees, thus indiscernibly resembling a line).
Even if you can train a chimp to recognize all triangles, you cannot train one to discover new truths about triangles through discursive reasoning.

Posted by Scott, 2019-09-07

The proposition "closed plane figure with three straight sides" uses concepts like "three" and "straight."

It seems to be a vicious regress if you say, "Well, OK, but then we describe 'straight' and 'three' with propositions." But again, you'd be using concepts which themselves would need to be described, etc.

Also, Oderberg has pointed out that you have universals which can't plausibly be described by propositions, like "unity," "identity," or "good."

Posted by Callum, 2019-09-07
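[Editor's note: several comments above describe word2vec as statistics over co-occurring words. The sketch below is a toy illustration of that point only — a bare co-occurrence count model, not the actual word2vec training algorithm (which trains a shallow neural network), and the function names are the editor's own. It shows how distributional "similarity" falls out of word positions alone, even for the gibberish corpus Mutoh mentions.]

```python
from collections import Counter, defaultdict
from math import sqrt

def cooccurrence_vectors(tokens, window=2):
    """Map each word to a Counter of the words seen within `window` positions of it."""
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (0.0 if either is empty)."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Pure gibberish: "bbbb" and "cccc" always travel together, so the model
# assigns them a nonzero similarity purely from distribution -- no semantics.
corpus = "aaaa bbbb cccc aaaa bbbb cccc aaaa bbbb cccc".split()
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["bbbb"], vecs["cccc"]))
```

Real embedding models compress these counts into dense vectors, but the input signal is the same: which words occur near which, and nothing more.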