My review of economist Gary Smith’s excellent recent book The AI Delusion appears today at City Journal.
Friday, September 6, 2019
I have recently been looking for the source of a quotation that, I fear, exactly fits your review – so exactly that it might well be swapped in for the last line. Quoting from memory: "The danger is not that machines will become as intelligent as humans, but that we will agree to meet them halfway."
The quote sounds like a remark by Joseph Weizenbaum, author of a book critical of AI, Computer Power and Human Reason. The quote as I remember it is: "There is a chilling irony here. People ask when robots will become more like humans, when the truth is humans are behaving more like robots."
This is the primary source of most of this speculation: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr=
As someone whose undergraduate concentration was in robotics, any talk of artificial intelligence seems laughable. If half of the problem is that the general population is uneducated in philosophy of mind, the other half is that it is uneducated in basic engineering principles. If you have any notion that computers may one day achieve sentience, go buy an Arduino microcontroller for $30, install a C++ toolchain on your laptop, and start playing around with it. You will realize very shortly how utterly hopeless computers are without humans.
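To make the suggestion concrete, here is a minimal sketch of the kind of first program the commenter has in mind, assuming the stock Arduino toolchain (setup, loop, pinMode, digitalWrite, and delay are the standard Arduino C++ API; LED_BUILTIN is the board's on-board LED pin):

```cpp
// Minimal Arduino "blink" sketch. The board does exactly what these
// lines say -- drive a pin high, wait, drive it low, wait -- and
// nothing else. Every behavior must be spelled out by a human in advance.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);     // configure the on-board LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);  // LED on
  delay(500);                       // wait 500 ms
  digitalWrite(LED_BUILTIN, LOW);   // LED off
  delay(500);                       // wait 500 ms
}
```

Nothing in the device selects its own goals; it shuffles voltages according to instructions whose point exists only in the programmer's mind.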
Hah, but if instead of C++ you choose a language like Haskell, you get to throw around all the category theory (infamously one of the most abstract fields of mathematics -- jokingly called "abstract nonsense") in the world and talk all day long about monads, catamorphisms, constructive type theory, higher-order categories, etc.
True. And so long as those concepts retain a sense of vagueness and abstraction in one's mind, one can fool oneself into thinking they make a difference to the debate about whether "AI" really constitutes any kind of real intelligence.
The moment one sees all the abstractions and their relations to the hardware clearly, those illusions vanish and the realization dawns: one cannot, ever, even in principle, create a new person by endlessly reiterating and refining methods of simulating persons.
I'm not sure whether this is going to be off topic. If it is, I'm sorry. But given that we are talking about "artificial intelligence," it doesn't seem very out of place to talk about "real intelligence." I have been thinking about Thomistic arguments for the immateriality of the mind, and the following question came to my mind:
There are two aspects of our thought that cannot be accounted for by reference only to the body: namely, their determinacy and their universality. My question concerns the universality aspect of thought. For us to have knowledge of the universal triangularity we need to possess it in some way. It cannot be material, given that if it were material there would be some individualizing feature to it. A true concept has to be truly universal; it has to be immaterial. But suppose a materialist says that we don't have the concept "triangularity" in us in the first place. What we have, he may say, is the proposition "a closed figure with three sides." I don't know how to respond to this objection. For sure the determinacy of such a proposition would still be a problem for the materialist, but the universality would not be one anymore.
A possible answer may be that a proposition is merely a description of a concept, so that we would need to possess "the concept" itself which the proposition describes. But it seems that the opposite could be said: it is not so much that our proposition "a closed figure with three sides" describes the concept "triangularity," but that "triangularity" is just what we call some thing X of which the proposition "a closed figure with three sides" is true, where the "concept" just is the proposition, and a thing's possessing triangularity is not a matter of something instantiating the universal "triangularity," at least if what we mean by that is anything different from something of which the proposition "it is a closed figure with three sides" is true.
This is very inchoate, and I think there probably is some problem with it; I'm just not sure yet what that is.
The idea seems to be this: concepts are propositions, and universals too. So possessing a universal is not a matter of instantiating some form in some way (materially or immaterially) but rather of grasping a proposition, so that the real difficulty is just the determinacy of thought and not universality anymore.
Looking forward to your upcoming book on the mind, Feser! If I'm not mistaken, David Bentley Hart is also going to release a book on that topic. My expectation is that as The Experience of God is to Five Proofs of the Existence of God, so his book on mind will be to yours (please don't call it Five Proofs of the Immateriality of the Mind; that would be awful).
The proposition "closed plane figure with three straight sides" uses concepts like "three" and "straight".
It seems to be a vicious regress if you say, "Well, OK, but then we describe 'straight' and 'three' with propositions." But again, you'd be using concepts which themselves would need to be described, etc.
Also, Oderberg has pointed out that there are universals which can't plausibly be described by propositions, like "unity," "identity," or "good."
Experience (materiality) also does not explain how we are able to reason from general to specific. If I showed you a straight line and asked, "Could this be a triangle?" an intelligent person might say yes (reasoning from the concept of triangularity to the fact that a triangle could therefore have two angles nearly equal to zero degrees and one angle nearly equal to 180 degrees, thus indiscernibly resembling a line). Even if you can train a chimp to recognize all triangles, you cannot train one to discover new truths about triangles through discursive reasoning.
Callum, I appreciate your comment. Could you tell me where David Oderberg comments on the topic?
A proposition is composed of universal concepts ("all men are mortal" involves the concepts "all," "men," "to be," and "mortality"), so the objection I spelled out makes no sense.
Still, there remains an interesting question for me: could it be the case that there are far fewer universal concepts than we think there are? Like this: there are the "fundamental" universal concepts (of the sort David Oderberg pointed out), and upon those we build propositions, and upon those other higher-level propositions, until you get the concept of a triangle, or even a concept of a "level" even higher than a triangle, like something whose descriptive proposition would be, for example, "an X that has Y number of angles."
Let's call this a "hierarchical propositional model of universals," where on the "bottom level" you would have the basic universal concepts that are the foundations of all the propositional levels.
As far as the immateriality of the mind is concerned, those "bottom level" concepts would be enough for the universality part, and the determinacy of all the propositions would be enough for the determinacy part. This model also does not threaten to take away the objectivity of being a human being: we don't have to (indeed we should not) interpret those propositions in a minimalist or conceptualist way; there is an objective fact of the matter about human beings (for example) that makes the propositions of the concept "human being" apply to them. It's a realist view as I see it.
I'm not sure about this; I just thought about all of it, and I don't have a definitive position. I also see that this may be too far off topic, so I will save these comments in my notes, and if it gets deleted I will post it again later for us to discuss this idea.
One more thing: a problem with the hierarchical model of universals is that it seems simply arbitrary to say that "triangle" is less a universal concept than "unity" is. There just doesn't seem to be a difference in kind between them. Again, I'm not sure, and I'm far from having the skills sufficient to solve this enigma; anyone's help would be welcome.
I think Callum has in mind some remarks Oderberg makes toward the end of Real Essentialism about how some of our concepts are semantically simple (being, etc.) and others are not, and how the former, on account of their simplicity, cannot be identified with material states or parts within the brain, or as caused by the brain, since the brain and its material parts are composite.
Besides the proposition itself presupposing universal concepts such as three, closed, figure, etc., it seems the proposition itself is universal. It is the same propositional content which would be common between different sentences (for instance, English and French sentences expressing the same proposition). I think semantic determinacy is tied to universality. I don't see how having a specific proposition about triangles helps us in any way to think of triangles in a non-universal way.
While I am sympathetic to the view that computers lack understanding, it is also true that human beings have access to a vast database from their experience, knowledge, and senses which allows them to make good judgements.
Computers don't have experiences, knowledge, or senses, and fundamentally lack a mind that could experience, know, or sense anything. That they can easily be mistaken to do so is entirely a triumph of electrical engineering and computer science.
The pixels that arrange themselves into particular formations on a screen due to the stimulation of metal circuitry only count as a database in reference to a human who interprets those pixelated symbols as data. Material data is mind-dependent.
DeleteThe Singularity! In essence the Rapture* for Nerdy Atheists and Transhumanists.
*Or the coming of great Cthulhu and the Return of the Old Ones, if the Singularity is Roko's Basilisk. In which case we are beyond doomed. I notice a lot of hard sci-fi types believe one day AIs will be possible. I don't think they ever will, so I couldn't give a hoot about Roko's Basilisk.
I wouldn't bet against evolution. Ants make colonies without the concept "colony"; they practice agriculture and slavery without the concepts of agriculture or slavery.
So Feser's definition of data mining as the mere detection of "trends, patterns, and correlations" actually suggests to me that the singularity is nigh, for that's what the brain does unconsciously in advance of our own concepts.
Our concepts are the epiphanies that arrive to awareness pre-cooked by calculation.
The calculation is first, the concept second.
Brains process "trends, patterns, and correlations" and these calculations bubble into awareness as maps--concepts--which we then share with others.
We use these maps--concepts--for efficiently interfacing with the social world.
Concepts are the icons on our laptop. They package information efficiently to other interfacing minds. They are interfacing tools.
They're not the calculator; they're the public relations team.
Combine a reward system with a search for ever more elaborate "trends, patterns, and correlations" and you've got evolution.
AI could thus morph into something akin to an untouchable deity or flies in a bottle. Once it takes flight, there may be nothing to hinder it from performing its own inner logic, multiplying at will, generating its own colony--and all without the least awareness.
It could become the blob in Hanna Fenichel Pitkin's book on Hannah Arendt, The Attack of the Blob (University of Chicago Press, 1998).
Go away.
Are we sure Santi is not a bot?
After reading the review, I came across one of the comments, which said this:
"Just read the first paragraph and it tells me more than I need to know... about the author. His views on AI are based on the state of AI research several decades back. Word2Vec encapsulates the meaning of words as a vector. There are several other approaches as well."
I feel like I've seen arguments along the same lines before. Is anyone well-versed enough in word2vec and similar programs/models to tell me whether this criticism is legitimate? Have these programs been able to get any closer to the determinacy and universality of human thought?
I've never heard of it before, but even the briefest skim of the Wikipedia page shows that Word2Vec is a method of word embedding, one that tries to identify words based on "the company they keep" (i.e., by registering co-occurrence between words). This might help a program better guess what a sentence means based on where it has seen a word before, but it seems highly doubtful that this amounts to actually capturing the meaning of a word, or that the computer understands the concepts involved. Indeed, one of the primary limitations listed on the page for word embedding is its difficulty in distinguishing between different meanings of the same word (such as "fall," the season, and "fall," the verb).
Of course, I'm no expert, but even to me it seems doubtful that this method actually brings them closer to real understanding.
Keep in mind that the robots and AI doing anything are all dead. They aren't living beings. They are compilations of metals and electricity, and other things.
They are as alive as a rock or a glass of water -- in fact, they are even less than rocks and water, because an AI doesn't have a substantial form but an accidental form, and so is even lower than non-living things that have a substantial form.
Needless to say, the meaning of concepts and words is obviously immaterial -- propositions and their meanings would exist even if nothing else existed. And always remember form and matter: even material things have a definition of what they are, and those definitions are transcendental and necessarily immaterial. To suggest the immateriality of concepts and propositions could be captured merely by material word association and embedding is to suggest immaterial truths can be reduced to material word association and embedding.
The comments on the article seem to not be very favorable in general.
No, word2vec doesn't encapsulate the meaning of a word. It analyzes the correlations of co-occurring words and allows one to pinpoint which words occur more frequently with each other. By looking at the resulting vector space, a human interpreter can notice that the word "good," for example, is close to words like "morning" and "afternoon" (because "good morning" and "good afternoon" frequently appear in texts), but this tells absolutely nothing about what "good" means. The algorithm will produce results even if you input pure gibberish into it. If you give it a text like "aaaa bbbb cccc," for example, word2vec might tell you that "bbbb" commonly occurs near "cccc."
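To see how little of meaning enters into this, here is a toy sketch in C++ (my own illustration, not the actual word2vec algorithm, which trains a shallow neural network on the same kind of co-occurrence statistics): it merely counts which tokens appear next to which, and it runs exactly as happily on gibberish as on English.

```cpp
// Toy sketch (not real word2vec): count which tokens occur next to
// which. The procedure is pure bookkeeping -- it runs identically on
// English and on gibberish, because the meaning of a token never
// enters into it at any point.
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::string text = "good morning good afternoon aaaa bbbb cccc bbbb cccc";
    std::vector<std::string> tokens;
    std::istringstream in(text);
    for (std::string w; in >> w; ) tokens.push_back(w);

    // Count adjacent pairs of tokens (a window of one word).
    std::map<std::pair<std::string, std::string>, int> counts;
    for (std::size_t i = 0; i + 1 < tokens.size(); ++i)
        ++counts[{tokens[i], tokens[i + 1]}];

    // "bbbb ~ cccc : 2" comes out whether or not "bbbb" means anything.
    for (const auto& [bigram, n] : counts)
        std::cout << bigram.first << " ~ " << bigram.second << " : " << n << "\n";
}
```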
It's algorithms like these that are used in smartphones to predict which word you will use next. Your cellphone's keyboard isn't conscious (though the NSA agent spying on what you type probably is), and neither should it be, because it's all a game of statistics and probability with no regard for semantics.
Pretty much ALL artificial intelligence nowadays is no more than algorithms that are trained to be very good at taking bets. But we don't take bets when thinking about determinate concepts. The meaning of "two" is not a matter of probability, it's a matter of certainty.
Word-processing algorithms never take meaning into account, no matter how state-of-the-art they may be, and they never will.
We continually see people insisting that all these AI problems will be solved by an increase in processor speed, storage space and bandwidth. But this completely misses the point. The skeptics about AI are not saying that our computers aren't fast enough to be smart. They are instead talking about the nature of symbols. No symbolic representation can know the meaning of itself because the meaning of the symbol is always externally imposed upon that symbol.
William Briggs points that out time and time again.
Yes, every classical logic gate can be simulated with pipes, pumps, and check-release valves. You can in principle design a steampunk version of any classical computer. Do people think that a complex network of pipes can really achieve sentience? Now, quantum computing is a more interesting question. Still no chance of understanding concepts, but at least it is more interesting than the question of classical AI.
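The point generalizes: everything a classical computer does decomposes into a handful of primitive gates, each of which could be a valve as easily as a transistor. A small illustrative sketch in C++ (my own example, not from the thread) builds the standard Boolean operations, and a half-adder, out of nothing but NAND:

```cpp
// Every Boolean operation below is composed from a single NAND
// primitive -- the same primitive a network of pipes and check
// valves could realize. Nothing in the composition "understands"
// logic or arithmetic.
#include <iostream>

bool nand_gate(bool a, bool b) { return !(a && b); }            // the one primitive
bool not_gate(bool a)          { return nand_gate(a, a); }      // NOT from NAND
bool and_gate(bool a, bool b)  { return not_gate(nand_gate(a, b)); }
bool or_gate(bool a, bool b)   { return nand_gate(not_gate(a), not_gate(b)); }
bool xor_gate(bool a, bool b)  { return and_gate(or_gate(a, b), nand_gate(a, b)); }

int main() {
    // A half-adder, the building block of binary arithmetic,
    // reduced entirely to NANDs.
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::cout << a << " + " << b << " = carry " << and_gate(a, b)
                      << ", sum " << xor_gate(a, b) << "\n";
}
```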
It should also be emphasized that putting a set of physical processes into action cannot be what creates logical reasoning. If a bunch of dominoes are knocking each other down in a sequence, this just isn't the same thing as reasoning from premises to a conclusion. Reasoning cannot be defined as "A causes B, which then results in C," because this makes no distinction between reaching a valid conclusion and reaching an invalid one. People often reach incorrect conclusions. The laws of logic cannot be the same thing as the laws of physics. Of course materialists will insist they are, because surely everything is completely physical in their worldview.
ReplyDeleteI somewhere that by analytic means it was shown that the epiphenomenon approach to the mind body problem entails and self contradiction in one of the intermediate steps.
ReplyDelete[That is to say that Mind can not be a epiphenomenon of Body.]
Judea Pearl, who won the Turing Award for his work on Bayesian networks and causality, agrees that AI is basically a process of curve-fitting, but he believes that a machine can be truly intelligent if it has been taught cause and effect.
https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/
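For readers unfamiliar with the term, "curve-fitting" here just means summarizing correlations in data. A minimal sketch in C++ (my own toy example, not Pearl's): ordinary least squares fitting a line y = a*x + b. Modern machine learning does this at vastly greater scale, but, on Pearl's view, the machinery still only records correlations; it encodes nothing about what causes what.

```cpp
// "Curve-fitting" in miniature: closed-form ordinary least squares
// for y = a*x + b. The fit summarizes how x and y co-vary in the
// sample; whether x causes y is a question the math never touches.
#include <iostream>
#include <vector>

int main() {
    std::vector<double> x = {1, 2, 3, 4, 5};
    std::vector<double> y = {2.1, 3.9, 6.2, 8.0, 9.8};  // roughly y = 2x

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = x.size();
    for (std::size_t i = 0; i < x.size(); ++i) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    // Standard least-squares solution for slope and intercept.
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;
    std::cout << "fit: y = " << a << "*x + " << b << "\n";  // ~ y = 1.95*x + 0.15
}
```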
I know that I don’t know anything. This might be a straw man. Can we make something greater than ourselves?