I suspect the topic of AI is one where the intuitions of a great many materialists can be shown to be at odds with their stated philosophy.
If you are committed to a materialistic worldview, it is hard to come up with a principled reason why current AI like ChatGPT doesn't actually think. And yet the position that existing AI is actually thinking (that is to say, that when we talk to an AI, it is doing the same kind of thing that we do when we talk to each other) is a minority position among those in AI research.
Well, I hope some abstract or version of Professor Feser's paper will appear on this blog some time, to take away our illusions about what he thinks. Or I could ask ChatGPT.
I was going to comment on the article. But I can't get the article without paying to get past the paywall. And without reading the article, I don't know if I want to pay. So: I guess I'll wait for the movie; I am sure the price of a ticket will be less. And I will just live with the fact that the movie is "based on" the article, instead of, you know, actually presenting what's written. It'll be close, right?
WCB
ReplyDelete"If you are committed to a materialistic worldview, it is hard to come up with a principled reason why current AI like chatGPT doesn't actually think."
It all depends on how one defines thinking, doesn't it?
"AI doesn't think like humans, therefore AI does not think". That is about it. A hamster thinks, Just not like humans. AI, same thing. Sure AI is limited today. But a century from now it might be a far different situation.
None of this is a problem for materialists.
WCB
I believe they call this the "science of the gaps" argument. The idea that science will be able to explain away everything immaterial/spiritual eventually is, ironically, very unscientific. It's a completely unwarranted, irrational assumption, which materialists cling to out of necessity, because without it their entire worldview falls apart.
WCB
I am not arguing that science explains all. I am just pointing out that the word "thinking" here is a vague and loaded term. This leads to some unsatisfying word games, leading nowhere. A big problem with philosophical theorizing on such complex issues.
First, we should define thinking, and in doing so avoid straw men and argument by definition, a logical fallacy.
As for intelligence, think of this analogy: learning physics from a well-qualified expert in a good college physics class, versus learning bad physics from an online creationist kook. Both are intelligent in a sense, but not all intelligence is equal. Today's AI, training as it does on the internet, is usually pretty bad.
But one can set up a database of curated expertise and confine AI to use that instead of the internet. The next obvious step is to train AI to understand the difference, and to be able to choose true expertise rather than nonsense.
And we can bet that is where AI will be headed in the future.
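To make the idea concrete, here is a rough toy sketch of restricting answers to a curated corpus. The corpus entries, names, and matching rule are all invented for illustration; real systems use far more sophisticated retrieval:

    import re

    # Hypothetical curated corpus: a handful of vetted statements, no open internet.
    CURATED = {
        "gravity": "Objects near Earth accelerate downward at about 9.8 m/s^2.",
        "evolution": "Populations change over generations via variation and selection.",
    }

    def words(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def answer(question):
        """Answer only from the curated corpus; otherwise refuse."""
        def overlap(key):
            # Score an entry by word overlap between question and curated text.
            return len(words(question) & words(key + " " + CURATED[key]))
        best = max(CURATED, key=overlap)
        return CURATED[best] if overlap(best) > 0 else "No curated answer available."

    print(answer("How fast do objects fall under gravity?"))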
WCB
The question is not how curated or advanced AI can be in order to reach deductions; the question is how to bridge the ultimate gap of induction. AI is incapable of interpreting evidence to surmise a hypothesis, and to insist it will eventually be capable of doing so is to say that AI is capable of doing something new, which it has never done.
WCB, if, in order to ensure that AI is able to understand the difference between expertise and nonsense, we need to train it on a data set that exclusively contains expertise, how did any human ever learn the difference? How many humans are trained exclusively by experts?
@ Anonymous April 3, 2024, at 2:04 PM,
ReplyDelete"I believe they call this the "science of the gaps" argument."
Every gap in human knowledge that has been explained has been explained by science — no gaps have been explained by religion. Thunder? Static electricity, not Thor. The variety of life? Evolution, not God. The sun's heat? Fusion, not Helios. By comparison, religious explanations have had a 100% failure rate.
"The idea that science will be able to explain away everything immaterial/spiritual eventually is, ironically, very unscientific. It's a completely unwarranted, irrational assumption, which materialists cling to out of necessity, because without it their entire worldview falls apart."
I'm not aware of anything 'immaterial/spiritual' which requires any explanation.
It's probable (not certain) that current gaps in our knowledge will one day be explained using the same evidence-based reasoning that has explained all the other gaps so far. This is just inductive reasoning, the thing which science is based upon. Theism, in fact, claims to explain everything, even if the explanation is sometimes "God moves in mysterious ways".
"By comparison, religious explanations have had a 100% failure rate."
Some of us would say that paganism shouldn't be called religion. But whatever position we take on that assertion, the one you made is a red herring, even if it's true. This isn't about scientific explanations versus "religious" ones, but about materialistic explanations and non-materialistic ones. Huge difference.
If you're not aware of immaterial or spiritual realities, either you don't know what either word means (quite likely, I'd wager), or you don't understand why such things as meaning, moral evil, and convention are not material. There are many ways of expressing a given proposition, yet the meaning is common to all of them, and therefore cannot be identified with any of them, even if we concede that all means of expressing said proposition are material. There's a reason eliminativist materialism is a thing.
@ Anonymous April 4, 2024 at 5:50 AM,
ReplyDelete"AI is incapable of interpreting evidence to surmise a hypothesis"
Nature, 17 November 2023:
AI systems capable of generating hypotheses go back more than four decades. In the 1980s, Don Swanson, an information scientist at the University of Chicago, pioneered literature-based discovery — a text-mining exercise that aimed to sift ‘undiscovered public knowledge’ from the scientific literature. If some research papers say that A causes B, and others that B causes C, for example, one might hypothesize that A causes C. Swanson created software called Arrowsmith that searched collections of published papers for such indirect connections and proposed, for instance, that fish oil, which reduces blood viscosity, might treat Raynaud’s syndrome, in which blood vessels narrow in response to cold. Subsequent experiments proved the hypothesis correct.
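The chaining step the article describes is simple enough to sketch. Here is a toy version of Swanson-style literature-based discovery; the claim list is invented for illustration, and this is not the actual Arrowsmith code:

    # Toy literature-based discovery in the Swanson style:
    # chain 'A causes B' and 'B causes C' into the hypothesis 'A causes C'.
    claims = [
        ("fish oil", "reduced blood viscosity"),
        ("reduced blood viscosity", "improved circulation in Raynaud's syndrome"),
    ]

    hypotheses = [
        (a, c)
        for (a, b1) in claims
        for (b2, c) in claims
        if b1 == b2 and a != c
    ]

    for a, c in hypotheses:
        print(f"Hypothesis: {a} -> {c}")
    # Hypothesis: fish oil -> improved circulation in Raynaud's syndrome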
There is no interpretation going on here; it's merely implementing an algorithm.
@ Anonymous April 9, 2024 at 10:22 AM,
Your objection presupposes that 'interpretation' (the exact meaning of which is unclear to me) can't be done algorithmically.
@Anon April 14 4:03
Consider it this way. Suppose Swanson instead tasked his 15-year-old kid to read through hundreds of papers and said, "If one paper says 'a causes b' and another says 'b causes c', then we can hypothesize that a causes c. Did you find any examples of this?"
Would you say that the 15-year-old kid actually made the hypothesis? I would not; the actual hypothesis here was made by Swanson.
Intuitively, I would agree that AI is an illusion. However, I am curious how a Thomist would answer the following argument (maybe it's discussed in the article, but unfortunately that is behind the paywall):
1. Agere sequitur esse. According to this scholastic principle, what a thing does reflects what it is.
2. But what recent AI systems do is produce seemingly rational content: answers to questions, stories, language translations, etc.
So, from (1) and (2) it would seem to follow that recent AI systems are rational.
So, how do we identify the fallacy in the above argument?
"2. But what recent AI systems do is - they produce seemingly rational contents - e.g. answers to questions, stories, language translations, etc."
The key here seems to be "seemingly rational content". AI can surely produce content that looks rational to us, but then we are taken to John Searle's Chinese Room: what an algorithm does and what a mind does when they produce what we take as messages do not seem to be the same thing.
From (2) one can argue that AI is pseudo-rational; I don't think we can get more from it.
Igen, look up the "AI alignment problem." We can't directly train an AI to actually produce rational content; we train it to maximize a particular reward function, using an elaborate system where, over many iterations, the output it produces gets closer to the answer a rational actor would give.
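A heavily simplified caricature of that kind of reward-driven loop: start from noise, mutate, and keep whatever scores at least as well on the reward function. The target string and reward here are toy stand-ins, nothing like a real RLHF pipeline:

    import random

    TARGET = "the answer a rational actor would give"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def reward(output):
        # Toy reward: number of character positions matching the target answer.
        return sum(a == b for a, b in zip(output, TARGET))

    # Start from random noise and hill-climb on the reward signal.
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    for step in range(20000):
        i = random.randrange(len(TARGET))
        candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
        if reward(candidate) >= reward(current):
            current = candidate

    print(current)  # converges toward TARGET without 'understanding' it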
@ Talmid April 12, 2024 at 6:45 PM,
Delete"Searle holds that the brain is, in fact, a machine, but that the brain gives rise to consciousness and understanding using specific machinery. If neuroscience is able to isolate the mechanical process that gives rise to consciousness, then Searle grants that it may be possible to create machines that have consciousness and understanding." [Wikipedia]
In any case, Searle's Chinese Room thought experiment can be easily answered by saying that the book in the experiment is conscious. Searle is taking the place of a part of the body (like the circulatory system) which supports the operation of the brain, not the brain itself.
@Anon
Searle's actual view is not very relevant here; I was only using the man to illustrate the difficulty I have with the argument. Perhaps he follows the argument's conclusion, or perhaps he does not.
In any case, you have quite a puzzling view. As I understand it, to be conscious is to have a first-person view of the world, which does not seem to be the case with the Chinese Room. Do you mean something different when you say "the book in the experiment is conscious"?
@ Talmid April 14, 2024 at 7:31 PM,
The point is, the codebook in the Chinese Room is doing the actual work of understanding; Searle is just doing mechanical output of information, which doesn't prove in any way that the codebook can't understand or be conscious. In fact, I'm not sure what the experiment is even supposed to prove.
Searle's point seems to be that the codebook allows one with no understanding of what the words mean to produce messages that speakers of the language find meaningful, just by manipulating syntax. If this is all that AI is doing, then making it more complex would not produce a real mind.
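A crude sketch of what "just manipulating syntax" amounts to: a program that pairs symbol strings with symbol strings. The rulebook entries are invented for the example, and nothing in the program carries their meaning:

    # Toy 'Chinese Room': the program pairs symbols with symbols.
    # Nothing in it knows what the symbols mean.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
        "你会说中文吗": "会，当然",      # "Do you speak Chinese?" -> "Yes, of course"
    }

    def room(incoming_slip):
        # Match the squiggles; return the squiggles the rulebook pairs with them.
        return RULEBOOK.get(incoming_slip, "请再说一遍")  # "Please say that again"

    print(room("你好吗"))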
@ Talmid April 17, 2024 at 7:34 PM
It depends on what you mean by a "real mind".
The Chinese Room as a whole is supposed to be able to pass the Turing test (which AIs can already do), and it can only do this if the codebook is processing information sufficiently well to be able to fool a Chinese speaker. The insertion of Searle into the room to handle output in a dumb way, by just following instructions, is irrelevant. If the Chinese Room tells you that it is conscious, what is your next move? You can hardly say "you aren't really conscious because your codebook is just a book", as the brain is just a bunch of neurons. So what?
It appears that all the brain is doing is processing information using neurons which individually can't 'understand' the information they process, yet understanding emerges at a higher, conscious level (sometimes!). Searle just tries to hide this fact by presenting the brain as a book, which suggests that it doesn't understand anything, when, in order for the experiment to actually work, that is exactly what it has to do.
But how much information is the Chinese Room really processing? It certainly looks to the Chinese speaker that it is producing meaningful content, but, if we remove the Chinese speaker, the room is just moving symbols around.
Consider that even if Searle could memorize all the rules of the book and "communicate" with a native Chinese speaker, we would not say that he knows Chinese. Sure, to an outsider it could look like he does, but we would not be "fooled", because we know what he is doing.
Besides manipulating syntax, we must be capable of attributing meanings to the symbols and using those meanings to formulate our phrases in order to communicate meaningfully. Searle's point seems to be that AI is not doing this, but just doing something that looks like it to us.
Why do we still care about the Turing Test? Would we have cared about Marie Curie's opinion on how to test nuclear reactors, back when they were still only theoretical? Science is supposed to progress.
Are you really saying that those who do the long multiplication algorithm are not really doing multiplication‽
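For the record, long multiplication really is mechanical. Here is a sketch that works on digit strings using only carry rules and column shifts; the example inputs are arbitrary:

    # Grade-school long multiplication on digit strings: carry rules and
    # column shifts, applied mechanically, with no grasp of 'quantity'.
    def long_multiply(x, y):
        digits_x = [int(d) for d in reversed(x)]
        digits_y = [int(d) for d in reversed(y)]
        result = [0] * (len(digits_x) + len(digits_y))
        for i, dx in enumerate(digits_x):
            carry = 0
            for j, dy in enumerate(digits_y):
                total = result[i + j] + dx * dy + carry
                result[i + j] = total % 10   # write the column digit
                carry = total // 10          # carry to the next column
            result[i + len(digits_y)] += carry
        while len(result) > 1 and result[-1] == 0:
            result.pop()                     # strip leading zeros
        return "".join(str(d) for d in reversed(result))

    print(long_multiply("347", "29"))  # '10063'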
This is such a hilarious topic. First our standards in education, or say in marketing copy, got really low. Then we were surprised when LLMs could do it, pass the tests, etc. It is precisely because our standards got low: people were already doing the same thing LLMs do, producing a statistically predictive word salad.
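For anyone curious how little machinery a "statistically predictive word salad" needs, here is a bigram toy. The training snippet is invented, and real LLMs are vastly more sophisticated, but the flavor is similar:

    import random
    from collections import defaultdict

    # Tiny 'corpus' of the kind of prose being parodied (invented here).
    corpus = ("our solution leverages synergy to empower stakeholders and "
              "our synergy empowers solutions to leverage stakeholders").split()

    # Count which word follows which.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # Generate word salad: always sample a statistically attested successor.
    word, salad = "our", ["our"]
    for _ in range(10):
        word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
        salad.append(word)
    print(" ".join(salad))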