"One of the best contemporary writers on philosophy" National Review
"A terrific writer" Damian Thompson, Daily Telegraph
"Feser... has the rare and enviable gift of making philosophical argument compulsively readable" Sir Anthony Kenny, Times Literary Supplement
Selected for the First Things list of the 50 Best Blogs of 2010 (November 19, 2010)
Friday, August 17, 2012
Rediscovering Human Beings
My article "Rediscovering Human Beings" will appear in two parts over at The BioLogos Forum. Today you can read Part I. Part II will be posted tomorrow.
Nice article, Dr. Feser. One line that stuck out to me, though, is the one claiming the math is a body of truths independent of our scientific discoveries. The funny thing is that I've actually met a kook who apparently believed math isn't valid unless we've demonstrated it in the real world through experiments. He was an avid supporter of scientism, if I remember correctly....
But enough of him. I have a question for you, if that's okay. I'm taking a college visit to BGSU and I've read in several places (including right above your BioLogos article) that you were a visiting scholar there. Do you think it's worth going there? I'm seriously considering it, since it's not too far away and I'm certain they would accept me, although I could definitely aim higher.
One question I have from reading a lot about animal experiments over the last couple of years is this...
A chimp can be trained to recognize a shape or color, for example, and then press the appropriate symbol on a screen to denote what he is seeing. That recognition is often interpreted by researchers as some sort of understanding or even abstraction. In some cases chimps were trained to learn certain symbols and then asked to apply them in new contexts, and in some cases they got them right.
What exactly makes our reasoning different from theirs? I'm often confronted with this kind of argument in discussions with naturalists, and although I think there is a difference I have a hard time explaining what it is.
First off, great article. However, it reminded me of a rather mundane question that has been digging at me ever since I read "Aquinas" and "The Last Superstition".
Why did you choose to use the example of a triangle drawn on a seat on a moving school bus? For some inexplicable reason, that repeated example has annoyed me.
The books, by the way, were amazing, and I bought two extra copies of "Aquinas" for two of my friends, as they get ready to leave for college again - they go to small, traditional Great Books schools, and I figured that the book would come in handy for references and explication. One of them finished it just yesterday, and she told me it was absolutely magnificent. I heartily concur.
Oh, and I would also like to thank you for recommending David Oderberg's work; I'm currently working my way through a massive pile of his articles, as well as his most excellent "Real Essentialism." This is the most fulfilling area of philosophy that I've ever experienced thus far, and I believe I have you to thank for actually getting me to seriously consider (and now adopt) hylemorphism.
What's wrong with the article, TouchStone? I'm reading it now and I don't see anything really wrong with it.
Because *extremely longwinded response that ultimately amounts to 'Touchstone doesn't like it', 'Touchstone gets worked up about all things Christian' and 'Touchstone has an idiosyncratic, wildly flawed philosophy and metaphysics, but so long as he doesn't admit it and runs whenever the flaws in his reasoning are pointed out, he can maybe pretend otherwise'*.
Aloysios - I just took it as an example of an imperfect triangle. A triangle scrawled on the seat of a moving vehicle by a child is likely to be pretty imperfect.
I wonder if anyone could answer a question for me. Sorry to be stupid, but I can't seem to get my head around the final cause. When we say a seed is directed toward becoming a tree and nothing else as its final cause, why can't this directedness be explained by the chemical makeup of the seed? Or take a struck match causing fire. The chemistry of the match and the matchbox is the reason fire is generated and not a bouquet of flowers, no? Where am I going wrong here?
When we say a seed is directed toward becoming a tree and nothing else as its final cause, why can't this directedness be explained by the chemical makeup of the seed?
That would make no sense, because the two are not substitute explanations but complementary. The final cause of such-and-such biochemical reaction is to bring about an oak tree and not a pink hippopotamus for example.
You're going wrong in thinking that matter can in any way coincide with the final cause. The final cause, or the end toward which something is directed, is the cause of matter; for the final cause is that for the sake of which matter is. Therefore the final cause is prior to all matter-form composites.
Remember, that which is directed cannot be the director. If the seed is directed toward the oak, then the seed cannot be the director toward the oak. And the chemistry of the seed is in an even more passive position; for the chemicals are directed toward being the seed, and the seed is directed toward being the oak; but the director of both seed and chemicals cannot be either of them.
@Nick What's wrong with the article, TouchStone? I'm reading it now and I don't see anything really wrong with it.
Well, I'm usually in a place to leave more than a two-word summary, but I just saw this thread very late last night, and, having run into this article more than once previously, once I confirmed it was the same article, I was done with the "comedy" comment before closing up for the night.
And, if you look through my comments here and elsewhere, "comedy", or dismissal by characterization as so bad as to be funny or comical, is a card I play but rarely. But this article is really exceptionally bad. It makes me cringe for the author in the same way only the clumsiest of young earth creationist diatribes do. Dr. Feser's thinking isn't much better in his Biologos piece (hope to find time to put some substance behind that later), but it's "seriously wrong", not "comically wrong", in making a complete hash of the science and knowledge we have about animal cognition. He's focused on conceptual processing of abstractions like "triangle", which steers him away from the ditches that Dr. George keeps falling into.
Here's an example from the page where it was left in my PDF reader last night (from page 12): For instance, tying one’s shoes keeps them more securely on one’s feet. How do people learn to tie shoes? Certainly not by studying knot theory, which falls in the branch of mathematics called topology. Most people probably had someone show them how to do it, and maybe this teacher even held their hands and guided them through it. And then most people engaged in trial and error to repeat the appropriate motions. Eventually they became fully familiar with the pattern and acquired the needed hand-eye skill to execute the steps consistently. Seriously??? Here she's trying to avoid the admission that animals think, and think via abstraction and meta-representation, in efforts to preserve the uniqueness in *kind* rather than degree of human cognitive faculties.
But on this bit, "show them how to do it" is a matter of training motor reflexes by the guiding hands of the instructor, no thinking needed. Really??? How does she suppose the student, whether it be man or chimp, "engaged in trial and error", without thinking? It's a ridiculous error. The process of "becoming fully familiar with the pattern" *is* the cognitive work of learning, of distinguishing "success" from "error", "over" from "under", "around" from "through", of distilling a sequence of steps as a "recipe" for the task.
I think the quickest, simplest answer to the Anon with the causality question would be this: because to relocate final causes to the chemicals is to commit the homunculus fallacy. The chemicals themselves must be "directed toward" some range of results--otherwise, they could do anything. However, if final causes don't exist in the chemicals, then we have to posit them at a lower level, and so on forever. So final causes have to exist somewhere. And, because reductionism is incoherent (a separate argument), we must endorse holism with regard to substances. The final causes, then, emerge from holistic substances--from the formal cause.
She then doubles down on her confusion thusly: One might object that this only explains how people learn to solve problems who have been taught. However, the first person to come up with the idea of the bow, learned how to tie it either through trial and error using his senses, or by using his imagination, or through a combination of the two. A little reflection on everyday experience readily turns up other examples of problems that one solves, not by thinking, but by using one’s senses. (One learns how to ride a bicycle by feeling how to pedal and balance, not by studying the principle of the gyroscope.) This is a nice example of concise self-refutation. It "shows how people learn to solve problems". If one is learning, making distinctions, and processing trial and error, one is *thinking*. Senses are *input*, they are not the processing. By thinking about the effects of various actions when trying to learn to ride a bicycle, one *is* studying the principle of the gyroscope. Perception, or "knowing via sense" -- the awareness of one's percepts -- cannot possibly account for the concepts that form a model that enables us to balance and adjust our movements so as to make our way safely down the road on a bike. Chimps learn to extract termites from a jar with tree branches, stripped of leaves, as a "tool upgrade" from merely using their (shorter) fingers, and Dr. George supposes this is just using their senses? Well, it *is* using one's senses, the awareness of one's percepts -- they are used to think about problems and seek solutions. We don't even need to address counterfactual hypotheses on the part of the chimp, or conjecture about hypothetical outcomes yet. The animal, human or otherwise, must conceptually *integrate* those percepts just to "learn by example". How does a kid know he goofed up the latest attempt to tie his shoes, as he's being repeatedly shown how, even with "guiding hands"?
He has to think critically about the sense data he has streaming in. That pattern is not what I'm aiming for, and matches a "fail", so better try again and attempt to make different, better moves that will yield a better/acceptable match for the pattern I'm seeking, the goal condition I'm aiming for.
Just that kind of discrimination puts the operator far beyond the capabilities of our perceptual processing.
What really makes this comically wrong, and not just badly denialist in preserving a Thomist narrative, is that it ostensibly is at pains to address the science that is available. And the material she covers as examples in her favor is, one after the other, an "own goal" for the other side of the argument. For example, just a couple pages up she points out that Japanese macaques learned to wash their potatoes, a conceptual abstraction *and* an exercise in applied imagination, the very kind of "art" she wants to reserve for the "cook" who can do new things without demanding a recipe.
I apologize if I gave you the wrong impression - I am fully aware that the example itself referred to an imperfectly drawn triangle, a less exemplary instantiation of the concept. What I was asking was why Feser chose that particular example. This isn't an intellectual thing at all, it's just a silly question that's been rattling 'round in my head ever since I saw him use it in his books. It really doesn't matter, to be honest - it was just a goofy thing I was wondering about.
@rank sophist, I think the quickest, simplest answer to the Anon with the causality question would be this: because to relocate final causes to the chemicals is to commit the homunculus fallacy. The chemicals themselves must be "directed toward" some range of results--otherwise, they could do anything. However, if final causes don't exist in the chemicals, then we have to posit them at a lower level, and so on forever. So final causes have to exist somewhere. And, because reductionism is incoherent (a separate argument), we must endorse holism with regard to substances. The final causes, then, emerge from holistic substances--from the formal cause. That claim keeps getting made, and seemingly taken for granted, here ("reductionism is incoherent"). Without litigating that in this thread, can you point to somewhere this is argued to your satisfaction?
If this is not just a shibboleth here, how would a new guy in a combox ramp up on "reductionism is incoherent"? It must be pretty strong, because you aren't even offering a positive commendation for holism/essentialism, here, but rather declaring it the "winner by default" due to the perceived inadequacy of reductionism. It reads a lot like an Intelligent Design maneuver I see regularly -- since abiogenesis has no known natural recipe, therefore God.
Anyway, not looking to hash that out here, but just interested in a referral to the "already settled case" on reduction I've seen you refer to repeatedly now.
-TS
(Too bad the combox is too character limited to paste Hofstadter's "Ant Fugue" in here, or in the appropriate thread!)
That claim keeps getting made, and seemingly taken for granted, here ("reductionism is incoherent"). Without litigating that in this thread, can you point to somewhere this is argued to your satisfaction?
If this is not just a shibboleth here, how would a new guy in a combox ramp up on "reductionism is incoherent"? It must be pretty strong, because you aren't even offering a positive commendation for holism/essentialism, here, but rather declaring it the "winner by default" due to the perceived inadequacy of reductionism.
It's difficult to summarize the arguments in a combox, and they've been presented in great detail by contemporary essentialists like David Oderberg. In general, we kind of take it for granted that the case is closed.
Briefly, the very idea that everything is constituted by particles in certain arrangements--for example, "dog-wise" arrangement--is incapable of being consistent. It must necessarily presuppose macroscopic phenomena to retain any coherence at all. Further, even if everything is constituted by particles, those particles themselves would still need to have holistic substantial forms.
Touchstone: If one is learning, making distinctions, and processing trial and error, one is *thinking*.
You seem to have missed the point. Sure, we could define "thought" as anything involving brain-processes, and voilà, animals "think"… but that isn't useful or interesting. George obviously wants to distinguish a particular kind of intellectual activity, and some tricks done by an ape — or a computer — don't require an intellect in the Thomistic sense.
That said, I don't think it was a great paper. The examples she gives do not illustrate clearly the distinction that is key to her argument, nor was it clear why certain differences had to be of kind rather than degree. And there were a lot of typos.
He confuses behavior with intellect, then cries foul because his little empiricism and consequent materialistic view is shown to be nothing but a sham. He confuses sense experience with sense data and then ignorantly claims that they are the same. That sounds like behaviorist tosh. Are you a behaviorist, touchstone?
I would think by now, with one naturalist after another trying to hide behind the new-found non-reductive physicalism, that you'd get the point about reductionism being incoherent, but your blind faith is unshakable. I am tired of refuting you on this blog. First it's your nonsense about falsificationism as a theory of meaning, which Popper himself rejected and I showed it to you. Then it's the tiresome rhetoric about the incoherent nominalism that you espouse. We show you how that is unintelligible too, only to see you crawl back into your hole when shown wrong, without even the decency to concede.
You don't even understand the argument the article is making regarding the distinction between trial and error via sense experience and learning via intellectual abstraction and analysis, but because it shows how awful and empty your worldview is (along with the pseudo-explanations your reductionistic appetite desires) you are all upset, throwing around a) two-liners and, when confronted, b) a torrent of self-referential and lurid assertions that have absolutely nothing to do with the point the article is making.
The irony is, what George does in that article is demonstrate how not to falsely think of animal experiments like you do! Your entire claim along with those made by those who think animals can "think" or have "language" is one giant anthropomorphic fallacy.
In general, we kind of take it for granted that the case is closed.
I speak for myself here but I suspect that some may find this to be applicable to themselves as well...
As one who was a naive reductionist/materialist in the past, who was forced to eventually confront my implicit (unconscious) assumptions only to watch the entirety of reality disintegrate (so to speak) in front of my very eyes into a chaotic blob, I can say this much... It took me a very long time and a lot of reading (of the numerous refutations of materialism/reductionism) to realize how bankrupt that doctrine was, and it took just as much reading to start making sense of the world again, albeit via a better epistemology and metaphysic.
So it's not just that I take it for granted that reductionism is incoherent, I consider it an imperative truth as a means to sustain my sanity.
It's difficult to summarize the arguments in a combox, and they've been presented in great detail by contemporary essentialists like David Oderberg. In general, we kind of take it for granted that the case is closed. All right, good to know. I was *not* looking for the argument in the combox, but a pointer elsewhere -- could be a book that's not available online, for all I know. Oderberg is the name you'd offer if you were going to name one, then. Thank you.
Briefly, the very idea that everything is constituted by particles in certain arrangements--for example, "dog-wise" arrangement--is incapable of being consistent. It must necessarily presuppose macroscopic phenomena to retain any coherence at all. Further, even if everything is constituted by particles, those particles themselves would still need to have holistic substantial forms. OK, well familiar with that line of thinking.
Touchstone: taken out of context those arguments were confusing for me, but once I reacquainted myself with the text, I believe I realize where you went wrong.
The author was attempting to demonstrate that there's more underlying our behavior--and behavior in general--than the instinct-intellect dichotomy lets on. She was giving an example of how we, and animals, can act purely on trial and error, or with a little application of our imagination, or both, not relying on our thinking capacity; but we and we alone go beyond that. The sections you quoted are confusing, but given the context I don't think it's hard to figure out what she's saying.
Thomas Nagel is another name to look into, to see how your reductionism gets refuted, touchstone (he's a self-proclaimed wishful atheist too - the 'wishful' is not sarcastic by the way, he actually states it). ;-)
The irony is, what George does in that article is demonstrate how not to falsely think of animal experiments like you do! Your "Ken Ham" factor is high here, anon. Don't you know, learning about radiometric dating and all that nonsense ("oh the emptiness of that worldview")... it just shows you how not to think falsely about the authority of scripture and the six days of creation!
Your entire claim along with those made by those who think animals can "think" or have "language" is one giant anthropomorphic fallacy.
You have that backwards. The science augurs against your anthropomorphic conceits -- that's why Dr. George will have to retreat farther and farther into the corner her Thomism has painted her into, making even more contrived restrictions ("Yes, but chimps cannot play CHESS as well as humans, and that makes all the difference... THAT's really what thinking is... now").
Science just plods along and identifies not only the machinery in the brains of humans for functions like percept integration, language processing, concept formation, etc., but isomorphic structures in other animal brains. Meaning that the *divisions* Dr. George wants to impose (problem solving with senses!) break down even more badly, and the neurophysiology of humans and animals becomes more and more clearly differentiated by degree and adaptation rather than by kind.
That's non-anthropic, traitorous to the anthropocentric conceits, long and deeply held.
eppure si muove and all that. Science doesn't give a fig for your conceits about your cosmically special "immaterial intellect". It is what it is.
Think about it. What do you suppose Dr. George is trying to protect? An archaic anthropocentric view of humans, humans as ontologically sui generis. Maybe that archaic view is right, after all. But either way, your invective is confused -- the scientific view is the one assaulting our anthropocentrism.
You have to understand that touchstone aims to be confused and to confuse. I honestly doubt he read the article and even if he did I highly doubt he would even let anything sink in or make an honest effort to understand it. His usual tactic is to dismiss anything Thomistic and Aristotelian as an assault on modern science (as he did here) completely oblivious to the fact that his metaphysic (materialism) is as old as dirt going back to the days of the Pre-Socratics.
You see, once he ignores that materialism is a product of ancient times he can commit the usual fallacy of "anything newer is necessarily better than the older"... But like I said, materialism is even older than its opposite, and apparently touchstone likes to conflate it with modern science as a means to pretend it has any credibility.
Touchstone, I have a question for you. Can you recommend any sources where you think your views have been sufficiently argued? I do not mind buying and reading books.
Once again, you seem to be confused. Anthropocentrism and the anthropomorphic fallacy are two different things. Attributing distinctively human traits to animals is committing that fallacy. That's what you're doing.
You are also committing the same fallacy when you refer to science. Science does not "plod", nor does it "assault", nor does it "not give a fig"... Without the immaterial intellect, in fact, I cannot even see how science would be doable. The human intellect is a presupposition of science.
Anyways, the fact is that it's certain scientists who make certain claims and interpret data according to their metaphysical commitments that seem to *assault* the human intellect. They have however been criticized not only by George but by many others, including people who share your own worldview. People like Chomsky have ridiculed the overt claims made by such charlatans, who try to sensationalize their research in order to appeal to impressionable people such as yourself.
Furthermore, identifying structures of the brain does nothing for your cause, since the very argument made is in regards to immaterial aspects of thought. Again, your wishful materialistic thinking is all you have to run on here. Only if one accepts the absurd materialistic identity thesis does anything you have said so far actually undermine anything I have said. So unless you can prove materialism for us (and please stop conflating it with science), all you have done is claim something without justification.
Finally, I don't think George is trying to protect anything, but instead trying to clarify and correct errors made by people who think like you. If you want to take materialism for granted, that's fine. As someone who once held a similar belief, I'll tell you that you're on the road to intellectual suicide, but please stop with the dishonest attempts at misrepresentation and distortion of both science and an astute philosophical paradigm.
PS. (General Question to Everyone) Is it me or does touchstone's entire argument assume realism? Without realism (in this case scientific realism) how can any findings be used to assault an ontological paradigm? If nominalism is upheld then it's just two different conventions, neither of which has any real claim on reality, no?
All right, good to know. I was *not* looking for the argument in the combox, but a pointer elsewhere -- could be a book that's not available online, for all I know. Oderberg is the name you'd offer if you were going to name one, then. Thank you.
No problem. Real Essentialism attacks all forms of anti-essentialism, as well as all kinds of reductionism. If you're looking for solid arguments, I'd start there.
@rank sophist, No problem. Real Essentialism attacks all forms of anti-essentialism, as well as all kinds of reductionism. If you're looking for solid arguments, I'd start there.
I haven't read it, but am familiar with it as a regular catalyst of discussions elsewhere among Thomists and other essentialists. In fact, I think just such a discussion some time back was how I found Dr. Feser's blog.
I see it's available in Kindle format, so that's good.
Here's a puzzling section from Dr. Feser's first post at Biologos:
In particular, there is nothing in the picture in question or in any other picture that entails any determinate, unambiguous content. And even in the best case there is nothing that could make it a representation of triangles in general as opposed to a representation merely of small, black, isosceles triangles specifically. For the picture, like all pictures, has certain particularizing features -- a specific size and location, black lines as opposed to blue or green ones, an isosceles as opposed to scalene or equilateral shape -- that other things do not have.
Now, as someone who has spent significant time in my career working on software engineering solutions for pattern recognition, chunking and cognition, this seems conspicuously unaware of how neural networks approach phenomena like the triangle picture Dr. Feser provided. In computing, our use of "neural nets" is referred to as such because it is based on the neural architecture of the brain.
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept. It is a "visual pattern" the neural network processes. Distinguishing features are identified, based on the existing "visual vocabulary" of the network, which for humans traces all the way back to the process of visual integration as an infant, and to the training process of the software neural net (a triangle pattern won't associate with anything if there are no stored patterns to associate with).
The image is processed associatively based on the salient visual traits of the image -- color, dark/light contrast borders, "chunking" of regions and parts (like the "lines" and "corners" of the triangle). These are determinable from basic analysis of the image input, prior to any conceptual processing (think of the way an OCR package, which doesn't even need neural nets for character recognition, processes image input).
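As a concrete (if deliberately crude) sketch of that pre-conceptual stage -- the tiny 5x3 "image" and the three feature names below are my own invention for illustration, not anything from the texts under discussion -- simple features can be read off a binary pixel grid before anything concept-level happens:

```python
def extract_features(grid):
    """Return low-level features of a binary pixel grid: dark-pixel count,
    bounding-box aspect ratio, and horizontal dark/light transitions
    (a crude edge count)."""
    dark = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v]
    rows = [r for r, _ in dark]
    cols = [c for _, c in dark]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    transitions = sum(row[i] != row[i + 1]
                      for row in grid for i in range(len(row) - 1))
    return {"mass": len(dark), "aspect": width / height, "edges": transitions}

# A small filled triangle, widening toward its base:
triangle = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
print(extract_features(triangle))   # mass 9, aspect 5/3, edges 4
```

Nothing here knows what a "triangle" is; it just measures the raw pattern, which is the point.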
Associations are made, if they exist, between the "stored patterns" and the stimulus image. Recalled patterns may be more perfectly isosceles, or less, or a different color, but affinities between the patterns obtain in a statistical, Bayesian sense. P1 fires associatively against a group of stored patterns that in turn associate with neural configurations we classify as the *concept* of "triangle", and beyond that, the *word* "triangle".
The important concept here is that nowhere is there any normative, "platonic" or archetypal "triangle" needed. Visually, "triangle" is a cloud, a cluster of spatial features that are statistically related by virtue of the configuration of the "pixels" of the image (or whatever one would like to call raw percept quanta from our eyes in human processing).
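That associative firing can be illustrated in miniature (the feature vectors, labels, and cosine measure are my own stand-ins, not a model of actual neural coding): a new percept is compared against stored exemplars, and the concept attached to the best-matching pattern "fires" -- no archetype, just statistical affinity.

```python
import math

def cosine(a, b):
    # Similarity between two feature vectors, in [0, 1] for non-negative features.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Stored exemplars: (feature vector, concept) pairs -- e.g. (mass, aspect, edges).
memory = [
    ((9.0, 1.7, 4.0), "triangle"),
    ((7.0, 1.5, 4.0), "triangle"),
    ((16.0, 1.0, 2.0), "square"),
    ((12.0, 1.0, 2.0), "square"),
]

def recall(percept):
    # The concept of the nearest stored pattern is the one that fires.
    return max(memory, key=lambda m: cosine(percept, m[0]))[1]

print(recall((8.0, 1.6, 4.0)))   # prints: triangle
```

A percept that resembles no exemplar perfectly still lands in the right cluster, which is the "cloud" behavior described above.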
This is how neural nets work. They are associative across huge numbers of addressable nodes that can be "wired" together. In software applications, when we want the program to converge its associations on patterns we specifically care about, we provide triangle images, and provide positive feedback when it fires for "triangle" (or closer intermediates from where it was), training the network for strong associations with that target pattern.
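The feedback-training loop described here can be sketched with a single perceptron -- a deliberately minimal stand-in for a real neural net, with the (aspect ratio, edge count) encoding and all numbers invented for the example. The unit gets positive or corrective feedback until it reliably fires for the target pattern:

```python
def train(samples, epochs=20, lr=0.1):
    # Classic perceptron update: nudge weights by the feedback signal.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:          # target: 1 = triangle, 0 = not
            fired = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - fired           # the training "feedback"
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

samples = [((1.7, 4.0), 1), ((1.6, 4.0), 1), ((1.0, 2.0), 0), ((1.1, 2.1), 0)]
w, b = train(samples)

def fires(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

print(fires((1.8, 4.1)), fires((0.9, 2.0)))   # prints: True False
```

After training, the unit fires for unseen triangle-ish inputs it was never shown, purely from the statistics of the feedback.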
But importantly, if we don't force its training toward "triangle", the system will adapt its associations to distinguish triangles visually from circles, or single straight lines, all by itself. There is no platonic form of triangle needed, but rather just the analyzing, sorting and associative process that naturally coalesces "triangle-ish" image features together, and "square-ish" image features together, and "human face-ish" image features together, and on and on.
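The unsupervised case can be illustrated the same way (again a toy with invented features, and a bare-bones k-means pass standing in for the self-organizing association): nothing forces the system toward "triangle", yet similar feature vectors coalesce into separate clusters on their own.

```python
def cluster(points, iters=10):
    # Deterministic toy initialization: first and last points as seed centers.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # Move each center to the mean of its group.
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# (aspect ratio, edge count) vectors: three triangle-ish, three square-ish.
features = [(1.7, 4.0), (1.6, 4.0), (1.8, 4.2), (1.0, 2.0), (1.1, 2.1), (0.9, 2.0)]
print([len(g) for g in cluster(features)])   # prints: [3, 3]
```

No label ever names the clusters; "triangle-ish" and "square-ish" fall out of the statistics of the inputs alone.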
Now what is true of this “best case” sort of symbol is even more true of linguistic symbols. There is nothing in the word “triangle” that determines that it refers to all triangles or to any triangles at all. Its meaning is entirely conventional; that that particular set of shapes (or the sounds we associate with them) have the significance they do is an accident of the history of the English language. No, there's no reason to suppose that's the case given what we know about human brains, and "software brains" (even as comparatively humble and rudimentary as they are compared to a human brain). The visual features that distinguish "triangle" from "non-triangle" are not language bound. We rely on language to discuss the subject (and to discuss *any* subject), but a software daemon that processes visual input into an associative learning funnel via neural networks doesn't need to implement or take heed of language at all.
Triangles obtain without that, and become "concepts" -- clouds and clusters of associations, distinct from other clusters of associations for [what we call 'square'], distinct from yet other clusters of associations for [what we call 'circle'], and on and on. The concepts emerge from the visual phenomena, mechanically, associatively. No "universals" or platonic forms needed or useful.
Given that, I think it's hard to see how this makes any headway: Even if we regarded them as somehow having a built-in meaning or content, they would not have the universality or determinate content of our concepts, any more than the physical marks making up the word “triangle” or a picture of a triangle do. But then the having of a concept cannot merely be a matter of having a certain material symbol encoded in the brain, even if that is part of what it involves.
The pixels don't contain the meaning or content, they are just pixels, triggers and catalysts for associations in the neural net. And it's a mistake -- a conspicuous one given our available knowledge on this subject -- to suppose that a concept in the brain can be reified just by encoding one particular pattern/symbol internally. But those are not nearly all the options. As associative neural net learning shows, "triangle-ness" as an abstraction does not and cannot (by definition) rely on a SINGLE symbol encoding. Rather, the abstraction is a cluster of related associations, where "related" just denotes a statistical/Bayesian affinity between the "pixel data" for nodes in the cluster.
He continues: Nor can it merely be a matter of having a set of material symbols, or a set of material symbols together with certain causal relations to objects and events in the world beyond the brain. For just as with any picture or set of pictures, any set of material elements will be susceptible in principle of alternative interpretations; while at least in many cases, our thoughts are not indeterminate in this way.
This is as close as Dr. Feser gets here to addressing conceptual abstractions as a set of associations, but it's not very close. Here, he dismisses "set[s] of material symbols", or those symbols mapped to their referents, all as discrete symbols, all distinct "atoms" (in the conceptual sense of that term). Those sets do admit of the hazards of ambiguity, but that is not a problem for the association set that the neural net holds as an abstraction, but a problem of contextualizing those abstractions to come to some determinate semantics. It can't always be done; ambiguity and semantic underdetermination are a persistent problem in thinking and language.
All of which boils down to this: look at how neural nets establish associations and create abstractions, just by the nature of their operation, their cumulative and adaptive cycling through new input and storage of (some elements of) past input. Why would any 'universal' need to be posited in some immaterial or metaphysical sense for "triangle"? We can abstract against our abstraction of 'triangle', fuzzy as that visual abstraction must be (as a cloud of associated representations), to a mathematically strict and elegant concept -- a 'pure isosceles', for example -- but this is derivative of the lower-level abstractions. We don't need any 'universal' mode of existence for that, any more than we need it for the visual abstraction of 'triangle'.
If you doubt this, do you suppose that a neural net cannot and will not coalesce clusters of associations around triangle-ish symbols presented to it, as distinct from clusters of associations around square-ish symbols?
Visually, "triangle" is a cloud, a cluster of spatial features that are statistically related by virtue of the configuration of the "pixels" of the image (or whatever one would like to call raw percept quanta from our eyes in human processing).
Infinite regress of resemblance. Keep trying. No, that's not even a good effort at a critique, here. A set of images we would classify as "triangle-ish" vs another set we would classify as "square-ish" are so classified without any dependence on a recursive or regressive means of analysis -- that's a dialectical problem; we're talking about visual pattern analysis.
This is demonstrable. You can write multilayer perceptrons in backpropagating neural networks that create these associations, and run them in "unsupervised" mode, where the system has to sort the images into natural groupings without any preset or pre-learned notion of 'triangle', 'square', or any other pattern.
And it's been done, many times. The relationships between the images are not dependent on pre-existing hierarchies of features. They are just statistical affinities, associations made through Bayesian matching. This is what avoids the "well, what classifies *that* feature set, and then what classifies that classifying feature set, and ..." kind of regress.
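A deliberately crude, stdlib-only stand-in for the unsupervised case (nearest-seed grouping on raw pixel distance, not an actual MLP, but the label-free point is the same). Everything here -- the shapes, the noise, the seeding trick -- is my illustration, not a quote of any particular system:

```python
# Toy sketch of label-free grouping: cluster raw 5x5 binary images
# using only pixel-level similarity (Hamming distance). No label
# "triangle" or "square" is ever given; the grouping falls out of the
# statistics of the pixels alone. (Illustrative only -- a real neural
# net learns distributed features, but it is equally label-free.)

def flips(img, positions):
    """Copy img with the given pixel indices flipped (noise)."""
    out = list(img)
    for p in positions:
        out[p] ^= 1
    return tuple(out)

SQUARE   = tuple([1] * 25)                       # filled 5x5 square
TRIANGLE = tuple(1 if c <= r else 0              # filled lower-left triangle
                 for r in range(5) for c in range(5))

images = [SQUARE, flips(SQUARE, [0]), flips(SQUARE, [7, 13]),
          TRIANGLE, flips(TRIANGLE, [2]), flips(TRIANGLE, [4, 9])]

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))     # Hamming distance

# Seed two clusters with the most dissimilar pair of images, then
# assign every image to the nearer seed -- crude, but label-free.
seed_a, seed_b = max(((a, b) for a in images for b in images),
                     key=lambda p: dist(*p))
clusters = {0: [], 1: []}
for img in images:
    clusters[0 if dist(img, seed_a) <= dist(img, seed_b) else 1].append(img)

# The square-ish images end up together, the triangle-ish ones together,
# though the program never heard of either word.
print([len(clusters[0]), len(clusters[1])])      # [3, 3]
```

Two tidy groups emerge from nothing but pixel statistics -- which is the whole claim in miniature.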
The human brain, and neural nets built on the same architectural principles, don't work that way. The neural net doesn't need and can't use such a visual ontology. It works "bottom up" through an astronomical number of neurons, associations being "non-semantic" and purely isomorphic. For two images P1 and P2 to be more (or less) associated with each other, we don't need to know about 'triangle' or any other term. We only need the "pixel data", and to be able to have neuronal connections accumulate according to the "pixel associations" that obtain in P1 and P2.
If you think about a set of "happy face" icons, and a set of "sad face" icons, the icons in each set may be (will be) quite different from one another in terms of size, aspect ratio, color, contrast, curvature and angles, but if the pixel analysis diverges on "mouth corners up", versus "mouth corners down", that is how the associations will cluster.
Nothing need be known by the system about "mouth", "mouth corners", "up", "down", "face" or any of that. It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
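The mouth-corners point can be made runnable in a few lines. This toy (hand-drawn 4-row icons, raw Hamming distance, no library, all names mine) never mentions "mouth" or "up", yet nearest-neighbor matching on bare pixels groups the icons by mouth direction:

```python
# Brute pixel matching, no notion of "mouth" or "up", still groups by
# mouth shape -- because the mouth pixels are where the icons
# statistically diverge most. (A hand-rolled sketch, not a library API.)

EYES_A = [0, 1, 0, 1, 0]                  # two narrow eyes
EYES_B = [1, 0, 0, 0, 1]                  # two wide-set eyes
BLANK  = [0, 0, 0, 0, 0]
UP     = [1, 0, 0, 0, 1] + [0, 1, 1, 1, 0]   # mouth corners up
DOWN   = [0, 1, 1, 1, 0] + [1, 0, 0, 0, 1]   # mouth corners down

def face(eyes, mouth):
    """A 4-row icon: eyes row, blank row, two mouth rows."""
    return tuple(eyes + BLANK + mouth)

faces = {
    "happy1": face(EYES_A, UP),   "happy2": face(EYES_B, UP),
    "sad1":   face(EYES_A, DOWN), "sad2":   face(EYES_B, DOWN),
}

def dist(a, b):
    return sum(x != y for x, y in zip(a, b))

# For each icon, find its nearest neighbour by raw pixel distance alone.
# The labels exist only for us, afterwards, to check the grouping.
nearest = {name: min((n for n in faces if n != name),
                     key=lambda n: dist(faces[name], faces[n]))
           for name in faces}
print(nearest)   # each icon's nearest neighbour shares its mouth direction
```

The eyes vary more than you might expect and it doesn't matter: the mouth rows dominate the pixel divergence, so the grouping tracks them.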
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept. It is a "visual pattern" the neural network processes. Distinguishing features are identified, based on the existing "visual vocabulary" of the network.
No perception is unambiguous. If we've learned anything from the post-modernists, or from Wittgenstein, it's that, unless you endorse essentialism, there's no such thing as a non-interpretive perception. Anything that you see, hear, read--all of these are merely your own interpretations. There is no such thing as wholly determinate content.
Consider Wittgenstein's refutation of Hume's imagism. If Hume is correct, then we merely perceive images--almost like photographs--that are then stored in our minds. Wittgenstein gives us the example of the image of a man on a hillside. Is he walking up or sliding down? Nothing in the make-up of the picture can tell us. The content is utterly ambiguous. So it goes with everything. The only explanation is an appeal to irreducible intentionality, and machines simply do not possess it.
The reason a computer is capable of registering certain "determinate" things is simple: we programmed it to do what it does. No matter how complex the system architecture gets, a computer is ultimately as simple as a series of symbols. A computer matches "this symbol" to "that symbol" because that's how it is designed to work. It has no intentionality aside from that which we give it. Therefore, it only has determinate content because we programmed it to recognize certain things in certain ways. It's that simple.
But importantly, if we don't force its training toward "triangle", the system will adapt its associations to distinguish triangles visually from circles, or single straight lines, all by itself.
That's because the lines of code--the series of symbols--that you used to program the system are set up to produce certain effects. "This symbol" refers to (intentionality) "that symbol". Whether or not a computer can recognize shapes is irrelevant. To the computer, the shape is wholly indeterminate without an infusion of intentionality--that is, our programming to tell it that "this" means "that". It's designed in such a way that it can sort images "all by itself", but its ability to sense the similarity was programmed by us. It can't help but find it, because it was designed to do so. Even if there was no similarity, it would be forced to group certain objects because of lower-level coding.
Triangles obtain without that, and become "concepts" -- clouds and clusters of associations, distinct from other clusters of associations for [what we call 'square'], distinct from yet other clusters of associations for [what we call 'circle'], and on and on. The concepts emerge from the visual phenomena, mechanically, associatively. No "universals" or platonic forms needed or useful.
No need to bring up the New Riddle again, I assume.
However, it's important to remember that there are only two options when dealing with a system of signs: either it obtains its meaning from the "outside", or it obtains its meaning via infinite internal self-reference. That is, either we impart determinate meaning, or the system can only ask itself what one ambiguous symbol means by appealing to another. This applies even if it perceives something, because this perception is made and stored with code. Unless each symbol is given a hard, fast, determinate meaning by us, then the machine is left to forever appeal to extra symbols, each of whose meaning is as ambiguous as the last. Of course, this second option makes the entire system vacuous of content. You can thank Jacques Derrida for that argument.
As associative neural net learning shows, "triangle-ness" as an abstraction does not and cannot (by definition) rely on a SINGLE symbol encoding. Rather, the abstraction is a cluster of related associations, where "related" just denotes a statistical/Bayesian affinity between the "pixel data" for nodes in the cluster.
Unless associations are determinate, then you're left with Derrida's vacuous set of infinite reference. But a computer does not have intentionality by its own nature, and so it cannot give a set of symbols determinate content.
Those sets do admit of the hazards of ambiguity, but that is not a problem for the association set that the neural net holds as an abstraction, but a problem of contextualizing those abstractions to come to some determinate semantics. It can't always be done; ambiguity and semantic underdetermination are a persistent problem in thinking and language.
It's impossible without intentionality, Touchstone. Computers only have it because we give determinate meanings to the symbols that run them. If the meanings of the symbols were ambiguous, then there would be no place where the "buck stopped", so to speak; and we'd be left with Derrida.
Nothing need be known by the system about "mouth", "mouth corners", "up", "down", "face" or any of that. It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
You've manifestly failed to realize that the problem extends to the very architecture of the computer doing the matching. The code itself is a series of symbols. These symbols either have determinate content or indeterminate content. If the content is determinate, then it has intentionality--because we put it there. If the content is indeterminate, then you're left with an infinite regress without any content.
Out-of-touchstone said... The visual features that distinguish "triangle" from "non-triangle" are not language bound.
What's that got to do with anything? That has nothing to do with what he's talking about. As usual, you don't even get the point. You're in such a rush to prove how wrong Feser must be that you never stop to actually figure out what he's saying in the first place.
Might also call on Glenn for a re-link to that paper in the other thread regarding computer programming and its relation to Aristotelian logic. If nothing else, it will give Touchstone something else to chuckle at while writing obfuscatory sentences. Perhaps my 'perceptrons' are malfunctioning.
Out-of-touchstone said... The human brain, and neural nets built on the same architectural principles, don't work that way
Uh, right. So if the human brain doesn't work that way, and yet the human mind does, then the mind cannot simply be equivalent to the brain. Congratulations, you just proved it yourself.
It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
No problem, Josh. Touchstone doesn't seem to be familiar with the problems inherent to intentionality and semiotics. Because it's impossible for a sign to have its own wholly determinate content -- nothing about "this symbol", in itself, tells the whole story -- it must be placed there from the outside. Either this "outside" is wholly determinate (as with intentionality) or it is indeterminate, in which case we must appeal to further signs forever.
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
These same problems have plagued semiotics for decades. Crazies like Derrida bite the bullet on intentionality and tell us that human thoughts work as signs, too. If Touchstone did that, then all meaning would vanish in an instant.
Touchstone, I have a question for you. Can you recommend any sources where you think your views have been sufficiently argued? I do not mind buying and reading books.
I don't know which views you are referring to -- my views on 'universals' and theory of mind, per this thread, perhaps? Or my views on (scientific) epistemology, more broadly? My atheistic conclusions?
Give me a little more direction, please.
That said, here are the books that come to mind when reading your request:
1. Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter
2. The Methodology of Scientific Research Programmes by Imre Lakatos
3. Arguing About Gods by Graham Oppy
4. The Third Chimpanzee by Jared Diamond
5. A Universe From Nothing by Lawrence Krauss
I have not read Hofstadter's book, but hearing him in an interview I can't say I was in any way impressed with what he had to say.
Lakatos is probably the best reference on that list. He tried to salvage whatever was left from the ruins of positivism (although it's debatable how successful he was, especially given the destructive force of Feyerabend's work on the myth of scientism).
Oppy is not that great either. Craig has refuted many of his objections to theism and has shown how faulty his thinking is. One common theme in his work is his misunderstanding and consequent caricaturing of arguments, which makes his books/articles even weaker.
The 3rd chimp book I've not read nor have I ever heard of the author.
A Universe From Nothing is just LOL (to echo the sentiments of another anon user). I don't know what's more awful: how Krauss tarnishes science with his sophisms and misrepresentation of cosmological theory (much in line with what Popper called promissory materialism) or his inability to do any kind of philosophy... This is the same guy that claimed in a debate that 2 + 2 = 5!
Anon7:52: Jared Diamond is that guy who studies civilizations and their rise and fall. I actually watched his TED talk recently: http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html I haven't read any of his books, though.
No perception is unambiguous. Or rather, they are all ambiguous. Or better yet, we agree that they are pre-interpreted as percepts, which is what I was saying in the comment you responded to, here. I said the image is "determinate, unambiguous content, as a percept", and the point of that contradiction of Feser (and perhaps you here) is that we don't have information just at the post-interpretive layer, but "raw input" at the pre-interpretive layer. As the image comes into our eyes, it's determinate content AS raw visual input. That's a key distinction, because it is at this layer, and prior to/without human abstraction, that we can analyze (with computers, if we want) this raw input in such a way that categories, classes and groups automatically emerge in the neural network. This is just to say that there are objective features of these images that have statistical feature affinities with each other, but which are not (yet) attached to any linguistic concepts.
If we've learned anything from the post-modernists, or from Wittgenstein, it's that, unless you endorse essentialism, there's no such thing as a non-interpretive perception. You're confusing PERCEPT with PERCEPTION. Easy to do, but these are not the same thing. By 'percept', I am referring to our raw sense-data, the 'input pixels' we start with prior to any interpretation, and which an interpretation must have to operate on. There are, and must be, non-interpretive *percepts*, else you have nothing to interpret. Your Wittgenstein reference is problematic on its own, but that's not relevant to my point about *percepts* as raw input.
Anything that you see, hear, read--all of these are merely your own interpretations. There is no such thing as wholly determinate content. That can't be true, transcendentally. There must be some raw input into the senses which we take as the starting point for the process of contextualization and interpretation. When my computer program, which interprets visual input for the purposes of identifying English letters and numbers gets a new item to process, it's a "brute image" -- it's just a grid of pixels (quantized data so that the computer can address it for interpretation). In a human, or a chimp, the optic nerves terminate in the brain (at the LGNs in the thalamus) and provide raw visual stimuli to the neural net, whereupon all sorts of integrative and associative interpreting begins across the neural mesh.
Again this is important to understand because the pre-interpretive features of our input data provide objective points of differentiation and classification -- the basis for meaning. That is nothing more than to note that what we call "triangle-ish" images and "square-ish" images are not so called by caprice; the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
Consider Wittgenstein's refutation of Hume's imagism. If Hume is correct, then we merely perceive images--almost like photographs--that are then stored in our minds. Wittgenstein gives us the example of the image of a man on a hillside. Is he walking up or sliding down? Nothing in the make-up of the picture can tell us. Oy, more reliance on philosophers for subjects like visual perception and cognition. Cells in the retina and the thalamus fire in response to movement, changes in light/dark, velocity. Movement fires different cells for horizontal activity and vertical activity. So before any interpretation, which happens in the visual cortex, the brain receives not just spatial-chromatic ("picture") information, but signal fires for dynamics of motion direction, velocity, and other changes. These are discrete signals themselves, like the "picture" data, and not interpretation itself. Just fodder for integration as the first step of that process in the primary visual cortex.
Which is just to say that Wittgenstein, bless his heart, is talking out his behind here, from a position of thorough ignorance of what is going on in his own brain, physically. He can take some comfort in the fact that he was hardly more equipped by science to speak on the matter than was Aquinas, but when we read something like that NOW, it's just obsolete as a context for thinking about this subject. The brain's "pictures" DO have motion cues that come with them, prior to any interpretation, for direction and velocity. This is how sight works, before the visual cortex even gets hold of it. The "sliding down" vs "walking up" interpretations are NOT on an even footing, and BEGIN that way for the brain, as our visual sense machinery is constantly streaming motion cues (and other cues) in along with "image" data.
The reason a computer is capable of registering certain "determinate" things is simple: we programmed it to do what it does. No matter how complex the system architecture gets, a computer is ultimately as simple as a series of symbols. A computer matches "this symbol" to "that symbol" because that's how it is designed to work. It has no intentionality aside from that which we give it. Therefore, it only has determinate content because we programmed it to recognize certain things in certain ways. It's that simple. I'm just noting, as things roll on, how often this is "simple" for you. ;-)
In the case of unsupervised learning, the neural net doesn't have recognition of certain things wired into it -- that's what "unsupervised" indicates in the terminology. Rather, the system is programmed to "recognize", or more precisely, to build associations and to maintain and refine them through continuous feedback. So that means it can and will group "triangle-ish" images and "square-ish" images (if its mode of recognition is visual/image-based) without it being told to look for 'triangle' or told what a 'triangle' is. The system doesn't speak a language or use symbols that way, but it "understands" triangle-ishness and square-ishness such that it can predictably process images that we would say belong in this pile or that (or neither) correctly. It has demonstrable knowledge of the concepts, then in a pre-linguistic way, provably -- give it a test, and it can distinguish between those types. Add in five-pointed stars, and it will learn to associate star-ish patterns together, without ever knowing, in a labeled or linguistic sense, what a "star" is.
But hold on, you say -- it only does that because we programmed it to recognize and categorize generally. Yes, of course, but so what? We are programmed by our environment to recognize and categorize. These are adaptations with enormous advantages for the evolving population. If that point doesn't suffice to dismiss your "it's programmed" complaint, then it seems the question just shifts to skepticism about evolution and emergence.
Which is fine, and I need to do nothing more than note that if that's the case, all the worse for Dr. Feser's article. He has completely misunderstood the basis for human pattern recognition, visual integration and typing. He can then say, well, even if all that science-y stuff is right, it still takes God to make that happen, a telos. Fine, but it makes the article a throw-away, an exercise in mistakes about cognition and missing the point, the real basis for his superstitions.
Anon7:52: Jared Diamond is that guy who studies civilizations and their rise and fall. I actually watched his TED talk recently: http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html I haven't read any of his books, though. His "claim to fame" book, and the way I, like most of his fans, became aware of him, is Guns, Germs, and Steel. That's a book I highly recommend, too, but it's not one that topically covers the epistemology/worldview ground I was going for in my list. His Collapse, the book behind the talk you link to, was by far his poorest offering, in my opinion.
@rank sophist That's because the lines of code--the series of symbols--that you used to program the system are set up to produce certain effects. "This symbol" refers to (intentionality) "that symbol". Whether or not a computer can recognize shapes is irrelevant. To the computer, the shape is wholly indeterminate without an infusion of intentionality--that is, our programming to tell it that "this" means "that". It's designed in such a way that it can sort images "all by itself", but its ability to sense the similarity was programmed by us. It can't help but find it, because it was designed to do so. Even if there was no similarity, it would be forced to group certain objects because of lower-level coding.
No, because if you are coding for similarity as the basis for grouping, "lower-level coding" won't help group. Grouping is a function of a similarity test.
Humans are programmed by the environment to do similarity testing, and make associations based on that, just like our associative back-prop neural net software programs do -- we developed the software architectures from what we've learned about the human brain, so the software design is informed by the hard-wired design bequeathed to us by evolution.
But that just makes humans a fantastically more scaled out version of machines we build. But machines all the same, in the sense of deterministic finite automata.
This is important for understanding the poverty of Dr. Feser's article. Even if I stipulate, arguendo, that some kind of Cosmic Mind or Supernatural Deity is required for "bootstrapping" the environment with a telos that is sufficient to program humans and other animals with the "wetware" to make associations and develop them in arbitrarily complex ways through recombinant patterns of trillions of neurons, to recognize, categorize, contextualize, etc., THIS DOESN'T HELP DR. FESER'S ARTICLE ONE BIT. For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
Not God as the source of any telos now manifest in our brains working as they do. But rather, Dr. Feser's hylemorphic dualism. If you complain that any explanation on the mechanisms of recognition, chunking, contextualization, abstraction and conceptualization just points back to a creator, well, you've thrown Dr. Feser under the bus, because that mechanism as a mechanism does not need and cannot use the dualistic ontology that Dr. Feser is arguing for with his triangle example.
So, would you agree, then, that appealing to a fundamental "prime telos", so to speak, as what you call 'God' abandons Dr. Feser's appeal to the necessity of immaterial intellect for THAT individual for the basic task of identifying/recognizing 'triangle'?
Humans are good pattern recognizers, and I see this pattern a lot:
A: We need an immaterial intellect for conceptualization and understanding.
B: No we don't. Look at this program...
A: Well, that just proves something is needed to program us or the machine for conceptualization or understanding.
B: But the question was about the need for immaterial intellect to *perform* the task of conceptualization and understanding!
A: You still need a Cosmic Designer to have that mechanism come to be.
Dr. Feser is saying an immaterial intellect is needed to perform the local act of interpretive meaning. Not God doing it, but the individual's 'immaterial intellect'.
You are playing the role of A here, shifting the 'immaterial intellect' away from the individual, and appealing to a Great Cosmic Immaterial Intellect which has created the mechanisms the brain uses to interpret and establish meaning. That's a good move in practical terms as a "ground shift", but it leaves Dr. Feser's claims in the ditch, unneeded and frivolous when you do that.
Nick Corrado said... I read Godel, Escher, Bach. It was very well-written, though that's not to say I endorse Hofstadter's conclusions.
What, you don't believe that recursion is 𝓜𝓪𝓰𝓲𝓬𝓪𝓵?? GEB is amusing, but the philosophy is, as you might expect, ignorant fluff. I won't LOL at Krauss, but only because everyone else already has and he doesn't deserve the attention. I have new sympathy for Out-of-touchstone, though. He couldn't be expected to know any philosophy when he's been fed all this nonsense by people who should know better. If he hangs around here, there's hope he will at least pick up some real understanding of it.
Stupid blogger. When you post a comment, the letters show up, but not on the main post (at least for me). That should have said, "What, you don't believe recursion is ~ṁâģïçãļ~?"
Out-of-touchstone said... Your Wittgenstein reference is problematic on its own, but that's not relevant to my point about *percepts* as raw input.
No, your point about percepts is irrelevant. That's not what anyone is talking about.
There must be some raw input into the senses which we take as the starting point for the process of contextualization and interpretation.
Yes, of course. It's the meaning, the interpretation where the indeterminacy comes in.
the pre-interpretive features of our input data provide objective points of differentiation and classification -- the basis for meaning.
Again, nobody ever said they weren't necessary. Just that they aren't sufficient.
the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
See, when you make silly claims about not needing forms and then say that there are "features that distinguish them as groups", it just goes to show that you do not understand what forms are about in the first place. Why not make an effort to find out?
You don't need forms.... what distinguishes them from one another is OF COURSE.... the number of ponies related to that IMAGE!!!!
after all, reality is just ponies all the way down, and if you get them small enough .... yada yada ... they become dots, and the ponies in your head can see the ponies that fly out of the other ponies, creating a pattern that is not a pony, even though it is really made just of ponies, therefore proving once and for all that everything is a pony ... deep down
@rank sophist, No need to bring up the New Riddle again, I assume. Well, at least it was *relevant* before, even if you misunderstood the problem. ;-)
However, it's important to remember that there are only two options when dealing with a system of signs: either it obtains its meaning from the "outside", or it obtains its meaning via infinite internal self-reference.
Meaning only obtains "inside", as a set of associations. But they aren't "semantical" in a fundamental sense. That is, meaning is not derived from anything more fundamental as a "source of meaning", something semantically transcendental to it in the human brain. Instead, it's a hugely complex graph, made up of neural associations. A useful bit of pedagogy in getting this point is the definition of English words. The definition of a word in English is given not in an appeal to more fundamental (transcendental) units of meaning, but just in terms of other English words. The definition points to other concepts that it is associated with.
So it's not regressive, and doesn't fall into a vicious cycle demanding ever more "fundamental" bases for meaning. It's a peer graph, and a huge one, too, in the case of English.
It's all just circular, then??? There's no "meaning" in that, surely! In a roundabout way that's true, every word is defined just in terms of other words. But this (and here, there are more and better works in a technical sense than Hofstadter, but if you've not read Hofstadter and this seems problematic to you, you should read him) is how meaning obtains; not by appeals to more fundamental elements of meaning, or a superstitious appeal to an 'immaterial intellect', but as the network of subject-object relationships, the graph of arcs of meaning between nodes.
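A miniature of the dictionary point, as a runnable sketch (the five-word lexicon is made up for illustration): every word is defined only by other headwords in the same network, the graph is closed and finite, and following definitions just walks the graph rather than bottoming out in anything "more fundamental":

```python
# Toy "dictionary as peer graph": each word's definition is just a
# list of other words in the same lexicon. No appeal outside the
# network is needed for the structure to hang together.

lexicon = {
    "big":    ["large"],
    "large":  ["big", "great"],
    "great":  ["big"],
    "small":  ["little"],
    "little": ["small"],
}

# Every defining word is itself a headword: the graph is closed.
closed = all(d in lexicon for words in lexicon.values() for d in words)

def reachable(word):
    """Follow definitions from `word`; the walk visits a finite set of
    peers and terminates -- circular, but not viciously regressive."""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(lexicon[w])
    return seen

print(closed, sorted(reachable("big")))   # True ['big', 'great', 'large']
```

Circularity here is benign: the walk from "big" cycles through its peers and stops, and the "meaning" of each node just is its position in that web of arcs.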
This does NOT mean, however, that "outside" doesn't matter. "Inside" itself is meaningless without the accompanying concept of "outside", remember. Without the outside, there are no referents for any symbols we might establish. The "books" are kept internally, but the "arcs" of meaning are predicated on our inside interacting with the outside. Meaning as "internal only", no outside needed, is incoherent. For meaning to cohere, we must have "outside" referents for our internal associations.
Out-of-touchstone said... Oy, more reliance on philosophers for subjects like visual perception and cognition.
Did I mention that this is not about visual perception??
signal fires for dynamics of motion direction, velocity, and other changes. These are discrete signals themselves, like the "picture" data, and not interpretation itself.
Oh geez, he's talking about a painting, not watching the guy fall down the hill on his backside in real time.
The system doesn't speak a language or use symbols that way, but it "understands" triangle-ishness and square-ishness such that it can predictably process images that we would say belong in this pile or that (or neither) correctly.
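A minimal sketch of that kind of system (the feature vectors and pile names are invented toy data): a nearest-centroid sorter that files new percepts into "this pile or that" purely by statistical proximity, with no symbol for "triangle" anywhere inside it.

```python
import math

# Invented toy features: (corner count, side-length ratio). The system
# never handles a symbol meaning "triangle"; it only measures closeness
# to clusters of previously seen feature vectors.
training = {
    "pile_A": [(3.0, 1.0), (3.0, 1.1), (3.0, 0.9)],    # triangle-ish examples
    "pile_B": [(4.0, 1.0), (4.0, 1.05), (4.0, 0.95)],  # square-ish examples
}

def centroid(points):
    """Average each coordinate across a pile's examples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

centroids = {pile: centroid(pts) for pile, pts in training.items()}

def sort_into_pile(features):
    """Assign a new percept to whichever pile's centroid is nearest."""
    return min(centroids, key=lambda p: math.dist(centroids[p], features))

print(sort_into_pile((3.0, 1.02)))  # lands in pile_A (triangle-ish)
print(sort_into_pile((4.0, 0.98)))  # lands in pile_B (square-ish)
```

Whether that proximity-sorting amounts to "understanding" triangle-ishness is, of course, exactly what the thread is disputing; the sketch only shows what the sorting mechanism looks like.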
You put "understand" in quotes because that isn't real understanding. And if something analogous were all there was to human meaning and intention, then it would not be real understanding either. So either there is something more going on, or else you are seriously claiming to be an eliminativist about understanding.
He has completely misunderstood the basis for human pattern recognition, visual integration and typing. He can then say, well, even if all that science-y stuff is right, it still takes God to make that happen, a telos.
Sigh. "If that science-y stuff is right"??? Did you really say that with a straight face, or do you know deep down what a strawman that is?
Out-of-touchstone: Well, I don't understand what Feser is saying, but I know something about visual image processing, so I'll just talk about that instead. And if Feser disagrees with anything I believe then he's a big anti-science dummy head!!!
That is, either we impart determinate meaning, or the system can only ask itself what one ambiguous symbol means by appealing to another. Yes, but "appealing to another" decreases the ambiguity, and establishes meaning! That process of association is the process of creating meaning, as it provides differentiation and specification out of ambiguity. Every association A->[B,C] leaves out [D,E]. Even if A, B, C, D and E are, as stand-alone entities, perfectly ambiguous conceptual entities, by creating associations like A->[B,C] we have created some new meaning in the network. For now we know that if we have A it activates [B,C] BUT NOT [D,E]. This is what meaning is: rules for "this", but not "that".
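The A->[B,C] point can be shown in a few lines (the node names are arbitrary placeholders): even if every node is meaningless on its own, the association itself is already a rule for "this, but not that".

```python
# Toy association network (node names A..E are arbitrary placeholders).
# Each entry says which nodes an activation spreads to.
associations = {
    "A": ["B", "C"],
    "B": ["E"],
}

def activate(node):
    """Return the set of nodes a single activation of `node` spreads to."""
    return set(associations.get(node, []))

# Activating A reaches B and C -- "this" -- but not D or E -- "not that".
assert activate("A") == {"B", "C"}
assert "D" not in activate("A")
```

The differentiation lives entirely in the wiring: nothing about "A" intrinsically means anything, yet A->[B,C] versus A->[D,E] are distinct states of the network.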
This applies even if it perceives something, because this perception is made and stored with code. Unless each symbol is given a hard, fast, determinate meaning by us, then the machine is left to forever appeal to extra symbols, each of whose meaning is as ambiguous as the last. Of course, this second option makes the entire system vacuous of content. You can thank Jacques Derrida for that argument. It's only vacuous if one requires that "meaning" be understood in a magical, supernatural way, something apart from the arcs and nodes that make up meaning and context in a network. Derrida is one who saw this with acute clarity. Derrida understood that “pure language” entails terms necessarily including multiple senses that are irreducible to a single sense that provides the normative, "proper" meaning. This is an artifact of the graph, the mesh of associations we maintain that constitutes meaning and sense for us. Computational linguistics researchers and AI guys look at that and nod -- this is what they've understood as manifest from their computational machinery all along.
It doesn't make language empty of meaning, for Derrida or anyone else. It is "babelized", to use a term I think he came up with for this idea, internally translated, overloaded *and* idiomatic. It's "impure" because it's associative, and associative in a fuzzy, graphed way (neural networks!), against the 'pure' idea of language and meaning as transcendental to any network in which it might be expressed -- the immaterialist superstition, in other words.
I have new sympathy for Out-of-touchstone, though. He couldn't be expected to know any philosophy when he's been fed all this nonsense by people who should know better. If he hangs around here, there's hope he will at least pick up some real understanding about it.
Fair enough. But for that to happen he needs to change his attitude and show a desire to learn. Dropping in to unload the usual torrent of lurid yet incoherent assertions only to be refuted, while remaining completely oblivious to said fact isn't going to help him.
It takes a lot of will power and a lot of reflection to free yourself from materialistic assumptions, especially when they are naive/unconscious (I speak for myself here). I have seen no effort on his part to even try to understand much of what I and many others have told him so far.
Compound that with the fact that any time he stops by, the thread is usually derailed into the "Touchstone Show", and we have ourselves a little problem.
Well, he would be someone nice to read and talk with, if it weren't for the fact that all he cares about is ripping at other people's ideas with assertions and intimidation.
But that is web atheist behavior for you ... I still hope it is the internet that is to blame for that.
Seriously, I feel like Touch is talking about something irrelevant to Thomistic ideas. I think that's basically right. The Thomistic concepts, as I read them in Dr. Feser's posts, and in a much more extreme way in Dr. George's gerrymandering for human intellect as ontologically peerless (that is, dualist), only serve to confound and obfuscate the questions they get applied to.
Perhaps it's better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment. Thomism is not a falsifiable framework or set of propositions, so it's not a matter of it being "false" or "wrong" in that context. "True" and "false" aren't applicable, there. Rather, it's just a kind of "fog" that gets layered onto whatever is the subject of the day.
A recurring theme in my reading of Dr. Feser's posts is that those treatments are just inert or frivolous with respect to the subject, which, like cognition or semantic processing, avails itself of other heuristics for deriving knowledge and insight.
What we do know about human cognition and pattern matching and visual integration and associative networks as the substrate for meaning is just conspicuously absent in Dr. Feser's exposition. A philosopher is not necessarily a scientist, but science is a resource a careful philosopher should at least take passing note of on matters like this.
Out-of-touchstone said... we developed the software architectures from what we've learned about the human brain, so the software design is informed by the hardware design bequeathed to us by evolution.
Informed by design? Congrats, you just appealed to formal and final causes. But perhaps that was only "a way of speaking", in which case, feel free to make the necessary point without using any philosophically equivalent synonyms.
THIS DOESN'T HELP DR. FESER'S ARTICLE ONE BIT. For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
There is no "intervention", which suggests to me that again you are way off base with what Feser is saying in the first place. But you are wrong even apart from that, because you are considering only the "outside" effects. It is entirely possible for God to program the universe to make creatures capable of acting in interpretive-like ways without having any immaterial intellects themselves. (Or on your view, this apparently all just traces back to the Big Bang, because the Big Bang is a thinking thing... or something). But if a creature actually understands or interprets something on its own merits, then yes, that can only be because it possesses an intellect. Nothing you have said refutes this (because you are not even actually addressing it). Even if your alternative worked, it would at most be an alternative, not a rebuttal.
Meaning only obtains "inside", as a set of associations. But they aren't "semantical" in a fundamental sense.
So "meaning" is not "semantic"? And that's not a problem? I guess you are an eliminativist.
It's all just circular, then??? There's no "meaning" in that, surely! In a roundabout way that's true, every word is defined just in terms of other words.
But of course that obviously isn't true. In fact, if you eliminate meaning and understanding (the real things, not the simulated outside imitations), then no argument you offer can be "true". You can't even claim it is "statistically clustered in a way likely to be true" because you cannot show that our pseudo-thoughts cluster in a suitable way. So there is no point listening to anything you say. It's just a network of nodal connections; any relationship to the truth is purely coincidental.
I would love that you would demonstrate what you say instead of just assert.
Now about your falsificationism... well, it might not be falsifiable by your particular epistemological theory, but then where are the arguments for why yours is the only one that works???
You have to conclude that your "Performative model" system is then the only one that works.... and "works" has to be defined by that same system, of course.
Another thing... he wasn't really talking about cognition; cognition was just a related topic. What he was talking about was the ontology of what we were seeing. Your whole talk about how we come to know stuff is only important to the matter when you project your metaphysical beliefs onto what is going on in the brain ... which is just pure assertion.
So again ... back to your nominalism. Start with what is really out there, skip the mechanisms in the brain (you can talk about them in general terms because they are not part of the discussion); then somehow conclude that this whatever-it-is outside the brain is what we think it is. Because seriously... when you say that all the "data" coming from stuff outside is ambiguous ... it just goes through my head that you will someday confuse a chair with an elephant. I mean you will confuse the objects, not their names or images; you are going to see the elephant instead of the chair, because it is really all ambiguous in the beginning.
Perhaps it's better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment. Thomism is not a falsifiable framework or set of propositions, so it's not a matter of it being "false" or "wrong" in that context. "True" and "false" aren't applicable, there. Rather, it's just a kind of "fog" that gets layered onto whatever is the subject of the day.
I'm so freaking sick of this positivistic CRAP. I wish we could all just agree that anyone who doesn't get past this nonsense is not worth engaging with. All it does is deflect from worthy opponents.
Immaterial conception is not a 'God of the gaps' "hypothesis." None of what you have said here refutes or even really applies to the substance of Feser's article. Why not quote from it, and then contradict a quote with reasons supporting?
It's only vacuous if one requires that "meaning" be understood in a magical, supernatural way
What is vacuous is your attempt to fabricate meaning out of reductionistic, incoherent materialist concepts that do no justice to the splendor of reality but are rather a finely chopped-up remnant of its fullness. We're talking about a cat and in its place you offer in your definition a broken bone. You then proceed to commit ad hominem fallacies against an entire metaphysical paradigm in a pathetic attempt to propagate what has already been shown to you to be false. You're still in the middle of an infinite regress problem and everyone recognizes it except you. Obfuscating the issue with unnecessary verbiage that is irrelevant, while simultaneously misunderstanding the other side, is either total ignorance or intellectual dishonesty!
What we're telling you is that even the "babelized" claim to language itself would require determinate meaning in order to make sense. We are all aware of the contextual interpretations of language and we all heard the usual relativism tosh. The fact is you either have relativism and infinite regress into absurdity (epistemological nihilism) or you recognize the necessity of The Absolute.
And please stop with the strawmen against Theism, because you're starting to sound very juvenile.
You don't need forms.... what distinguishes them from one another is OF COURSE.... the number of ponies related to that IMAGE!!!!!
after all, reality is just ponies all the way down, and if you get them small enough .... yada yada ... they become dots, and the ponies in your head can see the ponies that fly out of the other ponies, creating a pattern that is not a pony, even though it is really made just of ponies, therefore proving once and for all that everything is a pony ... deep down
Seriously, you come up with the funniest stuff.
This has to be as good as the "natural selection, the feral spirit of evolution" line.
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
I'm saying that is a distinction without a difference -- you can't "feel" your brain making these associations and activating these networks of connections, because there are no nerves in the brain to give you awareness of what is actually going on physically in your head, so you suppose it's "magic", a ghost in your machine.
Humans have machinery that enables and develops representational thinking, and uniquely as a matter of degree and depth, if not necessarily of kind with respect to other animals or machines, meta-representational thinking. That isn't controversial.
What is controversial is whether that representation is real, reified in nature or not.
We can suppose some "immaterial intellect" is required for representation or meta-representation to occur. But we can similarly suppose "immaterial particle faeries" attend to the actions and behaviors of elementary particles, moment by moment, keeping everything moving as it should. There's no falsifying it, there's only the realization that such conjectures do not add anything to our knowledge or models of the world around us.
@Josh, I'm so freaking sick of this positivistic CRAP. I wish we could all just agree that anyone who doesn't get past this nonsense is not worth engaging with. All it does is deflect from worthy opponents. This is not positivism. You can dismiss it as you like, but if you are dismissing it on the basis of its positivism, you're not following what's being said.
Immaterial conception is not a 'God of the gaps' "hypothesis." None of what you have said here refutes or even really applies to the substance of Feser's article. Why not quote from it, and then contradict a quote with reasons supporting? You can see above that I, unlike anybody else here, have quoted Dr. Feser's article numerous times, and at length.
See my posts with these timestamps for the very thing you're asking for, already provided by me, not provided by anyone else in this thread:
August 18, 2012 9:04 PM August 18, 2012 9:35 PM August 18, 2012 9:35 PM (There are two separate comments with the same minute stamp).
Similar engagement with Dr. George's article, which you offered for consideration, if I recall, occurs upthread of that.
Out-of-touchstone said... Perhaps it's better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment.
So Feser isn't just WRONG, he's STUPID. Ok. And your reaction is, man, everyone here is saying stuff that makes so little sense to me they must all be idiots! It never occurred to you at any point to think, gee, maybe I'm not understanding what their point actually is, perhaps I should ask some questions? Because you're some kind of super-genius, naturally. Do you honestly expect anyone to take you seriously?
Yeah, watching this conversation, a Family Guy paraphrase comes to mind. "I suppose I should find all this annoying, but really I'm just bored as hell."
TS does this same act in each thread. A lot of mangled, tortured understandings, lecturing, and completely ignoring anything that causes trouble for his position. This time he seems to not even understand what he's criticizing. I'm sure he'll decide it's all everyone else's fault, not his. After all, he's a programmer, unlike... well, actually we've got several programmers here.
This is not positivism. You can dismiss it as you like, but if you are dismissing it on the basis of its positivism, you're not following what's being said.
Of course it's positivism. All you do is take the positivist mentality, simply replace 'verificationism' with 'falsificationism', and then proceed to assert in your usual bombastic, yet obscurantist tone that statements are not meaningful because they cannot be said to be either "true" or "false" (I would like to point out the irony of putting the word "true" in quotes, since by implication it undermines its very value). Apart from the fact that I've shown you that such claims are worthless, due to the insurmountable problems falsificationism faces, I have also provided you with a quote from Popper himself warning (better yet, disciplining) anyone who dares to abuse his notion of falsificationism by pretending that it's something other than what it truly is. Since you continue to espouse this ridiculous view that Popper warns against, here is the quote from the Logic of Scientific Discovery once again:
"Note that I suggest falsifiability as a criterion of demarcation, but not of meaning. Note, moreover, that I have already (section 4) sharply criticized the use of the idea of meaning as a criterion of demarcation, and that I attack the dogma of meaning again, even more sharply, in section 9. It is therefore a sheer myth (though any number of refutations of my theory have been based upon this myth) that I ever proposed falsifiability as a criterion of meaning. Falsifiability separates two kinds of perfectly meaningful statements: the falsifiable and the non-falsifiable. It draws a line inside meaningful language, not around it."
I even bolded the most important part for you in case you're unable/unwilling to process/understand what he is saying.
So stop abusing falsificationism, stop distorting its utility and stop being so damn intellectually dishonest!
What vacuous is your attempt to fabricate meaning out of reductionistic, incoherent materialist concepts that do no justice to the splendor of reality but are rather a finely chopped-up remnant of its fullness. I think the real point of resistance is showing through here. You're aesthetically not all tingly about alternatives to your intuitions, ergo it's false. Somehow. Must be.
We're talking about a cat and in its place you offer in your definition a broken bone. You then proceed to commit ad hominem fallacies against an entire metaphysical paradigm in a pathetic attempt to propagate what has already been shown to you to be false. Well, it's false on stipulation of the primacy of your own paradigm, perhaps, but that's just to beg the question. It's not been shown such, or even engaged (with a few noble exceptions noted!) except as an exercise in affirming one's consequent.
You're still in the middle of an infinite regress problem and everyone recognizes it except you. Obfuscating the issue with unnecessary verbiage that is irrelevant, while simultaneously misunderstanding the other side, is either total ignorance or intellectual dishonesty! Do you suppose the English language is devoid of meaning for its speakers? If not, how does this happen? How does meaning obtain without infinite regress??? It's just words offered as the components defining other words, right? Is it magic that allows it to avoid descent into infinite regress?
What we're telling you is that even the "babelized" claim to language itself would require determinate meaning in order to make sense. We are all aware of the contextual interpretations of language and we all heard the usual relativism tosh. The fact is you either have relativism and infinite regress into absurdity (epistemological nihilism) or you recognize the necessity of The Absolute. I read that from you as 'the necessity of [the aesthetic appeals I demand] of The Absolute'.
Look, meaning for humans (and derivatively, for machines modeled on the same architecture) is neither inert nor is it "determinate" in any final, absolute and perfectly unambiguous sense. Meaning is practically determinate, "close enough" to achieve agreement and effective communication between humans (and internal dialectics). There are many cases where language becomes ambiguous or confusing, because the determinacy, the precision of the usage, is not sufficient to effectively convey the intended concepts from sender to receiver. This isn't trivially dismissed as sender or receiver (or both) just being stupid, or careless; meaning is an exercise in varying levels of ambiguity. Anyone who's worked with either computer language construction itself, or use of computer languages to implement natural language comprehension understands this with stark clarity: good enough is good enough.
And for many intents and purposes, it is good enough. If it works, it works, as a communication process, and there's no need to postulate "The Absolute" when 'determinate-enough-for-effective-communication" provides all the explanatory capital we need in light of the evidence we have, neurologically, behaviorally and otherwise, no magic thinking needed!
Do you suppose old Jacques concluded that his beloved French could not bear human meaning after all? Should he have abandoned its use after coming to his conclusions? No, because it's an error to cast this in binary-thinking terms: meaning obtains in pragmatic, fuzzy, associative ways. It's not magical or metaphysically "absolute", but neither is it unable to carry and convey meaning. It's just a lot more messy and complicated and "naturally human" than traditional human conceits about their minds and their languages find aesthetically pleasing.
All you do is take the positivist mentality, simply replace 'verificationism' with 'falsificationism', and then proceed to assert in your usual bombastic, yet obscurantist tone that statements are not meaningful because they cannot be said to be either "true" or "false" (I would like to point out the irony of putting the word "true" in quotes, since by implication it undermines its very value). Hey, pause the reflexive cut-and-paste diatribes for a second and read with some care. I never said, and have never believed, that statements cannot be meaningful without being cast as "true or false" propositions. That's preposterous.
What I have said is that as a matter of KNOWLEDGE about the extra-mental world, propositions that ARE cast as "true or false" statements about the world around us are NOT MEANINGFUL AS KNOWLEDGE ABOUT EXTRA-MENTAL REALITY if they do not carry semantics for "true" vs. "false". They are "meaningless as knowledge" if true cannot be distinguished from false. That's just an entailment of what we mean by "knowledge" - the 'truth' requirement of the epistemology.
All manner of other statements can be generated and used that are richly meaningful outside of that constraint; 'true or false' as a proposition about extra-mental reality is not applicable or the basis for meaning of the statement. If I say "all bachelors are unmarried men", that is a statement that is not falsifiable, not a true/false statement about the extra-mental world. It's a definition, an association, a declaration of meaning, and one that invokes subjects ("unmarried", "men") that do have real-world referents for those symbols.
But the statement itself is both meaningful and non-falsifiable. It's not a candidate for the set of propositions (or models) that we would include as knowledge of the extra-mental world.
You hear criticism of statements that do obtain as propositions about our extra-mental reality ("Immaterial intellect exists and is crucial for human thinking.") and make a leap from that, as a criticism of putative claims to knowledge, to a general dismissal of meaning for all statements which are not even putative claims to knowledge. That's a mistake, and it reflects neither what I've said here nor what I believe.
August 18, 2012 9:04 PM August 18, 2012 9:35 PM etc.
My mistake; I should have said, why not quote from Feser's article and then contradict with something relevant with reasons supporting?
For instance, you just go on a rant:
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept.
Why? Because we make computers do it? Who the hell cares?
The important concept here is that nowhere is there any normative, "platonic" or archetypal "triangle" needed.
You mean there's no class written into the pattern recognition software that allows for this recognition to take place? Mon Dieu!
This is all irrelevant. Rank Sophist showed it. The point is that concepts in principle can't arise out of percepts because of their material conditions. Your recourse to "statistical fuzziness" will never get you to a determinate, unambiguous concept or universal that can be applied to a class of objects. It's mere equivocation on the term 'concept' to denote the "fuzzy" perception that allows both chimps and humans to recognize a green circle as associated with some instinct, or something.
You're aesthetically not all tingly about alternatives to your intuitions, ergo it's false
You couldn’t be more wrong. I used to have the same intuitions as you do, but upon realizing how incoherent they were I decided to abandon them. I made reference to this in a previous post.
Well, it's false on stipulation of the primacy of your own paradigm, perhaps, but that's just to beg the question. It has been shown to you in the past by myself and several other users, as well as by Rank Sophist in this discussion. It’s also been shown in a plethora of books that unveil the false pretensions of materialism. I’m not going to go over all the literature with you right now. Even if I did, I doubt that you would be willing to listen. How many individuals on this blog have told you time and time again about this? You don’t listen, you don’t understand the argument, and yet you persist in creating strawmen and providing irrelevant responses as a means to salvage your worldview.
Do you suppose the English language is devoid of meaning for its speakers? If not, how does this happen? How does meaning obtain without infinite regress??? It's just words offered as the components defining other words, right? Is it magic that allows it to avoid descent into infinite regress? Essence. The real question however is, how would meaning obtain through an infinite regress? An infinite regress of “meaning” (using quotes so as not to do the word meaning a disservice) is a contradictio in adjecto.
I read that from you as 'the necessity of [the aesthetic appeals I demand] of The Absolute'. If you want to claim that logic is an aesthetic demand that’s fine with me. ;-)
Look, meaning for humans (and derivatively, for machines modeled on the same architecture) is neither inert nor is it "determinate" in any final, absolute and perfectly unambiguous sense. Meaning is determinate. Errors caused by humans in discovering said meaning do not negate said fact.
Meaning is practically determinate, "close enough" to achieve agreement and effective communication between humans (and internal dialectics)… [shortened for space conservation]… good enough is good enough.
I’m starting to think that we’re speaking a different language here (and I don’t mean coming at it from two different metaphysical positions, but specifically, English vs I-don’t-know-what-language). I already told you that acknowledging diversity and cultural context as well as ambiguity between human beings does nothing to undermine what Feser, myself, anon, Rank and everyone else is saying. For the “babelizing” to make sense you need determination of meaning as a fundamental aspect of reality, teleology.
Your claim now has shifted to pragmatism and what “works”… Do I need to point out the obvious incoherency of assuming that something that “works” can exist in suspended animation? Or the consequent relativism that follows from it? Which takes us back to square one?
And for many intents and purposes, it is good enough. If it works, it works, as a communication process, and there's no need to postulate "The Absolute" when 'determinate-enough-for-effective-communication" provides all the explanatory capital we need in light of the evidence we have, neurologically, behaviorally and otherwise, no magic thinking needed!
More pragmatism. See above. Also, if my memory doesn’t fail me, a few weeks ago Rank or another user took the time to unveil the problems that lie behind pragmatism in more detail than I have.
Do you suppose old Jacques concluded that his beloved French could not bear human meaning after all? Should he have abandoned its use after coming to his conclusions? No, because it's an error to cast this in binary-thinking terms: meaning obtains in pragmatic, fuzzy, associative ways.
An inherent problem with all post-modern deconstructionist efforts is that once they are done, they end up refuting themselves or falling back on pragmatism, as you have once again done here. Or, in the case of those of us who have heard enough such nonsense, we end up simply ignoring them. Your friend Derrida has often been characterized as an obfuscationist and a sophist by his contemporaries, by the way, because of the mess he created. I kind of like him in a way, because he (maybe without realizing it) unveiled the follies of a materialistic worldview.
Meaning does not obtain in a materialistic, non-determinate, reductionistic way, because in such cases meaning does not even exist in the first place but is instead fabricated. It’s an illusion. It’s a phantasm that exists only in the mind of the epistemological nihilist.
For the last time, stop with the strawmen and appeals to the word “magic” every time you speak about anything opposing your view. People are already having a hard time taking you seriously and you’re making yourself sound like the anti-intellectualist “new atheist” types.
Wait .... Do we actually have people that think that Touchstone is a serious guy ???? or girl ???
I mean people, come on, Touch is here to curse you people to high heaven; that model of his behavior/motivations seems to work rather nicely according to the data so far!!!
A new book is about to hit the shelves this fall and my guess is that it's going to create a war. It's called:
Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False
by Thomas Nagel (atheist)
Kind of telling how even a self-ascribed wishful atheist is now abandoning materialism, reductionism and whatever other absurdity that goes along with it, no?
I can't wait to read it and more importantly sit back and enjoy the show of the polemics that will commence.
This is all irrelevant. Rank Sophist showed it. The point is that concepts in principle can't arise out of percepts because of their material conditions. My objection is that no principle for your "in principle" is provided. When we say "we cannot see stars shining beyond our event horizon, and this is impossible in principle", we can appeal to the physics principles which both obtain as effective in practice/observation, and which MATHEMATICALLY preclude perception of light from sources beyond a certain distance in space-time from us, due to the constraint of the speed of light.
That's a principled use of "in principle". But your use... I can't see any underlying principle. What is it? I understand the intuition and sense of incredulity, but please don't confuse that with a principle that you'd invoke as an epistemic constraint.
Can you articulate the principle your "in principle" refers to?
Many times, in discussion with creationists on abiogenesis, I hear that formation of replicating cells or organisms by natural processes is "impossible in principle". But when I ask for the principle, the response is the principle of incredulity, an argument from ignorance: "I just don't see any way that could happen". I don't doubt that's their earnest position and view of things, but that's a disingenuous use of "in principle" when that happens. Your recourse to "statistical fuzziness" will never get you to a determinate, unambiguous concept or universal that can be applied to a class of objects. It's mere equivocation on the term 'concept' to denote the "fuzzy" perception that allows both chimps and humans to recognize a green circle as associated with some instinct, or something. It's not an equivocation; it's the rejection of immaterialist conjectures as meaningful distinctions against associative patterns. I'm not trading between the two senses, unaware or without the required univocity. I'm saying the immaterialist definition of 'meaning' isn't itself meaningful as a matter of examination of how humans, animals, or machines-programmed-to-work-like-rudimentary-brains actually function. Again I understand the intuition -- "but, but, it's not the same, it just doesn't seem to be the same!" -- and share it even. But I can't locate the basis for that belief anywhere outside of our intuition, an intuition we know the human mind is given to: just such conceits about its own uniqueness, and the "immaterial intellect" it conjectures to be at work, in the absence of any ability to feel or directly sense what is actually going on in a brain with no internal innervation (until science established a beachhead on the subject).
Meaning is determinate. Errors by humans in discovering said meaning do not negate that fact.
Ipse dixit. If that's how you roll, then:
Meaning is associative, relational, fuzzy. Metaphysical intuitions about Cosmic Absolute Meaning cannot negate this fact.
QED, huh?
It's not just my reverse hand-waving in response to your hand-waving. I can point you to the physical structure and the electro-chemical activity of humans engaging in "processing of meaning": creating meaning, determining meaning, deploying meaning, associatively, relationally, and fuzzily. You can watch this activity happen on an fMRI.
For example, here's the abstract of an article from the 2002 Annual Review of Neuroscience by Susan Bookheimer, "Functional MRI of Language: New Approaches to Understanding the Cortical Organization of Semantic Processing":
Until recently, our understanding of how language is organized in the brain depended on analysis of behavioral deficits in patients with fortuitously placed lesions. The availability of functional magnetic resonance imaging (fMRI) for in vivo analysis of the normal brain has revolutionized the study of language. This review discusses three lines of fMRI research into how the semantic system is organized in the adult brain. These are (a) the role of the left inferior frontal lobe in semantic processing and dissociations from other frontal lobe language functions, (b) the organization of categories of objects and concepts in the temporal lobe, and (c) the role of the right hemisphere in comprehending contextual and figurative meaning. Together, these lines of research broaden our understanding of how the brain stores, retrieves, and makes sense of semantic information, and they challenge some commonly held notions of functional modularity in the language system.
The point of providing the abstract is twofold: 1) to show that there is more behind these beliefs than dogma ("Meaning is determinate"!), and 2) to show that we have instrumentation that provides observation of these associations and connections *at work* -- in vivo.
What does "Meaning is determinate" stand on, beyond a dogmatic pronouncement of the claim? I'm distinguishing here between 'associative/relational/fuzzy' and 'determinate' as absolute/perfectly unambiguous. Associative meaning is sufficiently determinate for effective human communication, so it's 'determinate' as a practical matter, just not 'Cosmically Determinate' as you suppose.
Is anyone going to explain the difference between extrinsic and intrinsic meaning to TS? Anyone? He seems to need an EZ mode introduction to this topic.
We can see the object Brain acting as it perceives something.
Meaning is defined as a process in the brain
Meaning arises in the brain, because, dur... I have just defined it as something that happens in the brain alone.
Therefore there is no need to add anything like forms or essences to our model, because meaning emerges in the brain. (Although arguments of necessity don't work to infer the existence or non-existence of something...) ------------------------------------------
It feels like you are playing around with words, trying to define things the way you need through the method you like. I don't know, Touch; this doesn't seem to be any good as an argument. Are you interpreting this in some different way, Mr Bombastic, or are we really just defining meaning any way we wish and calling it a day?
I'm starting to think that we're talking a different language here (and I don't mean coming at it from two different metaphysical positions, but specifically English vs. I-don't-know-what-language). I already told you that acknowledging diversity and cultural context, as well as ambiguity between human beings, does nothing to undermine what Feser, myself, anon, Rank and everyone else are saying. For the "babelizing" to make sense, you need determination of meaning as a fundamental aspect of reality: teleology.
But that isn't the source of the core problem with indeterminacy on natural models of cognition. Dr. Feser says in his post at BioLogos:
But doesn’t neuroscience show that there is a tight correlation between our thoughts and brain activity? It does indeed. So what? If you smudge the ink you’ve used to write out a sentence or muffle the sounds you make when you speak it, it may be difficult or impossible for the reader or listener to grasp its meaning. It does not follow that the meaning is reducible to the physical or chemical properties of the sentence. Similarly, the fact that brain damage will seriously impair a person’s capacity for thought does not entail that his thoughts are entirely explicable in terms of brain activity.
That's not what a natural model of cognition understands to be the fundamental challenge of determinacy; those are problems, but superficial ones, logistical challenges compared to the problem that obtains in the architecture of the brain itself. The brain is a hugely scaled mesh of associative neurons, and its basic mechanism for determining meaning, for identifying activated associations, is FUNDAMENTALLY FUZZY, as a matter of neurophysiology.
This has nothing to do with "smudging the ink" -- that kind of wave off is all noise, either evasive or ignorant of this aspect of cognition. Meaning obtains, from all the evidence we can gather and analyze, as an associative mesh that *physically* does not admit of the kind of perfect unambiguity and precise univocity intuited by many.
So, protesting that you do indeed understand the hazards of 'cultural context' on this point just confirms you are not grasping the problem: the problem that obtains in the architecture of the brain itself, which is effective in creating and deploying meaning via those associative networks -- good enough, excellent enough, for effective communication -- but architecturally incompatible with your intuition about "The Absolute" as an aspect of meaning as used by humans.
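The graded, "fuzzy" activation being described can be made concrete with a toy sketch (illustrative Python only; the vectors and concept names are invented, and nothing here models real neurophysiology):

```python
# Toy illustration of "fuzzy", graded associative activation: a probe never
# matches a stored association exactly-or-not-at-all; every memory responds
# to some degree. (A sketch only; vectors and names are invented, and this
# is not a model of real neurons.)
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (graded, in [-1, 1])."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical stored "associations": feature vectors for two concepts.
memory = {
    "green-circle": [1.0, 0.0, 1.0, 0.0],
    "red-square":   [0.0, 1.0, 0.0, 1.0],
}

def activate(probe):
    """Return graded activation of every stored association, strongest first."""
    scores = {name: cosine(vec, probe) for name, vec in memory.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A mostly-green, mostly-circular probe activates BOTH memories, one much
# more strongly than the other; there is no crisp boundary, only degrees.
ranked = activate([0.9, 0.2, 0.8, 0.1])
```

Nothing in the sketch decides where "green-circle" ends and "red-square" begins; downstream processing only ever sees graded scores, which is the point at issue.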
Touchstone, I don't think that part is referring to human cognition. Did Feser actually claim that neuroscience talks about indeterminacy in that way?
I mean, it looks like he is arguing against ideas of the form "A correlates with B, therefore B is made of A or is reducible to A". Or vice versa.
I'll get to your objections in a bit, Touchstone. However, for now, I should point out that there was no "outside" for Derrida, because any perception (even a "percept") was reduced to "language" as soon as--even before, since language shapes the percept--it occurred. It's what happens in your system, too. Do you know what that means? It means that there is no such thing as truth, meaning or objectivity. Know what that means? It means that science is impossible. It means that there is no such thing as logic, no such thing as "falsification", no such thing as research--all of these things are destroyed. Not even percepts are safe, Touchstone, if they are reducible to lines of signs.
the basic mechanism for determining meaning, or identifying activated associations is FUNDAMENTALLY FUZZY, as a matter of neurophysiology.
This sentence obtains only if you assume reductionistic materialism. Chemicals in the brain (hence neurophysiology) do not determine meaning.
an associative mesh that *physically* does not admit of the kind of perfect unambiguity and precise univocity intuited by many
Again, without the assumption of materialistic reductionism this is simply vacuous.
the problem that obtains in the architecture of the brain itself, as effective in creating and deploying meaning via those associative networks -- good enough, excellent enough for effective communication -- but architecturally incompatible with your intuition about "The Absolute" as an aspect of meaning as used by humans.
It is you who doesn't understand what you are being told, and you are now purposefully ignoring what I said, simply responding in a circular manner without even addressing my devastating critique of your worldview.
The brain does not "create" meaning. Given materialism the best you can claim is that it fabricates illusions of meaning. Once again, meaning in your worldview is a phantasm. In fact, everything you said up to this point, given materialism, is mindless babble!
If the brain as part of a reductionistic-cum-materialistic worldview cannot discover meaning then that serves as a refutation of your worldview. The intellect is thus necessary for apprehension and use of meaning in reality. Your entire argument in fact, is a self-defeating argument against your own view.
It seems that all you did was explicate the reductio ad absurdum that lies behind your beliefs. I recognized it; now it's time for you to recognize it as well.
Also, I should note that this same problem is why analytic philosophers laugh at Derrida. His system refutes the logic that he used to create his system--it's worthless. Therefore, the claim that mind-brain activity can be reduced to signs and associations is likewise self-refuting. You've wrecked the very enterprise that allowed you to reach that conclusion.
@Anon, This sentence obtains only if you assume reductionistic materialism.

No. A naturalist model of meaning as an emergent property of the brain requires philosophical materialism no more than gravity as a natural process does. "Naturalist meaning" does not produce a contradiction with supernature, or with the idea of a God, personal in nature or otherwise. It would negate "immaterial intellect" insofar as that is synonymous with the machinery for concept formation, meaning, abstraction and (meta-)representation, but it only obviates what it naturalizes, there. Everything else can be as supernaturalist and immaterialist as you like. A supernatural god may have designed the universe such that humans, or some kind of sentient creature, would evolve, developing natural faculties for recognition, comprehension, concept formation and semantics/meaning. That does not (cannot) require assuming materialism, and no logical contradictions obtain.
This recurring charge that on materialism "meaning is meaningless" and language is somehow vacuous is just an exercise in equivocation: clinging to supernaturalist/dualist concepts of "meaning" and "understanding" and forcing them into a materialist model, where they are, indeed, meaningless, divide-by-zero operations. On materialism, the materialist concept of meaning would obtain, and neural associations in the brain are (or may be, depending on what the science shows) how meaning is reified in human, and other, minds.
It's silly to complain that "OMG, on reductionist materialism nothing means anything!". That's nothing more than denialism about what that materialism would entail: namely, that meaning is a natural, physical phenomenon, and supernaturalist intuitions about meaning ARE NOT RELEVANT in that case. "My definition of 'meaning' and how it obtains must change on materialism" is NOT a case against the meaningfulness of semantic structures in a materialist paradigm. It's just a reflection of the inapplicability of the ones you are wedded to in your current paradigm.
This is very much like misunderstanding the concept of 'motion' from people who understand the colloquial and physics sense of the term, but are not aware of the potentiality->actuality sense deployed for the term in A-T. One cannot force alien concepts on a different paradigm, but must address the concepts in the frames in which they are constructed WITHIN that paradigm.
The charge of 'vacuousness if materialism' commits this error, and judges a completely different set of semantics and constructs for 'meaning' by its own parochial notions from a framework external to it.
Chemicals in the brain (hence neurophysiology) not determine meaning.

Well, you have ipse dixit down, and in its reiterative form, too.
Since that doesn't mention "intelligence" or "design", it wouldn't seem to qualify as an ID argument. Where are you getting that from?

God as the intelligent designer; sorry, I thought that would be quite obvious as the connection. Ask Dembski who he thinks designed biological life. Ask Behe. Ask Fuller. Ask Paul Nelson. Ask Philip Johnson.
And on and on.
1. We are ignorant of chemical pathways for abiogenesis on natural, impersonal processes.
2. Therefore, this is not possible in principle (why? because we can't think how it might happen!).
3. Therefore organic life must be the product of an Intelligent Designer.
4. Even if aliens seeded life here on earth, the design of THOSE aliens, if they are not supernatural creators, requires an Intelligent Designer, on grounds of (1, 2): abiogenesis is not possible in principle.
5. This Intelligent Designer, capable of creating life where nature itself was incapable, is therefore a Supernatural Intelligent Designer. This we call "God".
You see man, the problem is that your concept of meaning isn't the one I like... I mean, wait... it's not the ONE I CAN APPLY! I won't show that to be true, because it's too damn hard; that's why I always dodge anyone who asks me to demonstrate something... always U_U!
Well, I guess we can summarize this whole thing as: "I would love to discuss this with you, as long as we start from the idea that I am right... always."
Although this Evo-Cretio talk is really messy, what Touchstone said is half correct. Yeah, they all believe that G*d designed life; I think that is pretty clear.
The argument... well, I have never seen them make it, but I have seen the EVO side say that they make those arguments.
But the best way to prove it is just to go to the Discovery Institute site and get the quotes from there. I think that settles whether these people actually made the arguments as presented.
Although this Evo-Cretio talk is really messy, what Touchstone said is half correct. Yeah, they all believe that G*d designed life; I think that is pretty clear.
No ID argument given by the main ID proponents concludes to God's existence. None.
At best, they infer intelligence based on demonstrable capabilities of known intelligent agents.
The argument may be flawed, but Touchstone misrepresents it badly.
My bet is that he is lying... judging by how the discussion is going, I wouldn't be surprised.
Oh, needless to say, I was also an asshole with half-assed ideas about the whole thing, so I speak from experience. Seriously, most of the people in these talks know nothing of the other side... and I was sort of like that... yeah, shame on me, u_u, I know.
model of meaning as an emergent property of the brain
Oh, so now you're appealing to irreducible emergence and trying to hide behind that notion? I already explained this to you in another discussion, but as usual you're not listening. Emergence is just a sideways manner of appealing to dispositions (teleology) and latent aspects of reality (forms), which are actualized. Furthermore, emergent properties represent a discontinuity in mereology, since they are irreducible. Well, well... Doesn't that sound quite like what we've been telling you? Of course it does. What you fail to understand is that reductionism is your only real option here. Appealing to emergence is trying to play the game by our rules. Unfortunately, you just lost (again).
The fact that you think that intellect can be replaced with machinery is precisely the core of your confusion and problem. We're wasting time trying to explain this to you, it seems.
When I referenced contradiction, I spoke of contradictio in adjecto... a contradiction in itself... as in, materialistic meaning is a contradictio in adjecto.
A supernatural God may have designed the universe such that humans, or some kind of sentient creature will evolve, developing natural faculties for recognition, comprehension, concept formation and semantics/meaning.
This is not about God, but about coherence vs. absurdity. I am well aware of physicalist Theists (e.g. Baker) as well as mereological Theists (Van Inwagen). You're being irrelevant again.
On materialism, the materialist concept of meaning would obtain, and neural associations in the brain is (or may be, depending on what the science shows) how meaning is reified in human, and other minds.
The materialistic concept of "meaning" is meaningless. Contradictio in adjecto. An illusion. What you're saying here is mere empty verbiage appealing to "science" (emphasis on the quotes).
I will explain it to you one last time despite your constant and dishonest attempt to ignore what we've been telling you. If materialism holds, then there is no meaning in the world, period. That is true by definition. So whatever construct you'll create as your paradigm, using whatever brain process, whatever science (or "science"), whatever bombastic super-duper nerd talk you conjure will never be able to obtain meaning because reality in its totality is devoid of it!
You can believe that you have found meaning in a materialistic world, but it would be no different than believing in Santa Claus. Neither exists, and believing in either is a delusion. I simplified it as much as I could for you. Please try to understand.
meaning was a natural phenomenon
Here you are either admitting that nature is ridden with teleology or again committing contradictio in adjecto. If the former, welcome to Aristotelianism; if the latter, it's as if you said nothing at all (again).
the meaningfulness of semantic structures in a materialist paradigm
If those structures are meant to describe how reality is then your materialism is refuted. If those are mere constructs of the materialist's imagination then you're deluding yourself again.
different set of semantics
If we all operate on different semantics, then I suppose that we are all enslaved in our little minds, incapable of communicating at all. While this might somewhat describe what is going on when we try to explain things to you, the rest of us don't believe that each one of us operates under different semantics, let alone semantics (as per materialism) that are mere illusions with no relation to the reality we inhabit.
I said the image is "determinate, unambiguous content, as a percept", and the point of contradicting Feser (and perhaps you here) is that we don't have information just at the post-interpretive layer, but "raw input" at the pre-interpretive layer.
Unfortunately, this move does not work. Percepts are representations of something else, which means that they are reduced from that "something else" to the language of the percept. The "something else"--the exterior material processed by the percept--necessarily cannot exist unreduced in the brain. (If it did, then the brain would become it.) It has to be transformed into code first. However, this means that the code ("language") pre-exists the percept, and so determines it. As a result, there can be no such thing as "determinate content" even on the level of percept, because even percepts are interpretations.
In simple terms: if our brains work like code, then the code pre-exists pre-conscious "raw material". If this is the case, then all "raw material" is reduced to the language of the pre-existent code, which means that even "percepts" are representations of something else. All representation is interpretation. Therefore, all percepts are interpretations. If we then deny the existence of intentionality and immaterial intellect, we are left with a system that is manifestly self-refuting, just like Derrida's.
This is just to say that there are objective features of these images that have statistical feature affinities with each other, but which are not (yet) attached to any linguistic concepts.
Percepts would have to occur through a pre-linguistic language, as I said above. If we hold that minds are totally material, we are left with the ridiculous position that there are no such things as objective features, even on the level of percept.
When my computer program, which interprets visual input for the purpose of identifying English letters and numbers, gets a new item to process, it's a "brute image": it's just a grid of pixels (quantized data, so that the computer can address it for interpretation).
But the "brute image" is processed as code, and the code itself must have either determinate or indeterminate content. It's obviously the case that the code is determinate, because we gave the symbols determinate content. If the content was indeterminate, then we'd be left with Derrida's paradox, and the machine would be incapable of taking in "objective" percepts in the first place.
In a human, or a chimp, the optic nerves terminate in the brain (at the LGNs in the thalamus) and provide raw visual stimuli to the neural net, whereupon all sorts of integrative and associative interpreting begins across the neural mesh.
If computer code is determinate because we gave it determinate content, then the code of our minds would have to be determinate as well. Otherwise, even our percepts would be completely non-objective and indeterminate, since that would be the nature of the "code" in which they are processed. In other words, our "brain code" would have to contain intentionality, which is exactly what your computationalism is trying to explain away.
That is nothing more than to note that what we call "triangle-ish" images and "square-ish" images are not so called by caprice; the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
Every code reduces perception (or "percept") to representation. Therefore, the code itself must contain "interpretation" and "labeling"--otherwise, the representation is indeterminate as well.
Which is just to say that Wittgenstein, bless his heart, is talking out his behind here, from a position of thorough ignorance of what is going on in his own brain, physically. He can take some comfort in the fact that he was hardly better equipped by science to speak on the matter than was Aquinas, but when we read something like that NOW, it's just obsolete as a context for thinking about this subject. The brain's "pictures" DO come with motion cues, prior to any interpretation, for direction and velocity. This is how sight works, before the visual cortex even gets hold of it. The "sliding down" vs. "walking up" interpretations are NOT on an even footing, and they BEGIN that way for the brain, as our visual sense machinery is constantly streaming in motion cues (and other cues) along with "image" data.
Versions of imagism based on moving images have been refuted by Wittgenstein's followers. They suffer from the same innate problems.
Not to mention that, again, either the "code" (the format for pre-conscious representation) is determinate or it is not determinate. If it is determinate, then it has irreducible intentionality. If it is indeterminate, then the percept is not determinate either.
Rather, the system is programmed to "recognize", or more precisely, to build associations and to maintain and refine them through continuous feedback. So that means it can and will group "triangle-ish" images and "square-ish" images (if its mode of recognition is visual/image-based) without it being told to look for 'triangle' or told what a 'triangle' is.
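That kind of label-free grouping can be sketched in a few lines (an illustrative k-means on raw pixels, not the adaptive neural-net architecture being described; the tiny images are invented):

```python
# Toy sketch of unsupervised grouping: no label "triangle" or "square" is
# ever given; images cluster purely by statistical affinity of their pixels.
# (Illustrative k-means, not the neural-net architecture described above.)

def dist(a, b):
    """Squared Euclidean distance between two flattened images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=10):
    """Cluster points into 2 groups; return a list of cluster indices."""
    c0, c1 = list(points[0]), list(points[-1])  # crude initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist(p, c0) <= dist(p, c1) else 1 for p in points]
        for idx, c in ((0, c0), (1, c1)):
            members = [p for p, l in zip(points, labels) if l == idx]
            if members:  # move each centroid to the mean of its members
                for j in range(len(c)):
                    c[j] = sum(m[j] for m in members) / len(members)
    return labels

# 4x4 binary images, flattened row by row.
square    = [1]*16                                   # solid block
square2   = [1]*15 + [0]                             # noisy block
triangle  = [1,0,0,0, 1,1,0,0, 1,1,1,0, 1,1,1,1]     # staircase
triangle2 = [1,0,0,0, 1,1,0,0, 1,1,1,0, 1,1,1,0]     # noisy staircase

labels = kmeans2([square, triangle, square2, triangle2])
```

The routine is never told what a "triangle" is; the square-ish inputs end up in one cluster and the triangle-ish ones in the other purely because of their pixel-level affinities.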
I know. I'm not an expert in programming, but I know how computers operate. Again, the problem arises as soon as you introduce the term "programmed". Programming is code, and the code must have determinate content. In the case of computers, this is obviously the case: we put it there. In the case of the human brain, there is nothing to give the code itself determinate content.
But hold on, you say -- it only does that because we programmed it to recognize and categorize generally. Yes, of course, but so what? We are programmed by our environment to recognize and categorize.
This, of course, does not work. Feser has attacked similar bizarro reasoning in the past. You're merely engaging in the homunculus fallacy. Here's Feser's post against Coyne's similar inanities: http://edwardfeser.blogspot.com/2011/05/coyne-on-intentionality.html
If we're going to interpret the environment, then both our "code" and the environment itself must have irreducible intentionality. If neither has intentionality--and thereby determinate content--, then all content is indeterminate and there are no such things as "objectivity", "accuracy", "science" and the like.
No, because if you are coding for similarity as the basis for grouping, "lower-level coding" won't help group. Grouping is a function of a similarity test.
The similarity test is a function of the determinate content of the code itself.
Humans are programmed by the environment to do similarity testing
Then the environment must have determinate content, and therefore intentionality, or it could not give it to humans. In turn, humans could not give it to machines.
For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
It does. Intentionality is beyond the material--as you well know--and no image of a triangle has determinate content from its material components. This applies even if you deconstruct the triangle into a series of code-associations: the code-associations themselves would need to have intentionality.
Humans are good pattern recognizers, and I see this pattern a lot:
A: We need an immaterial intellect for conceptualization and understanding. B: No we don't. Look at this program... A: Well, that just proves something is needed to program us or the machine for conceptualization or understanding. B: But the question was about the need for an immaterial intellect to *perform* the task of conceptualization and understanding! A: You still need a Cosmic Designer to have that mechanism come to be.
Actually, this would be more accurate.
A: For there to be determinate content, irreducible intentionality must be posited. B: No it doesn't. Look at this program... A: Well, that just proves that the code was given determinate content by us. B: Uh... uhhh... homunculi!
"B" is basically a representation of Dennett.
Dr. Feser is saying an immaterial intellect is needed to perform the local act of interpretive meaning.
Dr. Feser is saying that, without forms (universals), there could not be determinate content. Nothing about the physical make-up of an image gives it determinate content. All computationalist attacks against this idea--"percepts" and whatnot--invariably presupposes intentionality, because the "code" must itself have determinate content. The only thing that can abstract this determinate content in its non-visual, non-representational essence is an immaterial mind. Anything less is merely a visual impression, reduced to code.
So it's not regressive, and doesn't fall into a vicious cycle demanding ever more "fundamental" bases for meaning. It's a peer graph, and a huge one, too, in the case of English.
I know it isn't a vicious regress. Derrida wasn't that terrible of a philosopher. It's still a self-refuting position, though.
Without the outside, there are no referents for any symbols we might establish.
Exactly, Touchstone. That's what Derrida tells us. Because your system involves the reduction of the "outside" to code form, you're stuck in Derrida's very same self-refuting system.
Yes, but "appealing to another" decreases the ambiguity, and establishes meaning!
No, it doesn't.
It doesn't make language empty of meaning, for Derrida or anyone else.
Unfortunately, this move does not work. Percepts are representations of something else, which means that they are reduced from that "something else" to the language of the percept. The "something else"--the exterior material processed by the percept--necessarily cannot exist unreduced in the brain. (If it did, then the brain would become it.) It has to be transformed into code first. However, this means that the code ("language") pre-exists the percept, and so determines it. As a result, there can be no such thing as "determinate content" even on the level of percept, because even percepts are interpretations.
This equivocates on "language". If you suppose that signal encoding is language, in an unequivocal way, then any sighted organism is a linguistic being. For the brain (or the computer program processing visual input from a camera of some type), the external light patterns are encoded, but this is not semantic language, to use a term that should help avoid equivocation. For the brain (or the computer program) this is as "raw" as "raw" gets. This is the starting point of the chain: the encoding of photon patterns to electrical signals that precedes interpretation against concepts and symbols.
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine; that in no way precludes it from being determinate content. It is what it is. For example, if you were to look at a pixellated input image that might be processed by an OCR program, those pixels are content: they are information-bearing configurations of matter (it takes some Kolmogorov complexity to describe the signal in a lossless way, for example). They just are not abstractions at the level of linguistic symbols or semantically rich concepts. The information is what it is as content, prior to any interpretation by the brain (program).
We could, alternatively, just shift the input frame back and refer to the photon inputs for our eyes as the "raw input". That sets aside your concerns about the "interpreted-ness" of any encoding process, by eyes or machine-with-camera, of those photon actions to electronic signals. The same point obtains: the input is content, determinate as is-what-it-is. What a human or program may do with it can go down many paths, depending on the processing features in place. But at the head of the chain, we begin with "brute content" that is unprocessed conceptually or linguistically.
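The Kolmogorov point can be given a crude operational gloss. Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives an upper bound on description length, so two uninterpreted pixel buffers can be shown to carry measurably different content before any concepts are attached (a toy illustration with made-up images):

```python
# A pixel grid carries measurable information prior to any conceptual
# interpretation. Compressed size is a standard, computable upper bound on
# Kolmogorov complexity. (An illustration, not a proof; images are invented.)
import random
import zlib

random.seed(0)  # deterministic "noise" image

# Two 32x32 8-bit "images" as raw bytes: no concepts, no labels attached.
structured = bytes(255 if (i // 32) % 2 == 0 else 0 for i in range(1024))  # stripes
noise      = bytes(random.getrandbits(8) for _ in range(1024))             # static

# Lossless description lengths: the striped image admits a far shorter
# description than the noise image. Different determinate content,
# measurably different, with no interpretation in sight.
len_structured = len(zlib.compress(structured))
len_noise      = len(zlib.compress(noise))
```

The comparison works on the buffers as brute byte configurations; nothing about "stripes" or "static" as concepts enters into it.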
In simple terms: if our brains work like code, then the code pre-exists pre-conscious "raw material".

That may be the case, but I see no reason to think this *must* be the case. Code (neural network connections that map and remap adaptively based on feedback loops) is thought to be an adaptation of evolution, an emergent feature of animal cognition. Do you have a means of demonstrating this can't be the case?
If this is the case, then all "raw material" is reduced to the language of the pre-existent code, which means that even "percepts" are representations of something else. All representation is interpretation. Therefore, all percepts are interpretations. If we then deny the existence of intentionality and immaterial intellect, we are left with a system that is manifestly self-refuting, just like Derrida's.

Well, back to the photons bouncing around and coming into our eyes (or the camera attached to our processing program): for a given photon P inbound to your eye, what is it a representation of, in your view?
There's nothing self-refuting about properties that emerge from certain configurations of matter and particular interactions, any more than we suppose that the "wetness" of water is self-refuting because neither of the elements that make up water (2 H + 1 O) is "wet" like water is. Where did the "wetness of water" come from??? It's a product of the combination of those elements, a feature synthesized from them.
On your view, water cannot be wet, because such synthesis cannot obtain: hydrogen and oxygen aren't wet on their own. The semantic capabilities of brains are, likewise, a feature synthesized from the configuration and interactions of the brain's constituent parts: "wetness" as meaning-processing, a faculty that is supervenient on brains.
This equivocates on "language". If you suppose that signal encoding is language, in an unequivocal way, then any sighted organism is a linguistic being.
It doesn't equivocate. Any series of signs is a language of sorts--a semiotic structure. If the brain works by signs and associations, then it works through a kind of pre-language.
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine -- that in no way precludes it from being determinate content.
Thank you for admitting it. This means that your system is self-refuting.
We could, alternatively, just shift the input frame back and refer to the photon inputs for our eyes as the "raw input".
There are no such things as photon inputs under your system, Touchstone. There are merely things that we refer to as "photon inputs" after they have been reduced to our indeterminate pre-linguistic brain code. We can never know them in themselves. And, because our brain code only obtains "meaning" by association with other symbols--and never by anything "beyond the text"--, we're stuck with the destruction of all science, knowledge, truth, objectivity and so on.
The same point obtains -- the input is content, determinate as is-what-it-is.
Aside from the fact that this move is incoherent within your system, you have merely committed the homunculus fallacy: you relocated determinate content to nature, which means that you've relocated intentionality to nature. Again, though, your system is already in ruins.
Percepts would have to occur through a pre-linguistic language, as I said above. If we hold that minds are totally material, we are left with the ridiculous position that there are no such things as objective features, even on the level of percept. That doesn't follow. How do you suppose a material mind entails the absence/impossibility of objective features?
Think about a program that categorizes shapes based on the input from a camera. On an adaptive neural network architecture, with back propagation, in unsupervised mode, it will begin to "learn" the features of the images it sees. It will find statistical affinities between the pixel configurations of the inputs it processes and develop clusters of associations, associations that will activate, stronger or weaker, based on the processing of the next inbound image to be processed. This is a material system that learns features, and can distinguish them. It is also contingent on the objective features of the input it processes. There is no caprice or will or emotion or subjective bias to interfere. It's just code cycling mechanically in the machine, deterministically.
On that description, do you reject that the system learns the features of the images presented (that is, it can distinguish them predictably, and more precisely and accurately as the model processes more and more input)? What part of those features do you reject as non-objective, and what would the "objective features" be in that case, if any obtain under some other view?
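The sort of unsupervised feature-learning described above can be shown in miniature. This is only a toy sketch, not the adaptive backpropagation network described (plain 2-means clustering stands in for it), and the 4-pixel "images" and pattern names are invented for the example:

```python
import random

random.seed(0)

# Two toy "shape" patterns as 4-pixel intensity vectors -- hypothetical
# stand-ins for camera input (real image features would be far richer).
SQUARE   = [1.0, 1.0, 1.0, 1.0]
TRIANGLE = [1.0, 0.0, 1.0, 0.0]

def noisy(pattern, eps=0.1):
    """One noisy 'image' of a pattern."""
    return [p + random.uniform(-eps, eps) for p in pattern]

images = [noisy(SQUARE) for _ in range(20)] + [noisy(TRIANGLE) for _ in range(20)]
random.shuffle(images)

def dist(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster(points, iters=10):
    """Unsupervised 2-means: finds the two statistical clusters with no
    labels and no 'understanding' of what a square or triangle is."""
    c0 = points[0]
    c1 = max(points, key=lambda p: dist(p, c0))  # farthest point seeds cluster 2
    centers = [c0, c1]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            nearer = 0 if dist(p, centers[0]) <= dist(p, centers[1]) else 1
            groups[nearer].append(p)
        # seeding with a far-apart pair keeps both groups non-empty here
        centers = [[sum(col) / len(g) for col in zip(*g)] for g in groups]
    return centers

centers = cluster(images)
```

The centers it converges on sit near the two underlying patterns: the distinction falls out of the statistical affinities of the input alone, with no labels, will, or subjective bias anywhere in the loop.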
@rank sophist But the "brute image" is processed as code, and the code itself must have either determinate or indeterminate content. No, that's not true, any more than every question is a hard "yes" or "no". As the code processes images on input, it will fire on different associations to different degrees. Some association humans would call "triangle-ish" may fire to 40% of its potential, which, translated into human terms, would mean "this looks sorta somewhat triangle-ish". That *same* input image may fire on associations humans would call "square-ish" to 60% of its potential, meaning (put in human-friendly terms) "this looks pretty much like a square".
The code is deterministIC, but the determinatION of the abstract content of a given image is fuzzy, an array of mixed signals (and in practice it's not just two perceptrons that activate, but can be very many). It's "60% square", "40% triangle": a conflicted answer, but with a small bias toward 'square' (er, what humans would call a square -- the program isn't labeling these like humans do). But the results could be "30% square", "30% triangle", "30% star". So what's the verdict, then? Without further feedback into the system that favors one of these (or some other association), there is no clear determination.
It's crucial to understand, then, that {determinate|indeterminate} are not the available options for this network. There is always some ambiguity in the system, but even so, there is not total ambiguity and parity between all associations. To think of it in hard {determinate|indeterminate} terms is to misunderstand how associations work in the brain (or our program). This should be no more difficult than understanding the applications of fuzzy logic systems in the real world. Fuzzy logic replaces a {True|False} pair with a potential ranging from 0 to 1.0 (for example) on propositions. So a "0.6" is more "true" than "false", but less true than "0.9". If you ask whether proposition X is "true" in a binary sense, all you can do is deprecate your available information into a "rounding up" or "rounding down" result for 0 or 1, false or true. But this loses information available in the system, information that can and does provide the basis for better performance in the system, because it can represent more accurately the partial indeterminacy in the system.
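The point about rounding can be made concrete. In this toy sketch the activation values and association labels are hypothetical, echoing the figures used above; a hard threshold throws away exactly the margin information that a graded reading keeps:

```python
def binary_verdict(acts, threshold=0.5):
    """Collapse each graded activation to a hard True/False."""
    return {name: a >= threshold for name, a in acts.items()}

def best_guess(acts):
    """Keep the graded information: the strongest association, plus its
    margin over the runner-up (how decisive the 'verdict' actually is)."""
    ranked = sorted(acts.items(), key=lambda kv: kv[1], reverse=True)
    (top, a1), (_, a2) = ranked[0], ranked[1]
    return top, round(a1 - a2, 2)

# "60% square, 40% triangle": a soft bias toward square, margin preserved.
print(best_guess({"square": 0.6, "triangle": 0.4}))   # ('square', 0.2)

# The three-way case from the text: a binary threshold rounds everything
# down to False, destroying the partial information the system does have.
tie = {"square": 0.30, "triangle": 0.30, "star": 0.30}
print(binary_verdict(tie))   # every association comes out False
```

The binary scheme answers "neither square nor triangle nor star"; the graded reading still knows there are three weak, tied candidates awaiting further feedback.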
It's obviously the case that the code is determinate, because we gave the symbols determinate content. That's a mistake about computing. The code can be fully deterministic in terms of what opcodes and instructions it executes (and for nearly all programs, self-mutating code instances aside, this is the case), but the execution of the instructions can lead to indeterminate results or states of the system. Calling a library routine that returns (pseudo) random values, for example, can put the program in an indeterminate state (and input from a camera can do the same thing). And more broadly, see the famous Halting Problem in computing.
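A few lines are enough to separate deterministic *code* from a determined *state*. The `classify` function and the sensor stand-in below are invented for illustration; the unseeded pseudo-random source plays the role of the camera feed mentioned above:

```python
import random

def classify(reading):
    """Fully deterministic: the same reading always yields the same label."""
    return "bright" if reading >= 0.5 else "dark"

# The instruction stream below never varies, but the state the program
# lands in depends on input the code does not control: an unseeded
# (pseudo)random source standing in for a live camera feed.
sensor = random.Random()      # seeded from the OS: differs run to run
reading = sensor.random()
state = classify(reading)
print(state)                  # "bright" on some runs, "dark" on others
```

Every instruction executed here is determinate; which state the program ends up in is not determined by the code alone.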
If what you say is true, the Halting Problem is no longer a problem, and congratulations, you are now world famous!
If the content was indeterminate, then we'd be left with Derrida's paradox, and the machine would be incapable of taking in "objective" percepts in the first place. No, that is only a problem if you assume everything must be 100% determined, or 100% ambiguous. This is a classic example of the hazards of binary thinking. The content is "indeterminate-in-the-sense-of-certainty", but it is determinate to some degree of potential. Importantly, as more and more input is processed, the objective features of what has been seen and processed already can be brought to bear on new input, and the determinacy of both -- the repository of stored associations and connections, and the associations and connections assigned to the new input image -- can be improved (or degraded, as it happens). The determinacy and specificity of the associations and connections of the overall system is fluid, and reacts to continuing feedback. This can be seen when a neural net app "discovers" which features provide performative disposition (generate positive feedback), and with just a small amount of new input, its network connections reach a "tipping point" where its discriminating abilities spike sharply.
All of this operates OUTSIDE of the binary notion of {determinate|indeterminate}.
If computer code is determinate because we gave it determinate content, then the code of our minds would have to be determinate as well. No. Go read up a bit on the Halting Problem and see what you make of that. Is that a problem, in your view? Why can you not, given all the time and energy you could want, determine if a given (deterministic) non-trivial program terminates or runs forever?
Interesting, and for related reasons you have not grasped here, Greg Chaitin can calculate a "halting probability", an estimate of the Halting indeterminacy. It can't be completely computed, so it itself is only partially determinable. Zing. {indeterminate|determinate} is just a massive category error on these issues. Otherwise, even our percepts would be completely non-objective and indeterminate, since that would be the nature of the "code" in which they are processed. In other words, our "brain code" would have to contain intentionality, which is exactly what your computationalism is trying to explain away.
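The semi-decidable character of halting can itself be sketched. A checker can confirm that a program halts by running it, but it can never reach a definite "does not halt"; only a budget-relative "unknown". The generator-based toy programs below are invented for the example:

```python
def halts_quickly():
    """A toy 'program' that does a little work and stops."""
    for _ in range(5):
        yield

def runs_forever():
    """A toy 'program' that loops without end."""
    while True:
        yield

def bounded_halts(program, budget):
    """Step the program at most `budget` times. Returns True if it
    definitely halted, or None if its status is still undetermined --
    note there is no branch that can ever return a definite False."""
    g = program()
    for _ in range(budget):
        try:
            next(g)
        except StopIteration:
            return True
    return None
```

However large the budget, the checker's possible answers are {True, unknown}, never {True, False}: a deterministic program's halting status is only partially determinable, not a binary fact the checker can always deliver.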
No, strong AI doesn't explain away intentionality, it just obviates any need for an immaterial homunculus, a ghost in the machine that is "really doing the thinking" apart from physical matter and energy. Intentionality obtains -- humans are beings with a stance of intentionality. This is a feature of our evolved physiology. Computationalism just aims to show that intentionality, along with meta-representational thinking (among other things) does not require and cannot use notions of an 'immaterial intellect'. The reaction seems to be, here, that on such a model, 'meaning' is meaningless. But that's only true with a definition derived from a supernaturalist model for 'meaning'. Meaning just obtains naturally on materialism.
Touchstone: Ask Dembski who he thinks designed biological life. Ask Behe. Ask Fuller. Ask Paul Nelson. Ask Philip Johnson.
Well, who they think designed life is irrelevant, unless that's explicitly part of the arguments they offer as being ID.
1. We are ignorant of chemical pathways for abiogenesis on natural, impersonal processes. 2. Therefore, this is not possible in principle (why? because we can't think how it might happen!) etc.
That is indeed a bad argument, but what I meant was, can you cite where Behe or Fuller or Nelson or Johnson actually say that?
If you can't overcome my objection to your first two responses, then the rest of your posts collapse. Rather than draw out this argument to obscene lengths, I'd prefer to keep it short and snappy. If you can wiggle out of my objection -- I don't see how -- then I'll take on your more recent posts. Otherwise, I'm not sure it will be necessary.
@Eduardo "That emergent characteristics you just spoke is also known as FORM to your ...adversaries." No, but that's a good point to make for clarity: people see the word "emergent" and think "emergentism", as in the belief that a composite has features that are fundamentally irreducible to its parts. By "emergence", and properties that supervene on such a composite, I mean that the phenomenon is obscured (to us) by the complexity of the interactions of the components. That is, the "saltiness" of NaCl is deducible from knowledge of the chemical/physical properties of sodium and chlorine, even though neither of these two components of salt is itself "salty".
Emergentists, as non-reductionists, may hold that an emergent feature of consciousness or intellect is *fundamentally* irreducible. That's not what I subscribe to.
Nature: C was not there, and it is here now. That is emergence, to me. Now, your emergence idea and the idea of form both work just fine to comprehend how salt becomes what it is from other stuff. Now, emergence, as far as I can understand it, can be shown to be correct, or at least shown to be the best option.
Now, you might believe that A gets close to B and becomes C... but that is your opinion. I want to know: does emergence have limits, or rules, or something like that? What is the group of definitions that describes emergence completely? How should I use it? What could show it to be wrong? You know, that very lengthy discussion of an idea. Anyway, why do I bother telling you...
Asserting that saltiness emerges from other stuff is just an assertion. I can see that it could, but that doesn't warrant it over anything else, and it doesn't show anything wrong with forms or essences or any of the ideas you cri... you rip at ALL the time.
You are hardly the first one to defend these ideas, and your ideas have been criticized. It would be nice to see you argue that those critiques are irrelevant, or that they fail, or something like that... but you never do that, do you.
Anyway, it's not your problem. I should stop being lazy and think from your position and from the Thomist position, and I hardly need you, even though I had hopes... badly placed, no doubt.
Might also call on Glenn for a re-link to that paper in the other thread regarding computer programming and its relation to Aristotelian logic. If nothing else, it will give Touchstone something else to chuckle at while writing obfuscatory sentences.
To Touchstone: I've been over some of this same ground with Mr. Rank Sophist (see some of these posts where I try to illustrate how intentionality can arise in a mechanical system using the ribosome as an example) and I can pretty much promise you that you are wasting your time.
I am tempted to say that he's too stupid to understand, but I don't think that's accurate, it's more like he's actively working to not understand. Which is fine if you are doing religion, not so good if you claim to be doing philosophy.
At the top of Touchstone's list of books he recommends for reading is Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter. In the preface to the 20th Anniversary Edition of GEB, Hofstadter answers a question that had long been on people's mind: what is the book about? He succinctly states his purpose for having written the book thusly: "In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter."
Though I don't know if Touchstone expressly believes that animate beings can come out of inanimate matter, I do think it would be surprising if he disagreed with the notion that such can happen. At any rate, since Touchstone seems to have some respect for Hofstadter, I thought I'd provide some quotations from Hofstadter's work. There are only two quotations, but they are rather lengthy, and it will take a few comments to post them. Though lengthy, they are easy to read, and just as easy to understand.
My reason for posting these quotations is two-fold: a) by his criticism of Marie George, Touchstone seems to think that 'monkey see monkey do' qualifies as the kind of thinking George was writing about or at least indicating (in the paper linked to by Josh), and I think what comes from Hofstadter's use of his cognitive apparatus provides a better idea/example (than does Touchstone's hallowed 'monkey see monkey do') of what George might have had in mind when she wrote about thinking; and, b) I think Hofstadter shows--without breaking a sweat--the kind of honesty and integrity of thought which rational people can appreciate.
...an article by Mitchell Waldrop in the prestigious journal Science (Waldrop, 1987)... described in flattering terms the analogy-making achievements of SME, the Structure Mapping Engine (Falkenhainer, Forbus & Gentner, 1990), a computer program whose theoretical basis is the "structure-mapping theory" of psychologist Dedre Gentner (Gentner, 1983). After a brief presentation of that theory, Waldrop's article went through an example, showing how SME makes an analogy between heat flow through a metal bar and water flow through a pipe, inferring on its own that heat flow is caused by a temperature differential, much as water flow comes about as a result of a pressure differential. Having gone through this example, Waldrop then wrote:
To date, the Structure Mapping Engine has successfully been applied to more than 40 different examples; these range from an analogy between the solar system and the Rutherford model of the atom to analogies between fables that feature different characters in similar situations. It is also serving as one module in....a model of scientific discovery.
There is an insidious problem in writing about such a computer achievement, however. When someone writes or reads "the program makes an analogy between heat flow through a metal bar and water flow through a pipe", there is a tacit acceptance that the computer is really dealing with the idea of heat flow, the idea of water flow, the concepts of heat, water, metal bar, pipe, and so on. Otherwise, what would it mean to say that it "made an analogy"? Surely, the minimal prerequisite for us to feel comfortable in asserting that a computer made an analogy involving, say, water flow, is that the computer must know what water is--that it is a liquid, that it is wet and colorless, that it is affected by gravity, that when it flows from one place to another it is no longer in the first place, that it sometimes breaks up into little drops, that it assumes the shape of the container it is in, that it is not animate, that objects can be placed in it, that wood floats on it, that it can hold heat, lose heat, gain heat, and so on ad infinitum. If the program does not know things like this, then on what basis is it valid to say "the program made an analogy between water flow and such-and-so (whatever it might be)"?
Needless to say, it turns out that the program in question knows none of these facts. Indeed, it has no concepts, no permanent knowledge about anything at all. For each separate analogy it makes (it is hard to avoid that phrase, even though it is too charitable), it is simply handed a short list of "assertions" such as "Liquid(water)", "Greater(Pressure(beaker), Pressure(vial))", and so on. But behind these assertions lies nothing else. There is no representation anywhere of what it means to be a liquid, or of what "greater than" means, or of what beakers and vials are, etc. In fact, the words in the assertions could all be shuffled in any random order, as long as the permutation kept identical words in corresponding places. Thus, it would make no difference to the program if, instead of being told "Greater(Pressure(beaker), Pressure(vial))", it were told "Beaker(Greater(pressure), Greater(vial))", or any number of other scramblings. Decoding such a jumble into English yields utter nonsense. One would get something like this: "The greater of pressure is beaker than the greater of vial." But the computer doesn't care at all that this makes no sense, because it is not reaching back into a storehouse of knowledge to relate the words in these assertions to anything else. The terms are just empty tokens that have the form of English words.
Despite the image suggested by the words, the computer is not in any sense dealing with the idea of water or water flow or heat or heat flow, or any of the ideas mentioned in the discussion. As a consequence of this lack of conceptual background, the computer is not really making an analogy. At best, it is constructing a correspondence between two sparse and meaningless data structures. To call this "making an analogy between heat flow and water flow" simply because some of the alphanumeric strings inside those data structures have the same spelling as the English words "heat", "water", and so on is an extremely loose and overly charitable way of characterizing what has happened.
Nonetheless, it is incredibly easy to slide into using this type of characterization, especially when a nicely drawn picture of both physical situations is provided for human consumption by the program's creators (see Figure VI-I, page 276), showing a glass beaker and a glass vial filled with water and connected by a little curved pipe, as well as a coffee cup filled with dark steaming coffee into which is plunged a metal rod on the far end of which is perched a dripping ice cube. There is an irresistible tendency to conflate the rich imagery evoked by the drawings with the computer data-structures printed just below them (Figure VI-2, page 277). For us humans, after all, the two representations feel very similar in content, and so one unwittingly falls into saying and writing "The computer made an analogy between this situation and that situation." How else would one say it?
Once this is done by a writer, and of course it is inadvertent rather than deliberate distortion, a host of implications follow in the minds of many if not most readers, such as these: computers--at least some of them--understand water and coffee and so on: computers understand the physical world; computers make analogies; computers reason abstractly; computers make scientific discoveries; computers are insightful cohabiters of the world with us.
This type of illusion is generally known as the "Eliza effect," which could be defined as the susceptibility of people to read far more understanding than is warranted into strings of symbols -- especially words -- strung together by computers. A trivial example of this effect might be someone thinking that an automatic teller machine really was grateful for receiving a deposit slip, simply because it printed out "THANK YOU" on its little screen. Of course, such a misunderstanding is very unlikely, because almost everyone can figure out that a fixed two-word phrase can be canned and made to appear at the proper moment just as mechanically as a grocery-store door can be made to open when someone approaches. We don't confuse what electric eyes do with genuine vision. But when things get only slightly more complicated, people get far more confused--and very rapidly, too.
A particularly clear case of a program in which the problem of representation is bypassed is BACON, a well-known program that has been advertised as an accurate model of scientific discovery (Langley et al 1987). The authors of BACON claim that their system is "capable of representing information at multiple levels of description, which enables it to discover complex laws involving many terms". BACON was able to "discover", among other things, Boyle's law of ideal gases, Kepler's third law of planetary motion, Galileo's law of uniform acceleration, and Ohm's law.
Such claims clearly demand close scrutiny. We will look in particular at the program's "discovery" of Kepler's third law of planetary motion. Upon examination, it seems that the success of the program relies almost entirely on its being given data that have already been represented in near-optimal form, using after-the-fact knowledge available to the programmers.
When BACON performed its derivation of Kepler's third law, the program was given only data about the planets' average distances from the sun and their periods. These are precisely the data required to derive the law. The program is certainly not "starting with essentially the same initial conditions as the human discoverers", as one of the authors of BACON has claimed (Simon 1989, p. 375). The authors' claim that BACON used "original data" certainly does not mean that it used all of the data available to Kepler at the time of his discovery, the vast majority of which were irrelevant, misleading, distracting, or even wrong.
This pre-selection of data may at first seem quite reasonable: after all, what could be more important to an astronomer-mathematician than planetary distances and periods? But here our after-the-fact knowledge is misleading us. Consider for a moment the times in which Kepler lived. It was the turn of the seventeenth century, and Copernicus' De Revolutionibus Orbium Coelestium was still new and far from universally accepted. Further, at that time there was no notion of the forces that produced planetary motion; the sun, in particular, was known to produce light but was not thought to influence the motion of the planets. In that prescientific world, even the notion of using mathematical equations to express regularities in nature was rare. And Kepler believed—in fact, his early fame rested on the discovery of this surprising coincidence—that the planets' distances from the sun were dictated by the fact that the five regular polyhedra could be fit between the five "spheres" of planetary motion around the sun, a fact that constituted seductive but ultimately misleading data.
Within this context, it is hardly surprising that it took Kepler thirteen years to realize that conic sections and not Platonic solids, that algebra and not geometry, that ellipses and not Aristotelian "perfect" circles, that the planets' distances from the sun and not the polyhedra in which they fit, were the relevant factors in unlocking the regularities of planetary motion. In making his discoveries, Kepler had to reject a host of conceptual frameworks that might, for all he knew, have applied to planetary motion, such as religious symbolism, superstition, Christian cosmology, and teleology. In order to discover his laws, he had to make all of these creative leaps. BACON, of course, had to do nothing of the sort. The program was given precisely the set of variables it needed from the outset (even if the values of some of these variables were sometimes less than ideal), and was moreover supplied with precisely the right biases to induce the algebraic form of the laws, it being taken completely for granted that mathematical laws of a type now recognized by physicists as standard were the desired outcome.
It is difficult to believe that Kepler would have taken thirteen years to make his discovery if his working data had consisted entirely of a list where each entry said "Planet X: Mean Distance from Sun Y, Period Z". If he had further been told "Find a polynomial equation relating these entities", then it might have taken him a few hours.
Addressing the question of why Kepler took thirteen years to do what BACON managed within minutes, Langley et al (1987) point to "sleeping time, and time for ordinary daily chores", and other factors such as the time taken in setting up experiments, and the slow hardware of the human nervous system (!). In an interesting juxtaposition to this, researchers in a recent study (Qin & Simon 1990) found that starting with the data that BACON was given, university students could make essentially the same "discoveries" within an hour-long experiment. Somewhat strangely, the authors (including one of the authors of BACON) take this finding to support the plausibility of BACON as an accurate model of scientific discovery. It seems more reasonable to regard it as a demonstration of the vast difference in difficulty between the task faced by BACON and that faced by Kepler, and thus as a reductio ad absurdum of the BACON methodology.
So many varieties of data were available to Kepler, and the available data had so many different ways of being interpreted, that it is difficult not to conclude that in presenting their program with data in such a neat form, the authors of BACON are inadvertently guilty of 20–20 hindsight. BACON, in short, works only in a world of hand-picked, prestructured data, a world completely devoid of the problems faced by Kepler or Galileo or Ohm when they made their original discoveries. Similar comments could be made about STAHL, GLAUBER, and other models of scientific discovery by the authors of BACON. In all of these models, the crucial role played by high-level perception in scientific discovery, through the filtering and organization of environmental stimuli, is ignored.
It is interesting to note that the notion of a "paradigm shift", which is central to much scientific discovery (Kuhn 1970), is often regarded as the process of viewing the world in a radically different way. That is, scientists' frameworks for representing available world knowledge are broken down, and their high-level perceptual abilities are used to organize the available data quite differently, building a novel representation of the data. Such a new representation can be used to draw different and important conclusions in a way that was difficult or impossible with the old representation. In this model of scientific discovery, unlike the model presented in BACON, the process of high-level perception is central.
Oh, I forgot to mention that the quotations are from Hofstadter's Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought.
Quotation I of II is from Preface 4: The Ineradicable Eliza Effect and Its Dangers (commencing on page 155).
Quotation II of II is from Chapter 4, High-level Perception, Representation, and Analogy: A Critique of Artificial-intelligence Methodology (commencing on page 169).
Had to transcribe Quotation I; Quotation II can be found here.
The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.] -- Francis Bacon
Dr. Feser, this may be a bit off topic, but I would be interested to find out your position, if you have one, on the SSPX, after an in-depth review of your philosophical standpoint.
I hate to troll, but since I have nothing useful to add I just want to share my amazement at how much time TS must have to be able to write such long books so often in response to simple questions.
Is just that I have learned through a series of performative models to emulate other people which makes me look pretty damn normal.... But at night !!!!!!!
Which means he is a beautiful swan for anyone that can't remember the story... I think it was a swan at least.
So... Should we start, from now on, writing down the premises of Dr. Feser's posts and then discussing them one by one... It would remove the trolling, I say!
So I know some of you have defended the claim that if something is ambiguous you just won't get meaning out of it... But since I am mostly an ass to Touchstone, I would like it if perhaps Rank or Anon would explain it further, with more details, so my model/system-based head can grasp the demonstration of what you people are saying.
Oh... Touchstone should have asked this... But nooooo, it's too damn hard to do this; it's better to call other people's ideas superstitions!!!!!!
goddinpotty said... I am tempted to say that he's too stupid to understand, but I don't think that's accurate, it's more like he's actively working to not understand. Which is fine if you are doing religion, not so good if you claim to be doing philosophy.
Wow, way to reveal your intellectual dishonesty. If you can't understand what somebody is saying, then he must be stupid... or a liar. The anti-religious bigotry is just icing on the cake. To anyone who ever wondered if goddinpotty was intellectually serious, wonder no more.
Just having a quick bit of lunch at work here so no time now to post more than this, but thanks for taking the time to transcribe those quotes (if I understand your comments above on that). Hofstadter is a gifted writer, in addition to being a gifted thinker.
If you've read Gödel, Escher, Bach, then you surely can anticipate my response: that Hofstadter is quite clear-eyed about the difficulties and challenges that obtain in weak AI, and all the more in strong AI. He has made a bit of a mini-career of being a critic of the many efforts in this area that are simplistic, confused, or over-hyped.
The SME is a notorious example. I'm sure the coding is quite sophisticated and all that, but Hofstadter's point is more forceful than he even puts it: Strong AI is not a feature of a computer, or any array of computers, any more than intelligence is a feature of a brain. Intelligence as we understand it, is a function of a being, a body, not just an organ, an integrated system that interacts with its environment in rich and dynamic ways.
In short, there is no substitute for *experiencing* the real world, and having experienced the real world for a long time, as a predicate for thinking in the real world in ways we would agree (Thomist dogma notwithstanding) are "intelligent" in a similar fashion to humans.
That changes the design spec for AI significantly. Now you have to build an "artificial being", and incorporate something like a nervous system and a hardware/wetware layer for navigating and sensing and interacting in physical ways with the world, not just via a fiber optic data interface. It makes budgets go way up and timelines slow way down. But, as Hofstadter rightly points out, and as I've seen first hand in some of my own past projects, without that, you can't get where you'd like to go.
This is how research programs and innovative projects succeed: by such reckoning with the problem. But to read this here, in the context given, seems to miss the larger message of Hofstadter: we must incorporate the "whole stack", as they say in programming, to be able to reproduce the phenomenon of thinking, but it is in doing this, and doing this in a robust way, that we can see strong AI succeeding, and succeeding in profound ways. As any software developer will tell you, understanding the problem domain is a key indicator for the eventual success or failure of the project. "Thinking" and "intelligence" and "consciousness" are difficult and complex natural phenomena to model -- we don't have enough science yet to make a good go of strong AI.
Hofstadter points out the problems, and the path to strong AI. But this criticism and guidance is given BECAUSE there remain no fundamental problems known, no reason to think that a silicon machine with an architecture similar to us carbon machines can't do what we do, and by extension that humans as carbon machines are highly plausible as emergent from natural processes, just because we can build similar (or surpassing) silicon machines. Then the argument reduces to "well, it couldn't happen by evolution/abiogenesis" from the creationists, and "I don't see why not, and no other mechanism is available to explain it" in reply.
That's progress, with help from Hofstadter, who, as you point out, is convinced that life arose from non-life naturally, and intelligence is an artifact of that process. If we can build machines that think, and think in the strongest, most robust sense, then the objection "machines can't think" will become just a matter of denialism.
Superstition is a belief in supernatural causality: that one event causes another without any physical process linking the two events (such as astrology, omens, witchcraft, etc.), in contradiction to natural science.[1]
That's right on the money regarding, say, the belief in "immaterial intellect", and its supposed efficacy in thinking.
Actually, it all depends, as you have put it before, on defining things the way you need. Define "think" in a way you like and boom, THINK is something you can explain away... It's not denialism, it's just people who don't want to climb aboard your boat.
Why not simply say that you have no idea what you're seeing, that all you've got is models which could be, for all we know, delusions... Oh wait, people already said that to you, didn't they?
Do you plan to explain how your views escape self-refutation along the lines I indicated above? That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate? There seems to be no move that gets you out of this. Note that, if you accept it, you're committed to total irrationalism without objectivity, science or any of your other favorite things. If you can't escape these issues, then I'm afraid that your attacks on Feser have all failed.
You'll have to pull out the lines you want to stick to for the case for "self-refuting". As it is, you've got a lot to offer as a debate partner, but your "self-refuting" charge, every time I've seen it, doesn't even rise to being dismissed as "lazy". It's just a troll's itch you are scratching, as best I can tell, and there's no argument or content to engage. That's annoying, but it's just a bit of noise in the channel. Pushing that to some kind of triumphalism about having made this case... well, that's just a bit of playing with yourself in public, intellectually.
For example, I said, upthread:
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine -- that in no way precludes it from being determinate content.
To which you responded: Thank you for admitting it. This means that your system is self-refuting. You gotta be kidding me. That's not even comment worthy, not even handwaving worth pausing for.
If you'd actually like to state a case for your understanding of the argument I'm making, and why that argument is self-refuting -- something I can take a bit more seriously than a naked non sequitur like the above, some substance to interact with -- I'm game.
Here's another example from you: I know it isn't a vicious regress. Derrida wasn't that terrible of a philosopher. It's still a self-refuting position, though. Never mind that I'm not Jacques Derrida, nor one who subscribes to his arguments (you've confused my familiarity with and understanding of some of his critiques -- that "male|female" may be problematic, and abandoned for a "gender space" that admits of degrees of androgyny or other points in that space, for example -- with subscription to his positions); "It's still a self-refuting position, though" is just fine as an idle assertion, something I just roll my eyes at and move on from. But apparently, you think this throws down the gauntlet somehow.
If so, please, get serious.
If you have a case to make, make it. Don't think you can pull a troll's trick and say "self-refuting, now dance! prove it's not!" That may work on other posters. I'm not gonna take that bait.
I'll chalk this up to a simple misunderstanding and not be offended that you'd think I'm such a chump as that, and wait for something to engage, if you've got it on this topic. As it is, "It's not" is as far as I need to go in reply, and that is plenty, given your case.
I noticed this with the "nominalism is incoherent" thing. Oh yeah, by the way, that fails because everyone knows nominalism is incoherent....
That's just a barrier to taking your posts seriously. You have lots to offer to engage with, but you seem to gravitate to the parts you offer that are content-free and conceited. I don't doubt you have the conviction that "nominalism is incoherent", or that some unspecified version of one of Derrida's arguments is self-refuting, but you're carrying on like you suppose these are givens across the board. I know there's a fanboy factor here that encourages that, but it's untoward, awkward.
Make your case, man. It's a hell of a lot more interesting to engage with, and for others to read, than this kind of stuff.
Here's my attempt to anticipate a case from you, foreshadowed by this:
Do you plan to explain how your views escape self-refutation along the lines I indicated above? That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate? There seems to be no move that gets you out of this. Note that, if you accept it, you're committed to total irrationalism without objectivity, science or any of your other favorite things. If you can't escape these issues, then I'm afraid that your attacks on Feser have all failed. First off, even if you, or I, think that I have issues in my own views that are problematic, and inescapably so, that does NOT mean that my attacks on Dr. Feser's ideas have failed. That's a non sequitur. I recently watched a young earth creationist, and an extreme one (she uses the Ussher date for the beginning of the world, for example), take apart a new age/pagan type on the issue of miracles on another forum. That's a topic I think a YEC has serious trouble with by virtue of those beliefs, but I could (and did) say that her critiques were a heavy left cross followed by a crushing right hook to the claims made by the subject of her criticism.
I know this is a common apologetics move, but it's a fallacy. The arguments stand on their own merits. The YEC's critiques are not dismissable because she has beliefs on the subject (or other subjects) which she cannot adequately defend. If you are familiar with the lovely human species called the Calvinist, and the Calvinist who endorses "presuppositionalist apologetics", you'll be aware of this problem. Can't explain the origin of the universe? Aha! Therefore Calvinism! All your attacks on [Calvinist version of Dr. Feser] fail!
Second, "indeterminate" does not mean "unknowable". It means "not certain, vague" (see a dictionary to confirm). We can and do have knowledge that obtains as a function of doubt and uncertainty. In fact, beyond the certainty we have of an "I" on the basis of the cogito, all of our knowledge is laced with some degree of uncertainty and doubt -- knowledge of the real world is necessarily uncertain for the very reason you identify: it comes filtered through our senses, through a layer we cannot get out of (Thomist notions of immaterial intellect and revelation, etc. notwithstanding). But indeterminacy does not refute or dismiss anything on its own. Fuzzy logic models often outperform the brittle/polar models they replace precisely because they incorporate indeterminacy and uncertainty in their heuristics.
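To make that concrete, here's a toy sketch of my own (a hypothetical example with made-up thresholds, not any published system) showing how a graded, fuzzy predicate stays useful even though it is indeterminate at its boundaries:

```python
def tall_crisp(height_cm):
    # Brittle/polar model: a hard cutoff discards all boundary information.
    return height_cm >= 180

def tall_fuzzy(height_cm):
    # Graded membership in [0, 1]: indeterminate at the edges, yet usable.
    # Thresholds (160, 190) are arbitrary illustrative choices.
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# A 179 cm person is flatly "not tall" on the crisp model; the fuzzy model
# preserves the information that they are *nearly* tall.
print(tall_crisp(179))            # False
print(round(tall_fuzzy(179), 2))  # 0.63
```

The point: a predicate that admits degrees isn't thereby useless or meaningless; it carries more information at the margins than the all-or-nothing version it replaces.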
I'll stop there to make sure we're not wasting our time talking past each other on "indeterminacy", and ask:
If I put 1000 marbles into a pachinko machine, where will the first marble I drop end up? Is that determinate in your view, or not? How about the second marble? Do you know where it will end up?
If that is an indeterminate outcome (and I'll let you decide if it is or not), once I've let all thousand marbles go into the machine, do I know the shape of the distribution of those marbles in the slots at the bottom? Can I make reliable guesses as to the approximate shape I might expect? Is that determinate or indeterminate?
Your answers to that will go a long way in providing clarity for any debate we might have on this. From this, I will better understand the argument you are making with those terms.
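For the curious, the point can be simulated (a toy sketch of my own, with made-up parameters): each marble's path is a series of random left/right bounces, unpredictable individually, yet the aggregate distribution has a reliable bell shape:

```python
import random

def pachinko(rows=10, balls=1000, seed=42):
    """Drop `balls` marbles through `rows` of pins; at each pin a marble
    bounces left or right at random. The final slot index is the number
    of rightward bounces, so slot counts follow a binomial distribution."""
    rng = random.Random(seed)
    slots = [0] * (rows + 1)
    for _ in range(balls):
        slots[sum(rng.randint(0, 1) for _ in range(rows))] += 1
    return slots

counts = pachinko()
# No single marble's slot is predictable in advance, but the aggregate
# shape is: a bell curve peaked near the middle slots.
middle = sum(counts[3:8])  # slots 3..7 carry ~89% of binomial(10, 0.5) mass
print(counts)
print(sum(counts), middle)
```

Individually each drop is indeterminate; collectively the shape of the distribution is highly determinate, which is exactly the distinction I want your definitions to handle.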
Not that argument are necessarily any better hahahahha
But Rank's argument might be something like... You have a group: A, B, C... Z. Now they are all ambiguous; in other words, the measurement that we use to measure understanding can't be pinpointed... it can't be measured. That is supposed to be ambiguous.
It seems that touchstone is rather saying that an ambiguous characteristic is something like: A has "x" to "y" on our measurement. So it is ambiguous, but it is defined somewhat. Now this could be the wrong interpretation of his view, but his claim that you can know X because you can talk about it with Y and Z seems rather awkward. It seems to work only if you assume that you know all the rest; otherwise you will never know anything... X is defined by 2Y and 3Z, Y is defined by X and 4Z, and so on. Now you could go on to "solve" the system, BUT X, Y and Z are still unknown. Well, maybe touchstone's "percept" is just an unknown thing...
It does seem that Rank is correct, that you can't find any meaning in this system.
Touchstone, I don't think your case is similar to that YEC's. Rank didn't claim that you were wrong because you had such and such a belief. He said your model, or alternative, doesn't work, and therefore can't hurt Feser.
You are confused, Touch; you are more confused than me. Go back and read it again; draw it out if you have to!
Eduardo has it pretty much right. X, Y and Z will remain unknown no matter how much you define them in terms of X2, Y5 and so forth. Something has to be unambiguous--determinate--for it to disambiguate. Either you must eventually reach something with determinate content (understood as non-ambiguous, not as "deterministic"), or nothing is ever determinate. Derrida accepts the conclusion that there is nothing determinate, and you, by saying that brain code is indeterminate, must necessarily bite that same bullet.
Further, your attacks against Feser fail because they were based on a certain model of thought. However, this model of thought (reduction to indeterminate signs and associations), as I have just shown, is self-refuting. Certainly, Feser could be wrong--but it sure isn't because your objections hold.
You gotta be kidding me. That's not even comment worthy, not even handwaving worth pausing for.
If you'd actually like to state a case for your understanding of the argument I'm making, and why that argument is self-refuting -- something I can take a bit more seriously than a naked non sequitur like the above, some substance to interact with -- I'm game.
I elaborated on the claim right below that. Your system must necessarily be self-refuting if it contains nothing determinate, because then its own truth is indeterminate. This is the same problem that wrecks Derrida's argument: it winds up as "the certainty that nothing is certain". It's patently self-contradictory.
Because you cannot shift the burden of determinacy to the "outside"--there is no "outside" if we can only know the "outside" via reduction to indeterminacy--, you cannot ever reach what Derrida calls the "Transcendental Signified": that which is wholly determinate and unambiguous. As a result, it must be the case that the very bones of your claims are indeterminate, and hence the system undermines itself.
Feser: In particular, there is nothing in the picture in question [triangle] or in any other picture that entails any determinate, unambiguous content.
You: Second, "indeterminate" does not mean "unknowable". It means "not certain, vague"
Given that particulars are indeterminate, whence come determinate, unambiguous concepts? It's a simple question, and statistical overlapping can never get you there. It's asymptotic. It's a difference in kind. It's a simple logical question that has been asked for 2000 years.
Rank puts it another way:
That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate?
In fact, beyond the certainty we have of an "I" on the basis of the cogito,
Sadly, Derrida's system--and, subsequently, yours--undermines even that certainty. Even the idea of "doubt" becomes indeterminate.
But indeterminacy does not refute or dismiss anything on its own. Fuzzy logic models often outperform the brittle/polar models they replace precisely because they incorporate indeterminacy and uncertainty in their heuristics.
But the fuzzy logic models themselves would have to be determinate, rather than indeterminate. If the models themselves were indeterminate, then they could not measure anything. Under your system, the models, too, must crumble.
Determinate, to Touchstone, seems to be a determinate number in a string of real numbers. That is why his fuzzy logic seems ambiguous to him hahahahhaha.
Touchstone: Superstition is a belief in supernatural causality: that one event causes another without any physical process linking the two events (such as astrology, omens, witchcraft, etc.), in contradiction to natural science.[1]
That's right on the money regarding, say, the belief in "immaterial intellect", and its supposed efficacy in thinking.
Even for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against.
Out-of-touchstone said... @Eduardo That emergent characteristics you just spoke is also known as FORM to your ...adversaries. No, but that's a good point to make for clarity: blahblahblah
Uh, so in other words, Eduardo was exactly right: you have no clue what "form" is. Sheesh. Guys, can we make a new rule? No carrying on discussions of Thomism with anyone who can't figure out forms.
Anonymous said... I hate to troll, but since I have nothing useful to add I just want to share my amazement at how much time TS must have to be able to write such long books so often in response to simple questions.
But it doesn't really take much time when you put zero effort into understanding the thing you're blindly attacking, you see.
Because you cannot shift the burden of determinacy to the "outside"--there is no "outside" if we can only know the "outside" via reduction to indeterminacy--, you cannot ever reach what Derrida calls the "Transcendental Signified": that which is wholly determinate and unambiguous. As a result, it must be the case that the very bones of your claims are indeterminate, and hence the system undermines itself. This is no different from noting that no perfect triangles occur in nature. No triangle (or circle, or square, or...) can be reified according to a Euclidean ideal.
But that doesn't preclude our understanding of, use of, and location of triangles in nature. How can this be? Because close enough is close enough. Every triangle is imperfect -- fuzzy, jagged as a matter of physics compared to an ideal triangle -- but that doesn't preclude us from using and manipulating triangles, conceptually, or as features of physical objects.
Your claim can only hold its ground if you assume that a given concept-in-context is either wholly and perfectly unambiguous, "universally perspicuous", or else completely opaque, intractable, utterly impenetrable. Either a perfect Platonic Triangle, or no triangles anywhere at all.
As soon as you allow for degrees of ambiguity and degrees of uncertainty, the system runs fine, just as nature does without a single perfect triangle.
But, again, we're getting ahead of ourselves, I think, as the more you use the word 'indeterminate', the more conflicted the usage becomes. If you want to use that word as your fulcrum, that's fine, but you'll have to provide the measure you are using for that term. The definition you regard as controlling in your claim should be provided. As it is, it's indeterminate, not clear or certain enough to apply -- my best guess is that you are using it to mean "not certain, not definitely or precisely known".
Tell me what your operative definition is for "determinate" and "indeterminate", and perhaps we can make further progress. Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?
By the way, your conclusion of no forms or perfect forms doesn't seem to have any premise leading to it.
Your argument can be turned against you: we have no need for your system, because we have these things you don't really grasp... Worthless argument, this argument from necessity.
Even for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against. The intellect operates *outside* of natural physics, no? If not, are you proposing that the immaterial intellect is amenable to natural models? If it does operate outside, then you have action, and more flagrantly personal action/will, obtaining outside of the natural order.
Someone who is superstitious about "Friday the 13th" doesn't need to posit a deity or a demon as the predicate for their fears about bad fortune on that day. The cause may be *exactly* immaterial in the way the "immaterial intellect" is held to be immaterial on A-T. The salient feature of that superstition is that it identifies interaction and causality outside of natural processes, transcending physics.
As for contradicting science: science doesn't provide a natural model for cognition or human language processing and then allow for "a bunch of immaterial stuff in here to round out the intellect". That scientific models do not rule out "immaterial intellect" explicitly does not mean there's no contradiction. Science doesn't rule out immaterial unicorns as the cause of gravity, either, but it would be ridiculous to maintain that "there's no contradiction between Immaterial Gravitational Unicorns™ and our model of gravity". That the "immaterial intellect" isn't even a coherent concept for science -- it's a divide by zero -- is enough. Science leaves such things out because they aren't needed, and they are alien to the model.
This is precisely what the label 'superstitious' looks to identify: belief in activity and interaction that have no basis in science, no basis in our knowledge of nature. The "Friday the 13th" superstition doesn't "contradict science" if "immaterial intellect" doesn't. That immaterial bad luck just obtains *in addition* to science, yeah?
If the belief in "immaterial intellect" is simpatico with science, then so is the Broken Mirror superstition, so is the Black Cat superstition, so is the Garlic as Vampire Repellent superstition, and on and on. None of these "contradict science" in the equivocal way you invoked above. Science doesn't discredit Black Cat superstitions or bother to contradict them, any more than Immaterial Intellect superstitions. They are just superfluous: beliefs that are useless and extraneous to scientific models.
Even for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against.
Should have added...
If you believe "immaterial intellect" is part of the natural world, and in such a way that our scientific models can (or should) incorporate that dynamic, then I agree I was confused about the way you (and others) construed the term, and will retract the claim that the belief is superstitious. I'd be quite surprised to learn this, but stand to be corrected.
I've never encountered "immaterial intelligence" as even a contemplated or putative component of a natural model. It is always placed outside of nature, beyond the reach of science and natural knowledge (hence the "immaterial").
Either a perfect Platonic Triangle, or no triangles anywhere, at all.
Why is it that a sizable number of the opponents on this blog keep making this mistake? There's a middle ground called moderate realism, where the concept of triangularity exists in the mind, not in a Platonic realm.
Every triangle is imperfect -- fuzzy, jagged as a matter of physics compared to an ideal triangle -- but that doesn't preclude us from using and manipulating triangles, conceptually, or as features of physical objects.
Repeating the question (yet again): why doesn't it preclude our use of a determinate concept, given that all the material we have to work with is indeterminate?
This is no different from noting that no perfect triangles occur in nature. No triangle (or circle, or square, or...) can be reified according to a Euclidean ideal.
That isn't what it means. Not in the slightest. Under your system, brain code reduces the determinate to the indeterminate. Because we use our brains to think, this makes it impossible to know anything determinate at all. Whatever we call "determinate" is always already invaded by total indeterminacy, because it has always already been reduced to brain code that has no determinate content. Everything, including all talk of photons and so forth, becomes totally ambiguous and relativized. Everything is illusory. At this stage, the methods that you used to reach this conclusion are undermined, and your argument goes down the toilet.
But, again, we're getting ahead of ourselves, I think, as the more you use the word 'indeterminate', the more conflicted the usage becomes.
Determinate: Having exact and discernible limits or form.
Indeterminate: Not certain, known, or established.
Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?
TS: We don't need intentionality (and, subsequently, forms), because we can form objective, pre-conscious percepts. Just look at these computers!
Me: Computers have intentionality infused by us. Each symbol of code has semantic, determinate content. If, like computers, our brains reduce input to code, then the code would have to possess intentionality as well.
TS: Well, you're right that it reduces input to code, but we still don't need intentionality!
Me: Then our "brain code" would be utterly ambiguous.
TS: We can just shift the determinate content (intentionality) to the outside!
Me: If our brain code works by reduction, then any idea of the "outside" is only another instance of indeterminate code, and it therefore contains no meaning whatsoever.
That isn't what it means. Not in the slightest. Under your system, brain code reduces the determinate to the indeterminate. Because we use our brains to think, this makes it impossible to know anything determinate at all. Whatever we call "determinate" is always already invaded by total indeterminacy, because it has always already been reduced to brain code that has no determinate content. Everything, including all talk of photons and so forth, becomes totally ambiguous and relativized. Everything is illusory. At this stage, the methods that you used to reach this conclusion are undermined, and your argument goes down the toilet.
They are just not certain. 90% is not 100%, but it's not 0%. Right? Look, you have two systems -- an extra-mental system (the world beyond our senses) and a model (a conceptual "map" of the 'territory' that is the extra-mental world). To the extent you can build a model that makes novel predictions and accounts for the empirical evidence (input from the behavior of the 'territory'), your map performs as a map, an isomorphism to the territory. It's never complete (we don't even know what that would mean, or how it could be established), nor perfectly unambiguous (same problem), but we can judge the relative strength of those isomorphisms; some maps track more closely to the territory than others, based on the input we have available from the territory.
But none of that is certain in any final sense, nor complete (that's an undefined concept), nor perfectly unambiguous (it's always susceptible to some form of underdetermination among competing hypotheses). There are no reference frames to even *calibrate* those terms by -- and there cannot be, because any "reference frame" would itself have to show its own basis for calibration, and... boom, vicious regress.
No matter; that's a fool's errand. Ambiguity, uncertainty and indeterminacy come in degrees, and we can (and do!) build systems that are semantically rich, highly specific, and effective for purposes of communication, model building, knowledge development, etc. Your key mistake is captured in "totally ambiguous". Somewhat ambiguous is not totally ambiguous, and there's no basis for thinking this is a binary phase space -- total determinacy or total ambiguity, exhausting all the options.
The methods behind my conclusions are not 100% certain, and cannot be. But neither are they 100% ambiguous or opaque. They are meaningful, and yet carry some measure of ambiguity, as is intrinsic to human language. They are reliable -- when you get on an airplane, you are placing your well-being in the hands of this epistemology -- but they are not and cannot be 100% certain.
Determinate: Having exact and discernible limits or form. That's not operative. How do you determine whether a limit is exact? Providing an example would be very helpful, because I have no indication that you have a working definition that can be applied for your argument, based on what you have said so far.
Indeterminate: Not certain, known, or established. That's an indeterminate definition. How do you establish certainty, so that we might see whether we have it or not? Please provide an applied example.
"Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?"
I subscribe to the system defended by Prof. Feser.
You have got to be kidding me. I am embarrassed for you, responding like that. You must be pulling my leg -- a wry play on "indeterminate", I hope. If you are serious... fail!
Why not engage with terms and concepts we can apply and test here? This could be an interesting exchange, but you're just blowing smoke here. Hiding behind Dr. Feser... tsk. Make your own case -- copy and paste and borrow all you like, but present it as yours.
There was a fair amount of internal critique of these kinds of problems within Artificial Intelligence during the 80s and 90s. See, for example, Lucy Suchman's Plans and Situated Actions, or Phil Agre's Computation and Experience. A summary: yes, a lot of computational models of the mind suffered from wishful thinking (or what would be called here inherited rather than original intentionality) and some rather crude ideas of what the mind does.
This does not prove anything about the ultimate possibility or impossibility of computational or mechanical models of thinking, however. It just shows that the early efforts were too simplistic and the problems are much harder than was thought, and require some real sophistication to solve. The two named writers employed critical practices from sociology and philosophy to try to reform AI, not to prove that it was impossible.
Hofstadter, who did a lot of his work around the same time, also was critical in his own way of standard AI, but from a different perspective.
They are just not certain. 90% is not 100%, but it's not 0%. Right? Look, you have two systems -- an extra-mental system (the world beyond our senses) and a model (a conceptual "map" of the 'territory' that is the extra-mental world).
You clearly are not as familiar with Derrida as you claimed. Touchstone, there is no extra-mental system. When you apply Derrida's associative signs, the model is all that is left. That's just how it works. You can't even coherently grasp the idea of an "extra-mental system" anymore. Whether you realize it or not, this argument is already over.
If you've read Gödel, Escher, Bach then you surely can anticipate my response that Hofstadter is quite clear-eyed about the difficulties and challenges that obtain in weak AI, and all the more in strong AI.
Regarding the particular point, why drag GEB into it? Surely knowing that you had or would read the quotations provided above ought to be sufficient for one to display the anticipatory prowess of which you speak.
...we don't have enough science yet to make a good go of strong AI.
Do not despair—there's always the 'in principle' possibility that someday we will.
...there remain no fundamental problems known, no reason to think that a silicon machine with an architecture similar to us carbon machines can't do what we do...
I agree that insufficient science should never be seen as a fundamental problem--especially when seeking to accomplish a stated scientific goal.
(But then perhaps you meant to say that the problem isn't one of fundamentals, thus meaning to imply that we already have all the non-physical, nonmaterial 'proof' that is required, and that it's just a matter of time ere these non-physical, nonmaterial somethings can be successfully instantiated in physical, material things.)
...and by extension that humans as carbon machines are highly plausible as emergent from natural processes, just because we can build similar (or surpassing) silicon machines.
Since I'm not a biochauvinist, I can say with a straight face that humans as leverers, weight-bearers and locomoters are highly plausible as emergent from factories, simply because we can manufacture...
If we can build machines that think, and think in the strongest, most robust sense, then the objection "machines can't think" will become just a matter of denialism.
And if we can help you nudge your net worth up beyond that of Bill Gates, then the objection that you aren't worth more than Bill Gates could only be made from ignorance.
(1) A system of signs obtains its meaning from outside of itself. (2) Our brains are based on a system of signs. (3) Our brains reduce all outside input, even pre-conscious, to this system. (4) Mental processes are totally within the system. (5) The concept "outside of the system" is totally within the system. (6) The referent for the concept "outside of the system" cannot be known except by reduction to the system. (7) Therefore, the referent for the concept "outside of the system" is always already part of the system. (8) But an outside of the system is required to give the system meaning. (9) Therefore, the system's meaning is unknowable or non-existent.
Needless to say that all concepts related to "photons" and other scientific things have already been reduced to the system of signs. Under your brand of computationalism, you're stuck with Derrida's universal relativism, in which science teaches us absolutely nothing. However, since this undercuts all of the premises that led you to say that your brain was based on a system of signs, you can rest easy knowing that you've refuted yourself, and that science can live another day.
(8) But an outside of the system is required to give the system meaning.
No, and that can't be true, by its own measure. Anything you suppose is 'outside the system' is *inside* the system by virtue of being the grounds for some meaning. This is transcendentally true; it's presupposed by the concept of meaning. If you have to go 'outside' to ground meaning, you have necessarily brought whatever-it-is 'inside'. If it's not inside, and it's not part of the system, it cannot be the basis for meaning; it's not referenceable in the system as the basis for carrying semantic cargo.
This is not bound by physicalism or naturalism or any mode of existence. If you postulate a Transcendental Cosmic Immaterial Meaning Machine as the absolute manufacturing point or authority of "absolute meaning", that is not "outside" the system of meaning; it is *inside* the system, and not just inside, it's *central* to it. Any Thomistic basis for meaning is necessarily 'inside' the system of meaning -- that's why it would be described as a 'basis for meaning'.
So your (8) isn't just dubious as a premiss, it's transcendentally false. It cannot possibly be sound. Being 'outside' means it's not connected or related to our system of meaning.
The error at (8) is sufficient to dismiss your conclusion, but it's worth pointing out that (9) doesn't follow from (8), even if (8) were somehow sound. When we create meaning, and rely on meaning, we are not talking about the "meaning of the entire system"; we are making semantic distinctions *within* the system. The 'meaning of the system as a whole' is undefined, because there are no referents external to it with which to associate relationships. But if I wonder what the meaning of "apple" is in conversation, that is a concept that identifies the relationship between subjects and objects *inside* the system -- "apple" is not "horse", for example, as a rudimentary distinction between referents. "Apple" can and does have meaning by distinguishing what it does *not* refer to, inside the system. Concepts inside the system are "system-internal". It's not illusory or meaningless -- you use this system to good effect just in participating in this thread. Where a sender can impart information -- this and not that -- you have meaning, demonstrated.
Unless you are confusing "the system's meaning" -- the meaning of the system itself -- with the "instances of meaning within the system", there's no ergo, no basis for your "therefore" in (9).
I appreciate your putting this in succinct, more concrete terms. That is helpful and productive, thank you. It reveals the nature of the problem in what you've been claiming.
In The Last Superstition you talked about the problem atheists have explaining "forms" of secondary qualities like color, and gave the example of two people looking at a red object: one saw green and one saw red. That would be hard to reconcile with their explanation of neurons firing in response to various frequencies. But couldn't they respond that the neurons in the color blind individual were genetically defective?
P.S. Have some of your students spend a little time on the Philosophy Forum at Catholic Answers. It's a mess; they need some real philosophers over there, and they are being overwhelmed by atheists and kooks.
... Shiiiit, touchstone didn't get it again. Those eight steps are not necessarily premisses. They are a chain of thinking, the number eight is a conclusion of some stuff before.... Are you seriously saying you didn't notice it was a chain of thinking of some sort ?
RS, are you a philosophy professor? A grad student in philosophy? It's been over a year since I've visited this blog and back then you were nowhere to be seen.
Here's an example a professor in Comp Sci used with us, long ago now, on the concept of semantics and "inside/outside".
Consider a computer game (maybe now we'd just call it a screensaver...) where circular objects float around a 2D space (the screen) and interact like virtual magnets, attracting and repelling each other based on polarity and proximity. Inside this system we can ask -- and in fact must ask, in order to resolve the physics and have the objects move toward or away from each other -- "What is the DISTANCE between Object A and Object B?"
This is an internal relationship set. "Distance" is meaningful, and is calculable within the system. A "distance" obtains between any two objects on the screen. It works, you can write code against it, the semantics are clear enough to do math against it.
So if we can identify the semantic grounding for "distance" in this system, what, then, is the DISTANCE of the system itself, the DISTANCE of the game? No sooner does the professor finish asking the question than the students complain that the question itself is confused. And they are right; the professor's point was to show the implicit context of our notions of semantics, the level of description that must obtain for semantics to work, to be effective in carrying semantic weight. Someone had made the point that in our computer algorithms we could focus on breaking the problem down computationally and develop machinery that worked on "distance" in some context-free sense.
Asking what the "distance of the system" is illustrates the transcendentals of that system. For "distance" to have meaning, it can ONLY obtain inside the system, because meaning itself is predicated on relationships between nodes in the system.
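The professor's toy system is easy to sketch in code (a minimal Python sketch, purely illustrative; the `Disc`, `distance`, and `force` names are my own, not from the original lecture):

```python
import math

class Disc:
    """A circular object floating in the game's 2D space."""
    def __init__(self, x, y, polarity):
        self.x, self.y = x, y
        self.polarity = polarity  # +1 or -1, like a virtual magnet pole

def distance(a, b):
    # "Distance" is well-defined *inside* the system: a relation
    # between two objects on the screen.
    return math.hypot(a.x - b.x, a.y - b.y)

def force(a, b):
    # Attract when polarities differ, repel when they match;
    # magnitude falls off with proximity (toy inverse-square law).
    d = distance(a, b)
    sign = -1.0 if a.polarity != b.polarity else 1.0  # negative = attraction
    return sign / (d * d)

a = Disc(0.0, 0.0, +1)
b = Disc(3.0, 4.0, -1)
print(distance(a, b))  # 5.0 -- meaningful, calculable, code-able
# But "distance(the_game)" is not even expressible here: the relation
# only obtains between nodes inside the system.
```

The point survives the translation to code: `distance` takes two objects in the space as arguments, so asking for the distance "of the system" is not merely unanswered, it's not a well-formed question in the system's own terms.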
A physics book I read some years later (I can't recall the title anymore) made the same point. How big is a proton? Well, we can provide a meaningful answer only by way of comparison to other things in the system. And that works. But if you ask "how big is the universe?", there is nothing (that we know of) to compare it to. And even if we did have something to compare it to, those things would all be part of our universe, resetting the question.
Same principle, and it's not a novel or esoteric one. I just wanted to take a moment to invoke it in this context, because the same principle applies here: meaning obtains as a set of relationships between entities within a system.
Touchstone, your point is irrelevant to the argument. You see, it's just like I said before: the basic stuff in your system is all meaningless... It's like you ask, HOW BIG IS A PROTON? And the question itself has no meaning whatsoever, so how are you even gonna try to answer it... What Rank is talking about goes all the way there. The meaning/determinacy talk was about the objects, not just what humans or brains feel about the environment....
Your system still has no determinate stuff and it never will... Well, you can pretend that it does, I think.
... Shiiiit, touchstone didn't get it again. Those eight steps are not necessarily premisses. They are a chain of thinking, the number eight is a conclusion of some stuff before.... Are you seriously saying you didn't notice it was a chain of thinking of some sort ?
Oh, I understood it to be a 'chain of thinking'. A premise is a 'link in the chain' for a syllogism, for a rigorous chain of thinking. If you read (8), it's not a conclusion -- it starts with "But", not "therefore" or "then" or "because of this..." It's a proposition offered as true (or perhaps we should say 'sound') as the predicate for the conclusion to follow. As it happens, (9) doesn't follow from (8) even if (8) is true, but that's not the point I'm making to you -- (9) has the form of a conclusion or a production, whereas (8) has the form of a premiss.
>> (8) But an outside of the system is required to give the system meaning.
> No, and that can't be true, by its own measure. Anything you suppose is 'outside the system' is *inside* the system by virtue of being the grounds for some meaning...
You read and considered--starting at (1) all the way up to, including, and beyond (8)--then circled back to 'refute' (8)? Realize ye not that ye could simply have taken aim and lobbed your attempted refutation at (1) ("A system of signs obtains its meaning from outside of itself")?
While travelling from Los Angeles to San Diego via Chicago can be more entertaining and fun, some people might think it is also a tad less efficient, as well as somewhat more time consuming.
If your implicit assertion is correct--that a system of signs does not obtain its meaning from outside of itself--then it follows that there cannot be any such thing as an idiom, i.e., an expression whose meaning cannot be derived from the individual meanings of its constituent elements.
Number 1 seems to be the premiss that you were fighting against... But it seems to me it is correct that a set of symbols and relations by themselves has no meaning whatsoever without something to give them meaning.
Number 2 seems to be your conclusion, and it is Rank's premiss... I think you made it very clear that all we have is just signals.
Number 3 is also your point of view, and it seems that your system is all about number 3.
Number 4 is also your point of view... You don't like or agree with any sort of dualism, so you're stuck with this one.
Number 5 seems to be common sense... Any concept you have is INSIDE the system.
Number 6 also follows from your view too.
Number 7 refers to when you said meaning is on the outside... You said it... But you're gonna have to accept that certain things are outside the system, which you can't... You are stuck with seven.
Number 8, the new problem... It is just stating your idea that meaning is outside, but since you say it is inside, then meaning is in the signs... So the premiss you don't want is NUMBER 1, not this one....
RS, are you a philosophy professor? A grad student in philosophy? It's been over a year since I've visited this blog and back then you were nowhere to be seen. On the chance that by "RS" you actually meant "TS" ('r' is right next to 't' on my keyboard), I'm absolutely, perfectly uncredentialed in philosophy. I work in software and technology development, and for a good part of my career in projects that supported scientific research. In debates elsewhere, people I've been talking with for a long time occasionally cite Dr. Feser in support of their (usually more peculiar) ideas, so those references pointed me over here at some point. Last year (or was it the year before?) I spent some time engaging Randian Objectivists on a couple of blogs, as an offshoot of other discussions. That was interesting for a bit, and some good discussion was had -- very similar to Thomists, from my point of view, in terms of fetishistic impulses on metaphysics.
There are some strong, articulate thinkers here, and unlike the Objectivists' blogs, where the blog owners lead the way, some of the combox posters exceed the level of care and thoughtfulness of Dr. Feser's posts, and are certainly more conversant with competing ideas and frameworks.
Which is just to say I'm a complete nobody in the hierarchy of the philosophy profession, just a tech nerd that sees an opportunity for interesting criticism and discussion on a different worldview/metaphysics than my own.
Oh, by the way, yeah, you are right: EIGHT is not a conclusion. As I have stated, the premiss (*your words*) was not in the chain, so you interpreted correctly. But I still think... ONE is the one you want!!!
Well, à la Potty, I think I will just watch from now on.
Touch... your last posts got much better; you talked like someone who actually cares about the discussion. I doubt it is an accurate interpretation of the signs in your head, but it is much better than the bombastic Mr. from before...
Well, I will let Rank take care of you... I am afraid you don't even know WHERE the conversation is or WHAT it is about.
Great article. I already had a lot of respect for BioLogos, but this takes it up a notch. It's so refreshing to see good philosophy in a public forum.
Nice article, Dr. Feser. One line that stuck out to me, though, is the one claiming that math is a body of truths independent of our scientific discoveries. The funny thing is that I've actually met a kook who apparently believed math isn't valid unless we've demonstrated it in the real world through experiments. He was an avid supporter of scientism, if I remember correctly....
But enough of him. I have a question for you, if that's okay. I'm taking a college visit to BGSU and I've read in several places (including right above your BioLogos article) that you were a visiting scholar there. Do you think it's worth going there? I'm seriously considering it, since it's not too far away and I'm certain they would accept me, although I could definitely aim higher.
Very nice article indeed!
One question I have from reading a lot about animal experiments over the last couple of years is this...
A chimp can be trained to recognize a shape or color, for example, and then press the appropriate symbol on a screen to denote what he is seeing. That recognition is often interpreted by researchers as some sort of understanding or even abstraction. In some cases chimps were trained to learn certain symbols and then asked to apply them in new contexts, and in some cases they got them right.
What exactly makes our reasoning different from theirs? I'm often confronted with this kind of argument in discussions with naturalists, and although I think there is a difference, I have a hard time explaining what it is.
Any thoughts?
Anonymous,
Here's a great article that goes into a touch more detail on that specific point.
http://www.godandscience.org/evolution/nim_chimpsky.pdf
@Josh,
Here's a great article that goes into a touch more detail on that specific point.
That article... comedy gold.
-TS
What's wrong with the article, TouchStone? I'm reading it now and I don't see anything really wrong with it.
Professor Feser,
First off, great article. However, it reminded me of a rather mundane question that has been digging at me ever since I read "Aquinas" and "The Last Superstition".
Why did you choose to use the example of a triangle drawn on a seat on a moving school bus? For some inexplicable reason, that repeated example has always nagged at me.
The books, by the way, were amazing, and I bought two extra copies of "Aquinas" for two of my friends, as they get ready to leave for college again - they go to small, traditional Great Books schools, and I figured that the book would come in handy for references and explication. One of them finished it just yesterday, and she told me it was absolutely magnificent. I heartily concur.
Oh, and I would also like to thank you for recommending David Oderberg's work; I'm currently working my way through a massive pile of his articles, as well as his most excellent "Real Essentialism." This is the most fulfilling area of philosophy that I've ever experienced thus far, and I believe I have you to thank for actually getting me to seriously consider (and now adopt) hylemorphism.
What's wrong with the article, TouchStone? I'm reading it now and I don't see anything really wrong with it.
Because *extremely longwinded response that ultimately amounts to 'Touchstone doesn't like it', 'Touchstone gets worked up about all things Christian' and 'Touchstone has an idiosyncratic, wildly flawed philosophy and metaphysics, but so long as he doesn't admit it and runs whenever the flaws in his reasoning are pointed out, he can maybe pretend otherwise'*.
It is funny how, slowly, the biggest critics around here turn into angry people with nothing to say but one-liners.
Dguller might be an exception, for all I saw, but still...
Don't even mention dguller in the same breath as any Gnu.
Gnus are mentally inferior fundamentalists without god-belief.
@Ben
Gnus are mentally inferior fundamentalists with materialistic beliefs.
Fixed it for you, Ben.
I do not give them the right to deny their act of faith so they can consequently refuse the burden of proof that comes with their absurd worldview.
Aloysios - I just took it as an example of an imperfect triangle. A triangle scrawled on the seat of a moving vehicle by a child is likely to be pretty imperfect.
I wonder if anyone could answer a question for me. Sorry to be stupid, but I can't seem to get my head around the final cause. When we say a seed is directed toward becoming a tree and nothing else as its final cause, why can't this directedness be explained by the chemical makeup of the seed? Or take a struck match causing fire. The chemistry of the match and the matchbox is the reason fire is generated and not a bouquet of flowers, no? Where am I going wrong here?
I think that a Thomist should answer that for you, but it seems you are confusing different types of causes or mixing different types of metaphysics.
But like I said, it is better that a Thomist answer it for you n_n
@anon
When we say a seed is directed toward becoming a tree and nothing else as its final cause, why can't this directedness be explained by the chemical makeup of the seed?
That would make no sense, because the two are not substitute explanations but complementary. The final cause of such-and-such biochemical reaction is to bring about an oak tree and not a pink hippopotamus for example.
@Josh
Thanks for the article, Josh.
It helped put a lot of things in perspective and helped answer my question in more ways than one.
Where am I going wrong here?
You're going wrong in thinking that matter can in any way coincide with the final cause. The final cause, or the end toward which something is directed, is the cause of matter; for the final cause is that for the sake of which matter is. Therefore the final cause is prior to all matter-form composites.
Remember, that which is directed cannot be the director. If the seed is directed toward the oak, then the seed cannot be the director toward the oak. And the chemistry of the seed is in an even more passive position; for the chemicals are directed toward being the seed, and the seed is directed toward being the oak; but the director of both seed and chemicals cannot be either of them.
>Fixed it for you, Ben.
>I do not give them the right to deny their act of faith so they can consequently refuse the burden of proof that comes with their absurd worldview.
You say Toe-may-Toe, I say Toe-Mah-Toe, etc.
That is what they do when they say they "lack God belief". It's stupid; what is to stop me from saying "I am a Theist, thus I lack no-God belief"?
But I can allow them their weird incoherent self-definition since they are knuckle dragging Cro-mags either way.
@Nick
What's wrong with the article, TouchStone? I'm reading it now and I don't see anything really wrong with it.
Well, I'm usually in a place to leave more than a two-word summary, but I saw this thread very late last night and, having run into this article more than once previously, once I confirmed it was the same article, I made do with the "comedy" comment before closing up for the night.
And if you look through my comments here and elsewhere, "comedy" -- dismissal by characterizing something as so bad as to be funny or comical -- is a card I play but rarely. But this article is really exceptionally bad. It makes me cringe for the author in the way only the clumsiest of young-earth creationist diatribes do. Dr. Feser's thinking isn't much better in his BioLogos piece (I hope to find time to put some substance behind that later), but it's "seriously wrong", not "comically wrong", in making a complete hash of the science and knowledge we have about animal cognition. He's focused on conceptual processing of abstractions like "triangle", which steers him away from the ditches that Dr. George keeps falling into.
Here's an example from the page where it was left in my PDF reader last night (from page 12):
For instance, tying one’s shoes keeps them more securely on one’s feet. How do people learn to tie shoes? Certainly not by studying knot theory, which falls in the branch of mathematics called topology. Most people probably had someone show them how to do it, and maybe this teacher even held their hands and guided them through it. And then most people engaged in trial and error to repeat the appropriate motions. Eventually they became fully familiar with the pattern and acquired the needed hand-eye skill to execute the steps consistently.
Seriously??? Here she's trying to avoid the admission that animals think, and think via abstraction and meta-representation, in an effort to preserve the uniqueness in *kind*, rather than degree, of human cognitive faculties.
But on this bit: "show them how to do it" is supposedly a matter of training motor reflexes by the guiding hands of the instructor, no thinking needed. Really??? How does she suppose the student, whether it be man or chimp, "engaged in trial and error" without thinking? It's a ridiculous error. The process of "becoming fully familiar with the pattern" *is* the cognitive work of learning: of distinguishing "success" from "error", "over" from "under", "around" from "through", of distilling a sequence of steps as a "recipe" for the task.
-TS
(con't)
I think the quickest, simplest answer to the Anon with the causality question would be this: because to relocate final causes to the chemicals is to commit the homunculus fallacy. The chemicals themselves must be "directed toward" some range of results--otherwise, they could do anything. However, if final causes don't exist in the chemicals, then we have to posit them at a lower level, and so on forever. So final causes have to exist somewhere. And, because reductionism is incoherent (a separate argument), we must endorse holism with regard to substances. The final causes, then, emerge from holistic substances--from the formal cause.
@Nick (con't)
She then doubles down on her confusion thusly:
One might object that this only explains how people learn to solve problems who have been taught. However, the first person to come up with the idea of the bow, learned how to tie it either through trial and error using his senses, or by using his imagination, or through a combination of the two. A little reflection on everyday experience readily turns up other examples of problems that one solves, not by thinking, but by using one’s senses. (One learns how to ride a bicycle by feeling how to pedal and balance, not by studying the principle of the gyroscope.)
This is a nice example of concise self-refutation. It "shows how people learn to solve problems". If one is learning, making distinctions, and processing trial and error, one is *thinking*. Senses are *input*, they are not the processing. By thinking about the effects of various actions when trying to learn to ride a bicycle, one *is* studying the principle of the gyroscope. Perception, or "knowing via sense" -- the awareness of one's percepts -- cannot possibly account for the concepts that form a model that enables us to balance and adjust our movements so as to make our way safely down the road on a bike. Chimps learn
to extract termites from a jar with tree branches, stripped of leaves, as a "tool upgrade" from merely using their (shorter) fingers, and Dr. George supposes this is just using their senses? Well, it *is* using one's senses, the awareness of one's percepts -- they are used to think about problems and seek solutions. We don't even need to address counterfactual hypotheses on the part of the chimp, or conjecture about hypothetical outcomes, yet. The animal, human or otherwise, must conceptually *integrate* those percepts just to "learn by example". How does a kid know he goofed up the latest attempt to tie his shoes, as he's being repeatedly shown how, even with "guiding hands"? He has to think critically about the sense data he has streaming in: that pattern is not what I'm aiming for, and matches a "fail", so better try again and attempt different, better moves that will yield an acceptable match for the pattern I'm seeking, the goal condition I'm aiming for.
Just that kind of discrimination puts the operator far beyond the capabilities of our perceptual processing.
What really makes this comically wrong, and not just badly denialist in preserving a Thomist narrative, is that it is ostensibly at pains to address the science that is available. And the examples she covers in her favor are, one after the other, "own goals" for the other side of the argument. For example, just a couple pages up she points out that Japanese macaques learned to wash their potatoes -- a conceptual abstraction *and* an exercise in applied imagination, the very kind of "art" she wants to reserve for the "cook" who can do new things without a recipe.
-TS
@Michael CP
I apologize if I gave you the wrong impression - I am fully aware that the example itself referred to an imperfectly drawn triangle, a less exemplary instantiation of the concept. What I was asking was why Feser chose that particular example. This isn't an intellectual thing at all, it's just a silly question that's been rattling 'round in my head ever since I saw him use it in his books. It really doesn't matter, to be honest - it was just a goofy thing I was wondering about.
Thanks for the reply all the same, though!
@rank sophist,
I think the quickest, simplest answer to the Anon with the causality question would be this: because to relocate final causes to the chemicals is to commit the homunculus fallacy. The chemicals themselves must be "directed toward" some range of results--otherwise, they could do anything. However, if final causes don't exist in the chemicals, then we have to posit them at a lower level, and so on forever. So final causes have to exist somewhere. And, because reductionism is incoherent (a separate argument), we must endorse holism with regard to substances. The final causes, then, emerge from holistic substances--from the formal cause.
That claim keeps getting made, and seemingly taken for granted, here ("reductionism is incoherent"). Without litigating that in this thread, can you point to somewhere this is argued to your satisfaction?
If this is not just a shibboleth here, how would a new guy in a combox ramp up on "reductionism is incoherent"? It must be pretty strong, because you aren't even offering a positive commendation for holism/essentialism, here, but rather declaring it the "winner by default" due to the perceived inadequacy of reductionism. It reads a lot like an Intelligent Design maneuver I see regularly -- since abiogenesis has no known natural recipe, therefore God.
Anyway, I'm not looking to hash that out here, just interested in a referral to the "already settled case" on reduction that I've seen you refer to repeatedly now.
-TS
(Too bad the combox is too character limited to paste Hofstadter's "Ant Fugue" in here, or in the appropriate thread!)
That claim keeps getting made, and seemingly taken for granted, here ("reductionism is incoherent"). Without litigating that in this thread, can you point to somewhere this is argued to your satisfaction?
If this is not just a shibboleth here, how would a new guy in a combox ramp up on "reductionism is incoherent"? It must be pretty strong, because you aren't even offering a positive commendation for holism/essentialism, here, but rather declaring it the "winner by default" due to the perceived inadequacy of reductionism.
It's difficult to summarize the arguments in a combox, and they've been presented in great detail by contemporary essentialists like David Oderberg. In general, we kind of take it for granted that the case is closed.
Briefly, the very idea that everything is constituted by particles in certain arrangements--for example, "dog-wise" arrangement--is incapable of being consistent. It must necessarily presuppose macroscopic phenomena to retain any coherence at all. Further, even if everything is constituted by particles, those particles themselves would still need to have holistic substantial forms.
Touchstone: If one is learning, making distinctions, and processing trial and error, one is *thinking*.
You seem to have missed the point. Sure, we could define "thought" as anything involving brain-processes, and voilà, animals "think"… but that isn't useful or interesting. George obviously wants to distinguish a particular kind of intellectual activity, and some tricks done by an ape — or a computer — don't require an intellect in the Thomistic sense.
That said, I don't think it was a great paper. The examples she gives do not illustrate clearly the distinction that is key to her argument, nor was it clear why certain differences had to be of kind rather than degree. And there were a lot of typos.
Ah, touchstone is his usual ignorant self, I see.
He confuses behavior with intellect, then cries foul because his little empiricism and the consequent materialistic view are shown to be nothing but a sham. He confuses sense experience with sense data and then ignorantly claims that they are the same. That sounds like behaviorist tosh. Are you a behaviorist, touchstone?
I would think by now, with one naturalist after another trying to hide behind new-found non-reductive physicalism, that you'd get the point about reductionism being incoherent, but your blind faith is unshakable. I am tired of refuting you on this blog. First it's your nonsense about falsificationism as a theory of meaning, which Popper himself rejected, and I showed it to you. Then it's the tiresome rhetoric about the incoherent nominalism that you espouse. We show you how that is unintelligible too, only to see you crawl back into your hole when shown wrong, without even the decency to concede.
You don't even understand the argument the article is making regarding the distinction between trial and error via sense experience and learning via intellectual abstraction and analysis, but because it shows how awful and empty your worldview is (along with the pseudo-explanations your reductionistic appetite desires), you are all upset, throwing around (a) two-liners and, when confronted, (b) a torrent of self-referential and lurid assertions that have absolutely nothing to do with the point the article is making.
The irony is, what George does in that article is demonstrate how not to think falsely about animal experiments the way you do! Your entire claim, along with those made by those who think animals can "think" or have "language", is one giant anthropomorphic fallacy.
You are a troll in disguise. Sorry.
@Rank Sophist
In general, we kind of take it for granted that the case is closed.
I speak for myself here, but I suspect that some may find this applicable to themselves as well...
As one who was a naive reductionist/materialist in the past, who was forced to eventually confront my implicit (unconscious) assumptions only to watch the entirety of reality disintegrate (so to speak) in front of my very eyes into a chaotic blob, I can say this much: it took me a very long time and a lot of reading (of the numerous refutations of materialism/reductionism) to realize how bankrupt that doctrine was, and it took just as much reading to start making sense of the world again, albeit via a better epistemology and metaphysics.
So it's not just that I take it for granted that reductionism is incoherent, I consider it an imperative truth as a means to sustain my sanity.
@rank sophist,
It's difficult to summarize the arguments in a combox, and they've been presented in great detail by contemporary essentialists like David Oderberg. In general, we kind of take it for granted that the case is closed.
All right, good to know. I was *not* looking for the argument in the combox, but a pointer elsewhere -- could be a book that's not available online, for all I know. Oderberg is the name you'd offer if you were going to name one, then. Thank you.
Briefly, the very idea that everything is constituted by particles in certain arrangements--for example, "dog-wise" arrangement--is incapable of being consistent. It must necessarily presuppose macroscopic phenomena to retain any coherence at all. Further, even if everything is constituted by particles, those particles themselves would still need to have holistic substantial forms.
OK, I'm well familiar with that line of thinking.
-TS
Touchstone: taken out of context, those arguments were confusing to me, but once I reacquainted myself with the text, I believe I see where you went wrong.
The author was attempting to demonstrate that there's more underlying our behavior -- and behavior in general -- than the instinct-intellect dichotomy lets on. She was giving an example of how we, and animals, can act purely on trial and error, or with a little application of our imagination, or both, without relying on our thinking capacity; but we and we alone go beyond that. The sections you quoted are confusing, but given the context I don't think it's hard to figure out what she's saying.
Thomas Nagel is another name to look into, to see how your reductionism gets refuted, Touchstone (he's a self-proclaimed wishful atheist too -- the 'wishful' is not sarcastic, by the way; he actually states it). ;-)
@anonymous
The irony is, what George does in that article is demonstrate how not to think falsely about animal experiments the way you do!
Your "Ken Ham" factor is high here, anon. Don't you know, learning about radiometric dating and all that nonsense ("oh the emptiness of that worldview")... it just shows you how not to think FALSELY about the authority of scripture and the six days of creation!
Your entire claim, along with those made by people who think animals can "think" or have "language", is one giant anthropomorphic fallacy.
You have that backwards. The science augurs against your anthropomorphic conceits -- that's why Dr. George will have to retreat farther and farther into the corner her Thomism has painted her into, making ever more contrived restrictions ("Yes, but chimps cannot play CHESS as well as humans, and that makes all the difference... THAT's really what thinking is... now").
Science just plods along and identifies not only the machinery in human brains for functions like percept integration, language processing, concept formation, etc., but isomorphic structures in other animal brains. Meaning that the *divisions* Dr. George wants to impose (problem solving with senses!) break down ever more badly, and the neurophysiology of humans and animals becomes more and more clearly differentiated by degree and adaptation rather than by kind.
That's non-anthropic, traitorous *to* the anthropocentric conceits, long and deeply held.
Eppur si muove, and all that. Science doesn't give a fig for your conceits about your cosmically special "immaterial intellect". It is what it is.
Think about it. What do you suppose Dr. George is trying to protect? An archaic anthropocentric view of humans, humans as ontologically sui generis. Maybe that archaic view is right, after all. But either way, your invective is confused -- the scientific view is the one assaulting our anthropocentrism.
-TS
@Nick
You have to understand that Touchstone aims to be confused and to confuse. I honestly doubt he read the article, and even if he did, I highly doubt he would let anything sink in or make an honest effort to understand it. His usual tactic is to dismiss anything Thomistic and Aristotelian as an assault on modern science (as he did here), completely oblivious to the fact that his metaphysic (materialism) is as old as dirt, going back to the days of the Pre-Socratics.
You see, once he ignores that materialism is a product of ancient times, he can commit the usual fallacy of "anything newer is necessarily better than the older"... But like I said, materialism is even older than its opposite, and apparently Touchstone likes to conflate it with modern science as a means of pretending it has some credibility.
Touchstone, I have a question for you. Can you recommend any sources where you think your views have been sufficiently argued? I do not mind buying and reading books.
@touchstone
Once again, you seem to be confused. Anthropocentrism and the anthropomorphic fallacy are two different things. Attributing anthropomorphic traits to animals is committing that fallacy. That's what you're doing.
You are also committing the same fallacy when you refer to science. Science does not "plod," nor does it "assault," nor does it "not give a fig"... Without the immaterial intellect, in fact, I cannot even see how science would be doable. The human intellect is a presupposition of science.
Anyway, the fact is that it's certain scientists who make certain claims and interpret data according to their metaphysical commitments that seem to *assault* the human intellect. They have, however, been criticized not only by George but by many others, including people who share your own worldview. People like Chomsky have ridiculed the overt claims made by such charlatans, who try to sensationalize their research in order to appeal to impressionable people such as yourself.
Furthermore, identifying structures of the brain does nothing for your cause, since the very argument made concerns the immaterial aspects of thought. Again, your wishful materialistic thinking is all you have to run on here. Only if one accepts the absurd materialistic identity thesis does anything you have said so far actually undermine anything I have said. So unless you can prove materialism for us (and please stop conflating it with science), all you have done is claim something without justification.
Finally, I don't think George is trying to protect anything; rather, she is trying to clarify and correct errors made by people who think like you. If you want to take materialism for granted, that's fine. As someone who once held a similar belief, I'll tell you that you're on the road to intellectual suicide; but please stop with the dishonest attempts at misrepresentation and distortion of both science and an astute philosophical paradigm.
PS. (General question to everyone) Is it just me, or does Touchstone's entire argument assume realism? Without realism (in this case scientific realism), how can any findings be used to assault an ontological paradigm? If nominalism is upheld, then it's just two different conventions, neither of which has any real claim on reality, no?
All right, good to know. I was *not* looking for the argument in the combox, but a pointer elsewhere -- could be a book that's not available online, for all I know. Oderberg is the name you'd offer if you were going to name one, then. Thank you.
No problem. Real Essentialism attacks all forms of anti-essentialism, as well as all kinds of reductionism. If you're looking for solid arguments, I'd start there.
@rank sophist,
No problem. Real Essentialism attacks all forms of anti-essentialism, as well as all kinds of reductionism. If you're looking for solid arguments, I'd start there.
I haven't read it, but I'm familiar with it as a regular catalyst of discussions elsewhere among Thomists and other essentialists. In fact, I think just such a discussion some time back was how I found Dr. Feser's blog.
I see it's available in Kindle format, so that's good.
-TS
Here's a puzzling section from Dr. Feser's first post at Biologos:
In particular, there is nothing in the picture in question or in any other picture that entails any determinate, unambiguous content. And even in the best case there is nothing that could make it a representation of triangles in general as opposed to a representation merely of small, black, isosceles triangles specifically. For the picture, like all pictures, has certain particularizing features -- a specific size and location, black lines as opposed to blue or green ones, an isosceles as opposed to scalene or equilateral shape -- that other things do not have.
Now, as someone who has spent significant time in my career working on software engineering solutions for pattern recognition, chunking and cognition, this seems conspicuously unaware of how neural networks approach phenomena like the triangle picture Dr. Feser provided. In computing, our use of "neural nets" is referred to as such because it is based on the neural architecture of the brain.
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept. It is a "visual pattern" the neural network processes. Distinguishing features are identified, based on the existing "visual vocabulary" of the network, which for humans traces all the way back to the process of visual integration as an infant, and for the software neural net to its training process (a triangle pattern won't associate with anything if there are no stored patterns to associate with).
The image is processed associatively based on the salient visual traits of the image -- color, dark/light contrast borders, "chunking" of regions and parts (like the "lines" and "corners" of the triangle). These are determinable from basic analysis of the image input, prior to any conceptual processing (think of the way an OCR package, which doesn't even need neural nets for character recognition, processes image input).
Associations are made, if they exist, between the "stored patterns" and the stimulus image. Recalled patterns may be more or less perfectly isosceles, or a different color, but affinities between the patterns obtain in a statistical, Bayesian sense. P1 fires associatively against a group of stored patterns that in turn associate with neural configurations we classify as the *concept* of "triangle", and beyond that, the *word* "triangle".
The important point here is that nowhere is any normative, "platonic" or archetypal "triangle" needed. Visually, "triangle" is a cloud, a cluster of spatial features that are statistically related by virtue of the configuration of the "pixels" of the image (or whatever one would like to call the raw percept quanta from our eyes in human processing).
This is how neural nets work. They are associative across huge numbers of addressable nodes that can be "wired" together. In software applications, when we want the program to converge its associations on patterns we specifically care about, we provide triangle images, and provide positive feedback when it fires for "triangle" (or closer intermediates from where it was), training the network for strong associations with that target pattern.
But importantly, if we don't force its training toward "triangle", the system will adapt its associations to distinguish triangles visually from circles, or single straight lines, all by itself. There is no platonic form of triangle needed, but rather just the analyzing, sorting and associative process that naturally coalesces "triangle-ish" image features together, and "square-ish" image features together, and "human face-ish" image features together, and on and on.
-TS
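A toy sketch may make the associative story concrete. Everything below is invented for illustration (the grids, the names, the similarity measure) -- it is not anyone's actual neural-net software -- but it shows how a new stimulus can "fire" against stored patterns purely by statistical affinity, with no archetypal triangle encoded anywhere in the system:

```python
# Hypothetical toy, for illustration only: a new stimulus image "fires"
# against stored pixel patterns purely by statistical affinity.
import math

def affinity(a, b):
    """Cosine similarity between two flattened binary pixel grids."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# 5x5 binary images, flattened row by row (1 = dark pixel).
STORED_TRIANGLE = [0,0,1,0,0, 0,1,0,1,0, 0,1,0,1,0, 1,0,0,0,1, 1,1,1,1,1]
STORED_SQUARE   = [1,1,1,1,1, 1,0,0,0,1, 1,0,0,0,1, 1,0,0,0,1, 1,1,1,1,1]
stored = {"pattern_A": STORED_TRIANGLE, "pattern_B": STORED_SQUARE}

def closest(stimulus):
    """Return the stored pattern the stimulus associates with most strongly."""
    return max(stored, key=lambda name: affinity(stimulus, stored[name]))

# A new, imperfect, thicker triangle-ish image still lands on pattern_A.
NEW_TRIANGLE = [0,0,1,0,0, 0,1,1,1,0, 0,1,0,1,0, 1,1,0,1,1, 1,1,1,1,1]
print(closest(NEW_TRIANGLE))  # prints pattern_A
```

Note that the program never encodes "triangle-ness" as such; the grouping falls out of the pixel overlap between the stored exemplars and the stimulus.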
(con't)
Dr. Feser then goes on to say:
Now what is true of this “best case” sort of symbol is even more true of linguistic symbols. There is nothing in the word “triangle” that determines that it refers to all triangles or to any triangles at all. Its meaning is entirely conventional; that that particular set of shapes (or the sounds we associate with them) have the significance they do is an accident of the history of the English language.
No, there's no reason to suppose that's the case, given what we know about human brains and "software brains" (even as comparatively humble and rudimentary as the latter are). The visual features that distinguish "triangle" from "non-triangle" are not language-bound. We rely on language to discuss the subject (and to discuss *any* subject), but a software daemon that processes visual input into an associative learning funnel via neural networks doesn't need to implement or take heed of language at all.
Triangles obtain without that, and become "concepts" -- clouds and clusters of associations, distinct from other clusters of associations for [what we call 'square'], distinct from yet other clusters of associations for [what we call 'circle'], and on and on. The concepts emerge from the visual phenomena, mechanically, associatively. No "universals" or platonic forms needed or useful.
Given that, I think it's hard to see how this makes any headway:
Even if we regarded them as somehow having a built-in meaning or content, they would not have the universality or determinate content of our concepts, any more than the physical marks making up the word “triangle” or a picture of a triangle do. But then the having of a concept cannot merely be a matter of having a certain material symbol encoded in the brain, even if that is part of what it involves.
The pixels don't contain the meaning or content; they are just pixels, triggers and catalysts for associations in the neural net. And it's a mistake -- a conspicuous one, given our available knowledge on this subject -- to suppose that a concept in the brain can be reified just by encoding one particular pattern/symbol internally. But those are not nearly all the options. As associative neural net learning shows, "triangle-ness" as an abstraction does not and cannot (by definition) rely on a SINGLE symbol encoding. Rather, the abstraction is a cluster of related associations, where "related" just denotes a statistical/Bayesian affinity between the "pixel data" for nodes in the cluster.
-TS
(con't)
He continues:
Nor can it merely be a matter of having a set of material symbols, or a set of material symbols together with certain causal relations to objects and events in the world beyond the brain. For just as with any picture or set of pictures, any set of material elements will be susceptible in principle of alternative interpretations; while at least in many cases, our thoughts are not indeterminate in this way.
This is as close as Dr. Feser gets here to addressing conceptual abstractions as sets of associations, but it's not very close. Here, he dismisses "set[s] of material symbols", or those symbols mapped to their referents, all as discrete symbols, all distinct "atoms" (in the conceptual sense of that term). Those sets do admit of the hazards of ambiguity, but that is not a problem for the association set that the neural net holds as an abstraction, but a problem of contextualizing those abstractions to come to some determinate semantics. It can't always be done; ambiguity and semantic underdetermination are a persistent problem in thinking and language.
All of which boils down to this: look at how neural nets establish associations and create abstractions, just by the nature of their operation, their cumulative and adaptive cycling through new input and storage of (some elements of) past input. Why would any 'universal' need to be posited in some immaterial or metaphysical sense for "triangle"? We can abstract against our abstraction of 'triangle', fuzzy as that visual abstraction must be (as a cloud of associated representations), to a mathematically strict and elegant concept -- a 'pure isosceles', for example -- but this is derivative of the lower-level abstractions. We don't need any 'universal' mode of existence for that, any more than we need it for the visual abstraction of 'triangle'.
If you doubt this, do you suppose that a neural net cannot and will not coalesce clusters of associations around triangle-ish symbols presented to it, as distinct from clusters of associations around square-ish symbols?
-TS
Certainly did not mean to derail with the George article; and it was probably just a bad transcription, Mr. Green.
Visually, "triangle" is a cloud, a cluster of spatial features that are statistically related by virtue of the configuration of the "pixels" of the image (or whatever one would like to call the raw percept quanta from our eyes in human processing).
Infinite regress of resemblance. Keep trying.
@Josh
Infinite regress of resemblance. Keep trying.
No, that's not even a good effort at a critique here. A set of images we would classify as "triangle-ish" versus another set we would classify as "square-ish" are so classified without any dependence on a recursive or regressive means of analysis -- that's a dialectical problem; we're talking about visual pattern analysis.
This is demonstrable. You can write multilayer perceptrons that work in back-propagating neural networks and create these associations, even in "unsupervised" mode, where the system has to sort the images into natural groupings without any preset or pre-learned notion of 'triangle', 'square' or any other pattern.
And it's been done, many times. The relationships between the images are not dependent on pre-existing hierarchies of features. They are just statistical affinities, associations that are made through Bayesian matching. This is what avoids the "well, what classifies *that* feature set, and then what classifies that classifying feature set, and..." type of regress problem.
The human brain, and neural nets built on the same architectural principles, don't work that way. The neural net doesn't need, and can't use, such a visual ontology. It works "bottom up" through an astronomical number of neurons, the associations being "non-semantic" and purely isomorphic. For two images P1 and P2 to be more (or less) associated with each other, we don't need to know about 'triangle' or any other term. We only need the "pixel data", and neuronal connections that accumulate according to the "pixel associations" that obtain between P1 and P2.
If you think about a set of "happy face" icons, and a set of "sad face" icons, the icons in each set may be (will be) quite different from one another in terms of size, aspect ratio, color, contrast, curvature and angles, but if the pixel analysis diverges on "mouth corners up", versus "mouth corners down", that is how the associations will cluster.
Nothing need be known by the system about "mouth", "mouth corners", "up", "down", "face" or any of that. It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
-TS
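The unsupervised-grouping claim can be illustrated with something far humbler than a back-propagating perceptron network: a toy two-means clusterer over flattened pixel grids. The images, names, and seeding below are invented for this sketch; the point is only that two natural groups emerge from pixel statistics alone, with no "triangle" or "square" labels supplied:

```python
# Toy illustration (not real perceptron or back-propagation code): grouping
# binary pixel grids into two clusters purely by pixel statistics.

def sq_distance(a, b):
    """Squared Euclidean distance between two flattened pixel grids."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_means(images, steps=10):
    """A minimal 2-means clusterer, deterministically seeded for the sketch."""
    centers = [list(images[0]), list(images[1])]
    labels = [0] * len(images)
    for _ in range(steps):
        # Assign each image to its nearest cluster center...
        labels = [min((0, 1), key=lambda k: sq_distance(img, centers[k]))
                  for img in images]
        # ...then move each center to the mean of its members.
        for k in (0, 1):
            members = [im for im, lab in zip(images, labels) if lab == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# 5x5 binary images, flattened row by row (1 = dark pixel).
TRI_A = [0,0,1,0,0, 0,1,0,1,0, 0,1,0,1,0, 1,0,0,0,1, 1,1,1,1,1]
TRI_B = [0,0,1,0,0, 0,1,1,1,0, 0,1,0,1,0, 1,1,0,1,1, 1,1,1,1,1]
SQ_A  = [1,1,1,1,1, 1,0,0,0,1, 1,0,0,0,1, 1,0,0,0,1, 1,1,1,1,1]
SQ_B  = [1,1,1,1,0, 1,0,0,1,0, 1,0,0,1,0, 1,1,1,1,0, 0,0,0,0,0]

# The triangle-ish grids land in one cluster, the square-ish in the other.
print(two_means([TRI_A, SQ_A, TRI_B, SQ_B]))  # prints [0, 1, 0, 1]
```

The grouping here is driven entirely by which pixels overlap; nothing in the code names or presupposes shapes.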
Touchstone,
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept. It is a "visual pattern" the neural network processes. Distinguishing features are identified, based on the existing "visual vocabulary" of the network
No perception is unambiguous. If we've learned anything from the post-modernists, or from Wittgenstein, it's that, unless you endorse essentialism, there's no such thing as a non-interpretive perception. Anything that you see, hear, read--all of these are merely your own interpretations. There is no such thing as wholly determinate content.
Consider Wittgenstein's refutation of Hume's imagism. If Hume is correct, then we merely perceive images--almost like photographs--that are then stored in our minds. Wittgenstein gives us the example of the image of a man on a hillside. Is he walking up or sliding down? Nothing in the make-up of the picture can tell us. The content is utterly ambiguous. So it goes with everything. The only explanation is an appeal to irreducible intentionality, and machines simply do not possess it.
The reason a computer is capable of registering certain "determinate" things is simple: we programmed it to do what it does. No matter how complex the system architecture gets, a computer is ultimately as simple as a series of symbols. A computer matches "this symbol" to "that symbol" because that's how it is designed to work. It has no intentionality aside from that which we give it. Therefore, it only has determinate content because we programmed it to recognize certain things in certain ways. It's that simple.
But importantly, if we don't force its training toward "triangle", the system will adapt its associations to distinguish triangles visually from circles, or single straight lines, all by itself.
That's because the lines of code--the series of symbols--that you used to program the system are set up to produce certain effects. "This symbol" refers to (intentionality) "that symbol". Whether or not a computer can recognize shapes is irrelevant. To the computer, the shape is wholly indeterminate without an infusion of intentionality--that is, our programming to tell it that "this" means "that". It's designed in such a way that it can sort images "all by itself", but its ability to sense the similarity was programmed by us. It can't help but find it, because it was designed to do so. Even if there was no similarity, it would be forced to group certain objects because of lower-level coding.
Triangles obtain without that, and become "concepts" -- clouds and clusters of associations, distinct from other clusters of associations for [what we call 'square'], distinct from yet other clusters of associations for [what we call 'circle'], and on and on. The concepts emerge from the visual phenomena, mechanically, associatively. No "universals" or platonic forms needed or useful.
No need to bring up the New Riddle again, I assume.
However, it's important to remember that there are only two options when dealing with a system of signs: either it obtains its meaning from the "outside", or it obtains its meaning via infinite internal self-reference. That is, either we impart determinate meaning, or the system can only ask itself what one ambiguous symbol means by appealing to another. This applies even if it perceives something, because this perception is made and stored with code. Unless each symbol is given a hard, fast, determinate meaning by us, then the machine is left to forever appeal to extra symbols, each of whose meaning is as ambiguous as the last. Of course, this second option makes the entire system vacuous of content. You can thank Jacques Derrida for that argument.
As associative neural net learning shows, "triangle-ness" as an abstraction does not and cannot (by definition) rely on a SINGLE symbol encoding. Rather, the abstraction is a cluster of related associations, where "related" just denotes a statistical/Bayesian affinity between the "pixel data" for nodes in the cluster.
Unless associations are determinate, then you're left with Derrida's vacuous set of infinite reference. But a computer does not have intentionality by its own nature, and so it cannot give a set of symbols determinate content.
Those sets do admit of the hazards of ambiguity, but that is not a problem for the association set that the neural net holds as an abstraction, but a problem of contextualizing those abstractions to come to some determinate semantics. It can't always be done; ambiguity and semantic underdetermination are a persistent problem in thinking and language.
It's impossible without intentionality, Touchstone. Computers only have it because we give determinate meanings to the symbols that run them. If the meanings of the symbols were ambiguous, then there would be no place where the "buck stopped", so to speak; and we'd be left with Derrida.
Nothing need be known by the system about "mouth", "mouth corners", "up", "down", "face" or any of that. It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
You've manifestly failed to realize that the problem extends to the very architecture of the computer doing the matching. The code itself is a series of symbols. These symbols either have determinate content or indeterminate content. If the content is determinate, then it has intentionality--because we put it there. If the content is indeterminate, then you're left with an infinite regress without any content.
Thank you Rank, I was about to ask for a translation of Touchstone's prose, but I see you handled it well.
Out-of-touchstone said... The visual features that distinguish "triangle" from "non-triangle" are not language-bound.
What's that got to do with anything? That has nothing to do with what he's talking about. As usual, you don't even get the point. You're in such a rush to prove how wrong Feser must be that you never stop to actually figure out what he's saying in the first place.
Might also call on Glenn for a re-link to that paper in the other thread regarding computer programming and its relation to Aristotelian logic. If nothing else, it will give Touchstone something else to chuckle at while writing obfuscatory sentences. Perhaps my 'perceptrons' are malfunctioning.
Out-of-touchstone said... The human brain, and neural nets built on the same architectural principles, don't work that way
Uh, right. So if the human brain doesn't work that way, and yet the human mind does, then the mind cannot simply be equivalent to the brain. Congratulations, you just proved it yourself.
It's just brute image feature matching, matching without knowing or needing to know what any of those terms represent.
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
No problem, Josh. Touchstone doesn't seem to be familiar with the problems inherent to intentionality and semiotics. Because it's impossible for a sign to have its own wholly determinate content -- nothing about "this symbol", in itself, tells the whole story -- it must be placed there from the outside. Either this "outside" is wholly determinate (as with intentionality) or it is indeterminate, in which case we must appeal to further signs forever.
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
These same problems have plagued semiotics for decades. Crazies like Derrida bite the bullet on intentionality and tell us that human thoughts work as signs, too. If Touchstone did that, then all meaning would vanish in an instant.
@Brian
Touchstone, I have a question for you. Can you recommend any sources where you think your views have been sufficiently argued? I do not mind buying and reading books.
I don't know which views you are referring to -- my views on 'universals' and theory of mind, per this thread, perhaps? Or my views on (scientific) epistemology, more broadly? My atheistic conclusions?
Give me a little more direction, please.
That said, here are the books that come to mind on reading your request:
1. Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter
2. The Methodology of Scientific Research Programmes by Imre Lakatos
3. Arguing About Gods by Graham Oppy
4. The Third Chimpanzee by Jared Diamond
5. A Universe From Nothing, by Lawrence Krauss
-TS
George R and Anon -- I see where I was going wrong. Thanks, guys.
A Universe From Nothing, by Lawrence Krauss
LOL
I read Godel, Escher, Bach. It was very well-written, though that's not to say I endorse Hofstadter's conclusions.
I have not read Hofstadter's book, but having heard him in an interview I can't say I was in any way impressed with what he had to say.
Lakatos is probably the best reference on that list. He tried to salvage whatever was left from the ruins of positivism (although it's debatable how successful he was, especially given the destructive force of Feyerabend's work on the myth of scientism).
Oppy is not that great either. Craig has refuted many of his objections to theism and has shown how faulty his thinking is. One common theme in his work is his misunderstanding and consequent caricaturing of arguments, which makes his books/articles even weaker.
The third-chimpanzee book I've not read, nor have I ever heard of the author.
The universe-from-nothing one is just LOL (to echo the sentiments of another anon user). I don't know what's more awful: how Krauss tarnishes science with his sophisms and misrepresentation of cosmological theory (much in line with what Popper called promissory materialism), or his inability to do any kind of philosophy... This is the same guy who claimed in a debate that 2 + 2 = 5!
Anon7:52: Jared Diamond is that guy who studies civilizations and their rise and fall. I actually watched his TED talk recently: http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
I haven't read any of his books, though.
@rank sophist,
No perception is unambiguous.
Or rather, they are all ambiguous. Or better yet, we agree that they are pre-interpreted as percepts, which is what I was saying in the comment you responded to. I said the image is "determinate, unambiguous content, as a percept", and the point of that contradiction of Feser (and perhaps of you here) is that we don't have information only at the post-interpretive layer; we also have "raw input" at the pre-interpretive layer. As the image comes into our eyes, it is determinate content AS raw visual input. That's a key distinction, because it is at this layer, prior to and without human abstraction, that we can analyze (with computers, if we want) this raw input in such a way that categories, classes and groups automatically emerge in the neural network. This is just to say that there are objective features of these images that have statistical feature affinities with each other, but which are not (yet) attached to any linguistic concepts.
If we've learned anything from the post-modernists, or from Wittgenstein, it's that, unless you endorse essentialism, there's no such thing as a non-interpretive perception.
You're confusing PERCEPT with PERCEPTION. Easy to do, but these are not the same thing. By 'percept', I am referring to our raw sense-data, the 'input pixels' we start with prior to any interpretation, and which an interpretation must have to operate on. There are, and must be, non-interpretive *percepts*, else you have nothing to interpret. Your Wittgenstein reference is problematic on its own, but that's not relevant to my point about *percepts* as raw input.
Anything that you see, hear, read--all of these are merely your own interpretations. There is no such thing as wholly determinate content.
That can't be true, transcendentally. There must be some raw input into the senses which we take as the starting point for the process of contextualization and interpretation. When my computer program, which interprets visual input for the purposes of identifying English letters and numbers gets a new item to process, it's a "brute image" -- it's just a grid of pixels (quantized data so that the computer can address it for interpretation). In a human, or a chimp, the optic nerves terminate in the brain (at the LGNs in the thalamus) and provide raw visual stimuli to the neural net, whereupon all sorts of integrative and associative interpreting begins across the neural mesh.
Again this is important to understand because the pre-interpretive features of our input data provide objective points of differentiation and classification -- the basis for meaning. That is nothing more than to note that what we call "triangle-ish" images and "square-ish" images are not so called by caprice; the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
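To make the "raw input" idea concrete, here's a toy sketch (hypothetical data, nothing from my actual character-recognition program): a "brute image" is just a quantized grid, and it already has measurable, pre-interpretive features before any label is attached.

```python
# A "brute image": raw visual input quantized into a grid of pixels.
# Nothing here is interpreted yet -- it is just addressable data
# that downstream processing can operate on.
raw_input = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
]

# The grid already has objective, pre-interpretive features we can
# measure without attaching any linguistic concept to them, e.g. the
# number of "on" pixels per row.
row_densities = [sum(row) for row in raw_input]
print(row_densities)  # [2, 2, 4, 2]
```

Those densities are objective facts about the input, available to any classifier before anything "means" anything.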
-TS
(con't)
Consider Wittgenstein's refutation of Hume's imagism. If Hume is correct, then we merely perceive images--almost like photographs--that are then stored in our minds. Wittgenstein gives us the example of the image of a man on a hillside. Is he walking up or sliding down? Nothing in the make-up of the picture can tell us.
Oy, more reliance on philosophers for subjects like visual perception and cognition. Cells in the retina and the thalamus fire in response to movement, changes in light/dark, and velocity. Movement fires different cells for horizontal activity and vertical activity. So before any interpretation, which happens in the visual cortex, the brain receives not just spatial-chromatic ("picture") information, but signal fires for the dynamics of motion direction, velocity, and other changes. These are discrete signals themselves, like the "picture" data, and not interpretation itself. Just fodder for integration as the first step of that process in the PVC.
Which is just to say that Wittgenstein, bless his heart, is talking out his behind here, from a position of thorough ignorance of what is going on in his own brain, physically. He can take some comfort in the fact that he was hardly more equipped by science to speak on the matter than was Aquinas, but when we read something like that NOW, it's just obsolete as a context for thinking about this subject. The brain's "pictures" DO have motion cues that come with them, prior to any interpretation, for direction and velocity. This is how sight works, before the visual cortex even gets hold of it. The "sliding down" vs "walking up" interpretations are NOT on an even footing, and BEGIN that way for the brain, as our visual sense machinery is constantly streaming motion cues (and other cues) in along with "image" data.
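Purely as a cartoon (nothing like real retinal circuitry, just an illustration of the principle): discrete directional cues can be read off raw frame data before any "interpretation" happens downstream.

```python
# Compare corresponding tracked points in two successive "frames" and
# emit discrete directional signals, loosely analogous to
# direction-selective cells firing before cortical interpretation.
def motion_cues(frame_a, frame_b):
    cues = []
    for (x1, y1), (x2, y2) in zip(frame_a, frame_b):
        if y2 > y1:
            cues.append("up")
        elif y2 < y1:
            cues.append("down")
        if x2 > x1:
            cues.append("right")
        elif x2 < x1:
            cues.append("left")
    return cues

# A figure whose tracked points all move upward between frames:
frame1 = [(1, 1), (2, 1)]
frame2 = [(1, 2), (2, 2)]
print(motion_cues(frame1, frame2))  # ['up', 'up']
```

So "walking up" and "sliding down" arrive with different raw cue streams attached; they don't start on an even footing.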
-TS
@rank sophist (con't)
The reason a computer is capable of registering certain "determinate" things is simple: we programmed it to do what it does. No matter how complex the system architecture gets, a computer is ultimately as simple as a series of symbols. A computer matches "this symbol" to "that symbol" because that's how it is designed to work. It has no intentionality aside from that which we give it. Therefore, it only has determinate content because we programmed it to recognize certain things in certain ways. It's that simple.
I'm just noting, as things roll on, how often this is "simple" for you. ;-)
In the case of unsupervised learning, the neural net doesn't have recognition of certain things wired into it -- that's what "unsupervised" indicates in the terminology. Rather, the system is programmed to "recognize", or more precisely, to build associations and to maintain and refine them through continuous feedback. So that means it can and will group "triangle-ish" images and "square-ish" images (if its mode of recognition is visual/image-based) without being told to look for 'triangle' or told what a 'triangle' is. The system doesn't speak a language or use symbols that way, but it "understands" triangle-ishness and square-ishness such that it can predictably process images that we would say belong in this pile or that (or neither) correctly. It has demonstrable, pre-linguistic knowledge of the concepts: give it a test, and it can distinguish between those types. Add in five-pointed stars, and it will learn to associate star-ish patterns together, without ever knowing, in a labeled or linguistic sense, what a "star" is.
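Here's a toy sketch of the grouping behavior (a crude distance-based grouper over made-up feature vectors, not a real back-prop network): nothing labeled "triangle" is ever supplied, yet piles emerge from similarity alone.

```python
# Toy unsupervised grouping: feature vectors for images (imagine
# something like [corner count estimate, edge-length variance])
# cluster by similarity alone -- no label is ever supplied.
def group_by_similarity(points, threshold=1.5):
    groups = []
    for p in points:
        for g in groups:
            # Join the first group whose centroid is close enough.
            cx = sum(q[0] for q in g) / len(g)
            cy = sum(q[1] for q in g) / len(g)
            if ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 < threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Hypothetical features: triangle-ish images near (3, 0),
# square-ish images near (4, 0).
features = [(3.0, 0.1), (3.1, 0.0), (4.0, 0.1), (2.9, 0.2), (4.1, 0.0)]
groups = group_by_similarity(features, threshold=0.5)
print(len(groups))  # 2: a "triangle-ish" pile and a "square-ish" pile
```

The program was never told what the piles mean; the piles fall out of the statistical affinities in the data.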
But hold on, you say -- it only does that because we programmed it to recognize and categorize generally. Yes, of course, but so what? We are programmed by our environment to recognize and categorize. These are adaptations with enormous advantages for the evolving population. If that point doesn't suffice to dismiss your "it's programmed" complaint, then it seems the question just shifts to skepticism about evolution and emergence.
Which is fine, and I need to do nothing more than note that if that's the case, all the worse for Dr. Feser's article. He has completely misunderstood the basis for human pattern recognition, visual integration and typing. He can then say, well, even if all that science-y stuff is right, it still takes God to make that happen, a telos. Fine, but it makes the article a throw-away, an exercise in mistakes about cognition and missing the point, the real basis for his superstitions.
-TS
@Nick Corrado
Anon7:52: Jared Diamond is that guy who studies civilizations and their rise and fall. I actually watched his TED talk recently: http://www.ted.com/talks/jared_diamond_on_why_societies_collapse.html
I haven't read any of his books, though.
His "claim to fame" book, and the way I, like most of his fans, became aware of him, is Guns, Germs, and Steel. That's a book I highly recommend, too, but it's not one that topically covers the epistemology/worldview ground I was going for in my list. His Collapse, the book behind the talk you link to, was by far his poorest offering, in my opinion.
@rank sophist
That's because the lines of code--the series of symbols--that you used to program the system are set up to produce certain effects. "This symbol" refers to (intentionality) "that symbol". Whether or not a computer can recognize shapes is irrelevant. To the computer, the shape is wholly indeterminate without an infusion of intentionality--that is, our programming to tell it that "this" means "that". It's designed in such a way that it can sort images "all by itself", but its ability to sense the similarity was programmed by us. It can't help but find it, because it was designed to do so. Even if there were no similarity, it would be forced to group certain objects because of lower-level coding.
No: if you are coding for similarity as the basis for grouping, "lower-level coding" won't do the grouping. Grouping is a function of a similarity test.
Humans are programmed by the environment to do similarity testing, and make associations based on that, just like our associative back-prop neural net software programs do -- we developed the software architectures from what we've learned about the human brain, so the software design is informed by the hardwire design bequeathed to us by evolution.
But that just makes humans a fantastically more scaled out version of machines we build. But machines all the same, in the sense of deterministic finite automata.
This is important for understanding the poverty of Dr. Feser's article. Even if I stipulate, arguendo, that some kind of Cosmic Mind or Supernatural Deity is required for "bootstrapping" the environment with a telos that is sufficient to program humans and other animals with the "wetware" to make associations and develop them in arbitrarily complex ways through recombinant patterns of trillions of neurons, to recognize, categorize, contextualize, etc., THIS DOESN'T HELP DR. FESER'S ARTICLE ONE BIT. For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
Not God as the source of any telos now manifest in our brains working as they do, but rather Dr. Feser's hylemorphic dualism. If you complain that any explanation of the mechanisms of recognition, chunking, contextualization, abstraction and conceptualization just points back to a creator, well, you've thrown Dr. Feser under the bus, because that mechanism as a mechanism does not need and cannot use the dualistic ontology that Dr. Feser is arguing for with his triangle example.
So, would you agree, then, that appealing to a fundamental "prime telos", so to speak, as what you call 'God' abandons Dr. Feser's appeal to the necessity of immaterial intellect for THAT individual for the basic task of identifying/recognizing 'triangle'?
Humans are good pattern recognizers, and I see this pattern a lot:
A: We need an immaterial intellect for conceptualization and understanding.
B: No we don't. Look at this program...
A: Well, that just proves something is needed to program us or the machine for conceptualization or understanding.
B: But the question was about the need for immaterial intellect to *perform* the task of conceptualization and understanding!
A: You still need a Cosmic Designer to have that mechanism come to be.
Dr. Feser is saying an immaterial intellect is needed to perform the local act of interpretive meaning. Not God doing it, but the individual's 'immaterial intellect'.
You are playing the role of B here, shifting the 'immaterial intellect' away from the individual, and appealing to a Great Cosmic Immaterial Intellect which has created the mechanisms the brain uses to interpret and establish meaning. That's a good move in practical terms as a "ground shift", but it leaves Dr. Feser's claims in the ditch, unneeded and frivolous when you do that.
-TS
Nick Corrado said... I read Godel, Escher, Bach. It was very well-written, though that's not to say I endorse Hofstadter's conclusions.
What, you don't believe that recursion is 𝓜𝓪𝓰𝓲𝓬𝓪𝓵?? GEB is amusing, but the philosophy is, as you might expect, ignorant fluff. I won't LOL at Krauss, but only because everyone else already has and he doesn't deserve the attention. I have new sympathy for Out-of-touchstone, though. He couldn't be expected to know any philosophy when he's been fed all this nonsense by people who should know better. If he hangs around here, there's hope he will at least pick up some real understanding about it.
Seriously, I feel like Touch is talking about something irrelevant to Thomistic ideas.
But well, let's just see what happens next.
Stupid blogger. When you post a comment, the letters show up, but not on the main post (at least for me). That should have said, "What, you don't believe recursion is ~ṁâģïçãļ~?"
ReplyDeleteOut-of-touchstone said... Your Wittgenstein reference is problematic on its own, but that's not relevant to my point about *percepts* as raw input.
No, your point about percepts is irrelevant. That's not what anyone is talking about.
There must be some raw input into the senses which we take as the starting point for the process of contextualization and interpretation.
Yes, of course. It's the meaning, the interpretation where the indeterminacy comes in.
the pre-interpretive features of our input data provide objective points of differentiation and classification -- the basis for meaning.
Again, nobody ever said they weren't necessary. Just that they aren't sufficient.
the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
See, when you make silly claims about not needing forms and then say that there are "features that distinguish them as groups", it just goes to show that you do not understand what forms are about in the first place. Why not make an effort to find out?
You don't need forms.... what distinguish them from one another is OF COURSE.... the number of poneys related to that IMAGE!!!!!
after all, reality is just poneys all the way down, and if you get them small enough .... yada yada ... they become dots and the poneys in your head can see the poneys that fly out of the other poneys, creating a pattern that is not a poney, even though is made really just of poneys, therefore proofing once and for all, that everything is a poney ... deep down
I need my pills...
@rank sophist,
No need to bring up the New Riddle again, I assume.
Well, at least it was *relevant* before, even if you misunderstood the problem. ;-)
However, it's important to remember that there are only two options when dealing with a system of signs: either it obtains its meaning from the "outside", or it obtains its meaning via infinite internal self-reference.
Meaning only obtains "inside", as a set of associations. But they aren't "semantical" in a fundamental sense. That is, meaning is not derived from anything more fundamental as a "source of meaning", something semantically transcendental to it in the human brain. Instead, it's a hugely complex graph of neural associations. A useful bit of pedagogy in getting this point is the definition of English words. The definition of a word in English is given not in an appeal to more fundamental (transcendental) units of meaning, but just in terms of other English words. The definition points to other concepts that it is associated with.
So it's not regressive, and doesn't collapse into a vicious regress demanding ever more "fundamental" bases for meaning. It's a peer graph, and a huge one, too, in the case of English.
It's all just circular, then??? There's no "meaning" in that, surely! In a roundabout way that's true: every word is defined just in terms of other words. But this is how meaning obtains (there are more and better works on this in a technical sense than Hofstadter, but if you've not read Hofstadter and this seems problematic to you, you should read him); not by appeals to more fundamental elements of meaning, or a superstitious appeal to an 'immaterial intellect', but as the network of subject-object relationships, the graph of arcs of meaning between nodes.
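A toy model of that point (a hypothetical five-word dictionary, obviously): every word is "defined" only by arcs to other words, cycles included, and the web is still perfectly navigable.

```python
# A toy "dictionary" where every word is defined only via other words --
# a peer graph of associations, with no more fundamental layer beneath.
definitions = {
    "big":    ["large"],
    "large":  ["great", "size"],
    "great":  ["big"],          # circular, yet still informative
    "size":   ["extent"],
    "extent": ["size"],
}

def reachable(word, graph):
    """All words associated with `word` by following definition arcs."""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        for nxt in graph.get(w, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("big", definitions)))
# ['big', 'extent', 'great', 'large', 'size']
```

The traversal never bottoms out in some transcendental unit of meaning; it just walks the peer graph, cycles and all, without any regress.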
This does NOT mean, however, that "outside" doesn't matter. "Inside" itself is meaningless without the accompanying concept of "outside", remember. Without the outside, there are no referents for any symbols we might establish. The "books" are kept internally, but the "arcs" of meaning are predicated on our inside interacting with the outside. Meaning as "internal only", no outside needed, is incoherent. For meaning to cohere, we must have "outside" referents for our internal associations.
-TS
Out-of-touchstone said... Oy, more reliance on philosophers for subjects like visual perception and cognition.
Did I mention that this is not about visual perception??
signal fires for dynamics of motion direction, velocity, and other changes. These are discrete signals themselves, like the "picture" data, and not interpretation itself.
Oh geez, he's talking about a painting, not watching the guy fall down the hill on his backside in real time.
The system doesn't speak a language or use symbols that way, but it "understands" triangle-ishness and square-ishness such that it can predictably process images that we would say belong in this pile or that (or neither) correctly.
You put "understand" in quotes because that isn't real understanding. And if something analogous were all there was to human meaning and intention, then it would not be real understanding either. So either there is something more going on, or else you are seriously claiming to be an eliminativist about understanding.
He has completely misunderstood the basis for human pattern recognition, visual integration and typing. He can then say, well, even if all that science-y stuff is right, it still takes God to make that happen, a telos.
Sigh. "If that science-y stuff is right"??? Did you really say that with a straight face, or do you know deep down what a strawman that is?
Out-of-touchstone: Well, I don't understand what Feser is saying, but I know something about visual image processing, so I'll just talk about that instead. And if Feser disagrees with anything I believe then he's a big anti-science dummy head!!!
That is, either we impart determinate meaning, or the system can only ask itself what one ambiguous symbol means by appealing to another.
Yes, but "appealing to another" decreases the ambiguity, and establishes meaning! That process of association is the process of creating meaning, as it provides differentiation and specification out of ambiguity. Every association A->[B,C] leaves out [D,E]. Even if A, B, C and D are perfectly ambiguous as stand-alone entities, by creating associations like A->[B,C] we have created some new meaning in the network. For now we know that if we have A, it activates [B,C] BUT NOT D. This is what meaning is: rules for "this", but not "that".
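Sketched minimally (hypothetical nodes, of course): even with wholly uninterpreted tokens, the arcs alone encode a rule for "this, but not that".

```python
# A tiny association network: the tokens mean nothing by themselves,
# but the arcs differentiate -- A reaches B and C, and NOT D or E.
associations = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A", "E"},
}

def activates(source, target, net):
    """True if `source` has an arc to `target`."""
    return target in net.get(source, set())

print(activates("A", "B", associations))  # True
print(activates("A", "D", associations))  # False
```

The differentiation is in the structure of the arcs, not in any intrinsic content of the nodes.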
This applies even if it perceives something, because this perception is made and stored with code. Unless each symbol is given a hard, fast, determinate meaning by us, then the machine is left to forever appeal to extra symbols, each of whose meaning is as ambiguous as the last. Of course, this second option makes the entire system vacuous of content. You can thank Jacques Derrida for that argument.
It's only vacuous if one requires that "meaning" be understood in a magical, supernatural way, something apart from the arcs and nodes that make up meaning and context in a network. Derrida is one who saw this with acute clarity. Derrida understood that "pure language" entails terms necessarily including multiple senses that are irreducible to a single sense that provides the normative, "proper" meaning. This is an artifact of the graph, the mesh of associations we maintain that constitute meaning and sense for us. Computational linguistics researchers and AI people look at that and nod -- this is what they've understood as manifest from their computational machinery all along.
It doesn't make language empty of meaning, for Derrida or anyone else. It is "babelized", to use a term I think he came up with for this idea: internally translated, overloaded *and* idiomatic. It's "impure" because it's associative, and associative in a fuzzy, graphed way (neural networks!), against the 'pure' idea of language and meaning as transcendental to any network in which it might be expressed -- the immaterialist superstition, in other words.
-TS
I have new sympathy for Out-of-touchstone, though. He couldn't be expected to know any philosophy when he's been fed all this nonsense by people who should know better. If he hangs around here, there's hope he will at least pick up some real understanding about it.
Fair enough. But for that to happen he needs to change his attitude and show a desire to learn. Dropping in to unload the usual torrent of lurid yet incoherent assertions only to be refuted, while remaining completely oblivious to said fact, isn't going to help him.
It takes a lot of will power and a lot of reflection to free yourself from materialistic assumptions, especially when they are naive/unconscious (I speak for myself here). I have seen no effort on his part to even try to understand much of what I and many others have told him so far.
Compound that with the fact that any time he stops by, the thread is usually derailed into the "Touchstone Show", and we have ourselves a little problem.
Touch is an asshole ... basically saying ?
Well, he would be someone nice to read and talk with, if it weren't for the fact that all he cares about is ripping at other people's ideas with assertions and intimidation.
But that is web atheist behavior for you ... I still hope it is the internet to blame for that.
@Eduardo,
Seriously, I feel like Touch is talking about something irrelevant to Thomistic ideas.
I think that's basically right. The Thomistic concepts, as I read them in Dr. Feser's posts, and in a much more extreme way in Dr. George's gerrymandering for human intellect as ontologically peerless (that is, dualist), only serve to confound and obfuscate the questions they get applied to.
Perhaps it's better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment. Thomism is not a falsifiable framework or set of propositions, so it's not a matter of it being "false" or "wrong" in that context. "True" and "false" aren't applicable there. Rather, it's just a kind of "fog" that gets layered onto whatever is the subject of the day.
A recurring theme in my reading of Dr. Feser's posts is that those treatments are just inert or frivolous with respect to subjects which, like cognition or semantic processing, avail of other heuristics for deriving knowledge and insight.
What we do know about human cognition, pattern matching, visual integration, and associative networks as the substrate for meaning is just conspicuously absent in Dr. Feser's exposition. A philosopher is not necessarily a scientist, but science is a resource a careful philosopher should at least take passing note of on matters like this.
-TS
Out-of-touchstone said... we developed the software architectures from what we've learned about the human brain, so the software design is informed by the hardwire design bequeathed to us by evolution.
Informed by design? Congrats, you just appealed to formal and final causes. But perhaps that was only "a way of speaking", in which case, feel free to make the necessary point without using any philosophically equivalent synonyms.
THIS DOESN'T HELP DR. FESER'S ARTICLE ONE BIT. For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
There is no "intervention", which suggests to me that again you are way off base with what Feser is saying in the first place. But you are wrong even apart from that, because you are considering only the "outside" effects. It is entirely possible for God to program the universe to make creatures capable of acting in interpretive-like ways without having any immaterial intellects themselves. (Or on your view, this apparently all just traces back to the Big Bang, because the Big Bang is a thinking thing... or something). But if a creature actually understands or interprets something on its own merits, then yes, that can only be because it possesses an intellect. Nothing you have said refutes this (because you are not even actually addressing it). Even if your alternative worked, it would at most be an alternative, not a rebuttal.
Meaning only obtains "inside", as a set of associations. But they aren't "semantical" in a fundamental sense.
So "meaning" is not "semantic"? And that's not a problem? I guess you are an eliminativist.
It's all just circular, then??? There's no "meaning" in that, surely! In a roundabout way that's true, every word is defined just in terms of other words.
But of course that obviously isn't true. In fact, if you eliminate meaning and understanding (the real things, not the simulated outside imitations), then no argument you offer can be "true". You can't even claim it is "statistically clustered in a way likely to be true" because you cannot show that our pseudo-thoughts cluster in a suitable way. So there is no point listening to anything you say. It's just a network of nodal connections, any relationship to the truth is purely coincidental.
I would love for you to demonstrate what you say instead of just asserting it.
Now, about your falsificationism... well, it might not be falsifiable by your particular epistemological theory, but where are the arguments for why yours is the only one that works???
You have to conclude that your "Performative model" system is the only one that works.... and "works" has to be defined by that same system, of course.
Another thing... he wasn't really talking about cognition; cognition was just a related topic. What he was talking about was the ontology of what we were seeing. Your whole talk about how we come to know stuff is only important to the matter when you project your metaphysical beliefs onto what is going on in the brain... which is just pure assertion.
So again... back to your nominalism. Start with what is really out there, skip the mechanisms in the brain (you can talk about them in general terms because they are not part of the discussion); then somehow conclude that whatever is outside the brain is what we think it is. Because seriously... when you say that all the "data" coming from stuff outside is ambiguous, it just goes through my head that you will someday confuse a chair with an elephant. I mean you will confuse the objects, not their names or images; you are going to see the elephant instead of the chair, because it is really all ambiguous in the beginning.
Perhaps its better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment. Thomism is not a falsifiable framework or set of propositions, so it's not a matter of it being "false" or "wrong" in that context. "True" and "false" aren't applicable, there. Rather, it's just a kind of "fog" that gets layered onto whatever is the subject of the day.
I'm so freaking sick of this positivistic CRAP. I wish we could all just agree that anyone who doesn't get past this nonsense is not worth engaging with. All it does is deflect from worthy opponents.
Immaterial conception is not a 'God of the gaps' "hypothesis." None of what you have said here refutes or even really applies to the substance of Feser's article. Why not quote from it, and then contradict a quote with reasons supporting?
It's only vacuous if only requires that "meaning" be understood in a magical, supernatural way
What is vacuous is your attempt to fabricate meaning out of reductionistic, incoherent materialist concepts that do no justice to the splendor of reality but are rather a finely chopped-up remnant of its fullness. We're talking about a cat, and in its place you offer in your definition a broken bone. You then proceed to commit ad hominem fallacies against an entire metaphysical paradigm in a pathetic attempt to propagate what has already been shown to you to be false. You're still in the middle of an infinite regress problem, and everyone recognizes it except you. Obfuscating the issue with irrelevant, unnecessary verbiage, while simultaneously misunderstanding the other side, is either total ignorance or intellectual dishonesty!
What we're telling you is that even the "babelized" claim to language itself would require determinate meaning in order to make sense. We are all aware of the contextual interpretations of language and we all heard the usual relativism tosh. The fact is you either have relativism and infinite regress into absurdity (epistemological nihilism) or you recognize the necessity of The Absolute.
And please stop with the strawmen against Theism, because you're starting to sound very juvenile.
@Eduardo
You don't need forms.... what distinguish them from one another is OF COURSE.... the number of poneys related to that IMAGE!!!!!
after all, reality is just poneys all the way down, and if you get them small enough .... yada yada ... they become dots and the poneys in your head can see the poneys that fly out of the other poneys, creating a pattern that is not a poney, even though is made really just of poneys, therefore proofing once and for all, that everything is a poney ... deep down
Seriously, you come up with the funniest stuff.
This has to be as good as the "natural selection the feral spirit of evolution".
LOL, man!
@Anon,
EXACTLY. The computer/brain cannot explain REPRESENTING. If you are telling me that you do not actually engage in representations, or meaning, or knowing, that all your mind does is cluster around statistical groupings -- well, then actually that would explain a lot.
I'm saying that is a distinction without a difference -- you can't "feel" your brain making these associations and activating these networks of connections, because there are no nerves in the brain to give you awareness of what is actually going on physically in your head, so you suppose it's "magic", a ghost in your machine.
Humans have machinery that enables and develops representational thinking, and uniquely as a matter of degree and depth, if not necessarily of kind with respect to other animals or machines, meta-representational thinking. That isn't controversial.
What is controversial is whether that representation is real, reified in nature or not.
We can suppose some "immaterial intellect" is required for representation or meta-representation to occur. But we can similarly suppose "immaterial particle faeries" attend to the actions and behaviors of elementary particles, moment by moment, keeping everything moving as it should. There's no falsifying it, there's only the realization that such conjectures do not add anything to our knowledge or models of the world around us.
-TS
Oh, dear God!
He started with his "it's neither 'true' nor 'false'" rubbish now...
Expect an influx of nominalism-meets-misinterpreted falsification-meets-positivism-meets-materialism rant.
I honestly don't have the patience for this, and it speaks precisely to my earlier post about his inability and unwillingness to learn and understand.
@Josh,
I'm so freaking sick of this positivistic CRAP. I wish we could all just agree that anyone who doesn't get past this nonsense is not worth engaging with. All it does is deflect from worthy opponents.
This is not positivism. You can dismiss it as you like, but if you are dismissing it on the basis of its positivism, you're not following what's being said.
Immaterial conception is not a 'God of the gaps' "hypothesis." None of what you have said here refutes or even really applies to the substance of Feser's article. Why not quote from it, and then contradict a quote with reasons supporting?
You can see above that I, unlike anybody else here, have quoted Dr. Feser's article numerous times, and at length.
See my posts with these timestamps for the very thing you're asking for, already provided by me, not provided by anyone else in this thread:
August 18, 2012 9:04 PM
August 18, 2012 9:35 PM
August 18, 2012 9:35 PM (There are two separate comments with the same minute stamp).
Similar engagement with Dr. George's article, which you offered for consideration, if I recall, occurs upthread of that.
-TS
Out-of-touchstone said... Perhaps it's better to say that the subjects Dr. Feser addresses here aren't aided by any Thomistic treatment.
ReplyDeleteSo Feser isn't just WRONG, he's STUPID. Ok. And your reaction is, man, everyone here is saying stuff that makes so little sense to me they must all be idiots! It never occurred to you at any point to think, gee, maybe I'm not understanding what their point actually is, perhaps I should ask some questions? Because you're some kind of super-genius, naturally. Do you honestly expect anyone to take you seriously?
Yeah, watching this conversation, a Family Guy paraphrase comes to mind. "I suppose I should find all this annoying, but really I'm just bored as hell."
ReplyDeleteTS does this same act in each thread. A lot of mangled, tortured understandings, lecturing, and completely ignoring anything that causes trouble for his position. This time he seems to not even understand what he's criticizing. I'm sure he'll decide it's all everyone else's fault, not his. After all, he's a programmer, unlike... well, actually we've got several programmers here.
@touchstone
This is not positivism. You can dismiss it as you like, but if you are dismissing it on the basis of its positivism, you're not following what's being said.
Of course it's positivism. All you do is take the positivist mentality, simply replace 'verificationism' with 'falsificationism', and then proceed to assert in your usual bombastic, yet obscurantist tone that statements are not meaningful because they cannot be said to be either "true" or "false" (I would like to point out the irony of putting the word 'truth' in quotes, since by implication it undermines its very value). Apart from the fact that I've shown you that such claims are worthless, due to the insurmountable problems falsificationism faces, I have also provided you with a quote from Popper himself warning (better yet, disciplining) anyone who dares to abuse his notion of falsificationism by pretending that it's something other than what it truly is. Since you continue to espouse this ridiculous view that Popper warns against, here is the quote from the Logic of Scientific Discovery once again:
"Note that I suggest falsifiability as a criterion of demarcation, but not of meaning. Note,
moreover, that I have already (section 4) sharply criticized the use of the idea of meaning
as a criterion of demarcation, and that I attack the dogma of meaning again, even more
sharply, in section 9. It is therefore a sheer myth (though any number of refutations of
my theory have been based upon this myth) that I ever proposed falsifiability as a
criterion of meaning. Falsifiability separates two kinds of perfectly meaningful statements:
the falsifiable and the non-falsifiable. It draws a line inside meaningful language,
not around it."
I even bolded the most important part for you in case you're unable/unwilling to process/understand what he is saying.
So stop abusing falsificationism, stop distorting its utility and stop being so damn intellectually dishonest!
@Anon,
What is vacuous is your attempt to fabricate meaning out of reductionistic, incoherent materialist concepts that do no justice to the splendor of reality but are rather a finely chopped-up remnant of its fullness.
I think the real point of resistance is showing through here. You're aesthetically not all tingly about alternatives to your intuitions, ergo it's false. Somehow. Must be.
We're talking about a cat and in its place you offer in your definition a broken bone. You then proceed to commit ad hominem fallacies against an entire metaphysical paradigm in a pathetic attempt to propagate what has already been shown to you to be false.
Well, it's false on stipulation of the primacy of your own paradigm, perhaps, but that's just to beg the question. It's not been shown such, or even engaged (with a few noble exceptions noted!) except as an exercise in affirming one's consequent.
You're still in the middle of an infinite regress problem and everyone recognizes it except you. Obfuscating the issue with unnecessary, irrelevant verbiage, while simultaneously misunderstanding the other side, is either total ignorance or intellectual dishonesty!
Do you suppose the English language is devoid of meaning for its speakers? If not, how does this happen? How does meaning obtain without infinite regress??? It's just words offered as the components defining other words, right? Is it magic that allows it to avoid descent into infinite regress?
What we're telling you is that even the "babelized" claim to language itself would require determinate meaning in order to make sense. We are all aware of the contextual interpretations of language and we have all heard the usual relativist tosh. The fact is you either have relativism and infinite regress into absurdity (epistemological nihilism) or you recognize the necessity of The Absolute.
I read that from you as 'the necessity of [the aesthetic appeals I demand] of The Absolute'.
Look, meaning for humans (and derivatively, for machines modeled on the same architecture) is neither inert nor is it "determinate" in any final, absolute and perfectly unambiguous sense. Meaning is practically determinate, "close enough" to achieve agreement and effective communication between humans (and internal dialectics). There are many cases where language becomes ambiguous or confusing, because the determinacy, the precision of the usage, is not sufficient to effectively convey the intended concepts from sender to receiver. This isn't trivially dismissed as sender or receiver (or both) just being stupid, or careless; meaning is an exercise in varying levels of ambiguity. Anyone who's worked with either computer-language construction itself, or with the use of computer languages to implement natural language comprehension, understands this with stark clarity: good enough is good enough.
And for many intents and purposes, it is good enough. If it works, it works, as a communication process, and there's no need to postulate "The Absolute" when 'determinate-enough-for-effective-communication' provides all the explanatory capital we need in light of the evidence we have, neurologically, behaviorally and otherwise, no magic thinking needed!
Do you suppose old Jacques concluded that his beloved French could not bear human meaning after all? Should he have abandoned its use after coming to his conclusions? No, because it's an error to cast this in binary-thinking terms: meaning obtains in pragmatic, fuzzy, associative ways. It's not magical or metaphysically "absolute", but neither is it unable to carry and convey meaning. It's just a lot more messy and complicated and "naturally human" than traditional human conceits about their minds and their languages find aesthetically pleasing.
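To make the "good enough is good enough" point concrete, here's a toy sketch, purely illustrative: the feature bundles and the 0.5 threshold are invented for the example, not drawn from any real lexicon or model. Two uses of a word count as the "same" sense when their overlap clears a practical threshold, not when they match absolutely.

```python
# Toy illustration of "determinate enough" meaning: sameness of
# sense is a graded similarity score passing a pragmatic cutoff,
# not an all-or-nothing absolute identity.

def jaccard(a, b):
    """Crude overlap similarity between two feature sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical feature bundles for uses of the word "bank":
river_bank = {"ground", "water", "edge", "slope"}
money_bank = {"money", "building", "account", "vault"}
creek_bank = {"ground", "water", "edge", "mud"}

GOOD_ENOUGH = 0.5  # a practical threshold, not a metaphysical one

# Same-enough sense for effective communication:
assert jaccard(river_bank, creek_bank) >= GOOD_ENOUGH
# Clearly a different sense:
assert jaccard(river_bank, money_bank) < GOOD_ENOUGH
```

Nothing here requires the two "river" senses to be perfectly univocal; they merely overlap enough to do the communicative work.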
-TS
Touchstone = Mr Bombastic
lol now I remember this song
http://www.youtube.com/watch?v=Vcfu6Z3it_8
-----------------------------------------------
Damn I like this stupid ass song hahahahahah !!!
@Anon,
All you do is take the positivism mentality and simply replace 'verificationism' with 'falsificationism' and then proceed to assert in your usual bombastic, yet obscurantist tone that statements are not meaningful because they cannot be said to be either "true" or "false" (would like to point to the irony of putting the word truth in quotes, since by implication it undermines its very value).
Hey, pause the reflexive cut-and-paste diatribes for a second and read with some care. I never said, and do not believe, that statements cannot be meaningful without being cast as "true or false" propositions. That's preposterous.
What I have said is that as a matter of KNOWLEDGE about the extra-mental world, propositions that ARE cast as "true or false" statements about the world around us are NOT MEANINGFUL AS KNOWLEDGE ABOUT EXTRA-MENTAL REALITY if they do not carry semantics for "true" vs. "false". They are "meaningless as knowledge" if true cannot be distinguished from false. That's just an entailment of what we mean by "knowledge" - the 'truth' requirement of the epistemology.
All manner of other statements can be generated and used that are richly meaningful outside of that constraint; 'true or false' as a proposition about extra-mental reality is not applicable or the basis for meaning of the statement. If I say "all bachelors are unmarried men", that is a statement that is not falsifiable, not a true/false statement about the extra-mental world. It's a definition, an association, a declaration of meaning, and one that invokes subjects ("unmarried", "men") that do have real-world referents for those symbols.
But the statement itself is both meaningful and non-falsifiable. It is not, however, a candidate for the set of propositions (or models) that we would include as knowledge of the extra-mental world.
You hear criticism of statements that do obtain as propositions about our extra-mental reality ("Immaterial intellect exists and is crucial for human thinking.") and make a leap from that, as a criticism of putative claims to knowledge, to a general dismissal of meaning for all statements which are not even putative claims to knowledge. That's a mistake, and it neither reflects what I've said here nor what I believe.
-TS
Touchstone,
August 18, 2012 9:04 PM
August 18, 2012 9:35 PM etc.
My mistake; I should have said, why not quote from Feser's article and then contradict it with something relevant, with reasons in support?
For instance, you just go on a rant:
So, the triangle picture P1 *is* determinate, unambiguous content, as a percept.
Why? Because we make computers do it? Who the hell cares?
The important concept here is that nowhere is there any normative, "platonic" or archetypal "triangle" needed.
You mean there's no class written into the pattern recognition software that allows for this recognition to take place? Mon Dieu!
This is all irrelevant. Rank Sophist showed it. The point is that concepts in principle can't arise out of percepts because of their material conditions. Your recourse to "statistical fuzziness" will never get you to a determinate, unambiguous concept or universal that can be applied to a class of objects. It's mere equivocation on the term 'concept' to denote the "fuzzy" perception that allows both chimps and humans to recognize a green circle as associated with some instinct, or something.
@touchstone
You're aesthetically not all tingly about alternatives to your intuitions, ergo it's false
You couldn't be more wrong. I used to have the same intuitions as you do, but upon realizing how incoherent they were I decided to abandon them. I made reference to this in a previous post.
Well, it's false on stipulation of the primacy of your own paradigm, perhaps, but that's just to beg the question.
It has been shown to you in the past by myself and several other users as well as by Rank Sophist in this discussion. It’s also been shown in a plethora of books that unveil the false pretensions of materialism. I’m not going to go over all the literature with you right now. Even if I did, I doubt that you would be willing to listen.
How many individuals on this blog have told you time and time again about this? You don't listen, you don't understand the argument, and yet you persist in creating strawmen and providing irrelevant responses as a means to salvage your worldview.
Do you suppose the English language is devoid of meaning for its speakers? If not, how does this happen? How does meaning obtain without infinite regress??? It's just words offered as the components defining other words, right? Is it magic that allows it to avoid descent into infinite regress?
Essence.
The real question however is, how would meaning obtain through an infinite regress? An infinite regress of "meaning" (using quotes so as not to do the word meaning a disservice) is a contradictio in adjecto.
I read that from you as 'the necessity of [the aesthetic appeals I demand] of The Absolute'.
If you want to claim that logic is an aesthetic demand that’s fine with me. ;-)
Look, meaning for humans (and derivatively, for machines modeled on the same architecture) is neither inert nor is it "determinate" in any final, absolute and perfectly unambiguous sense.
Meaning is determinate. Errors in discovering said meaning, caused by humans, do not negate said fact.
Meaning is practically determinate, "close enough" to achieve agreement and effective communication between humans (and internal dialectics)… [shortened for space conservation]… good enough is good enough.
I'm starting to think that we're speaking a different language here (and I don't mean coming at it from two different metaphysical positions but specifically, English vs I-don't-know-what-language). I already told you that acknowledging diversity and cultural context as well as ambiguity between human beings does nothing to undermine what Feser, myself, anon, Rank and everyone else is saying. For the "babelizing" to make sense you need determination of meaning as a fundamental aspect of reality, teleology.
Your claim has now shifted to pragmatism and what "works"… Do I need to point out the obvious incoherence of assuming that something that "works" can exist in suspended animation? Or the consequent relativism that follows from it? Which takes us back to square one?
And for many intents and purposes, it is good enough. If it works, it works, as a communication process, and there's no need to postulate "The Absolute" when 'determinate-enough-for-effective-communication' provides all the explanatory capital we need in light of the evidence we have, neurologically, behaviorally and otherwise, no magic thinking needed!
More pragmatism. See above. Also, if my memory doesn't fail me, a few weeks ago Rank or another user took the time to unveil the problems that lie behind pragmatism in more detail than I have.
Do you suppose old Jacques concluded that his beloved French could not bear human meaning after all? Should he have abandoned its use after coming to his conclusions? No, because it's an error to cast this in binary-thinking terms: meaning obtains in pragmatic, fuzzy, associative ways.
An inherent problem with all post-modern deconstructionist efforts is that once they are done, they end up refuting themselves or falling back on pragmatism, as you have done once again here. Or, in the case of those of us who have heard enough such nonsense, we end up simply ignoring them. Your friend Derrida, by the way, has often been characterized by his contemporaries as an obfuscationist and a sophist because of the mess he created. I kind of like him in a way because he (maybe without realizing it) unveiled the follies of a materialistic worldview.
Meaning does not obtain in a materialistic, non-determinate, reductionistic way, because in such cases meaning does not even exist in the first place but is instead fabricated. It's an illusion. It's a phantasm that exists only in the mind of the epistemological nihilist.
For the last time, stop with the strawmen and appeals to the word “magic” every time you speak about anything opposing your view. People are already having a hard time taking you seriously and you’re making yourself sound like the anti-intellectualist “new atheist” types.
By the way Touchstone,
Look out for this book by Thomas Nagel coming out this fall:
http://www.amazon.com/Mind-and-Cosmos-ebook/dp/B008SQL6NS/ref=dp_kinw_strp_1
My guess is a war will break out once this hits the shelves. An atheist refuting materialistic darwinism?
WHAT?!?!
Putting all seriousness aside, it's definitely going to be a good /popcorn moment. ;-)
Wait .... Do we actually have people that think that Touchstone is a serious guy ???? or girl ???
ReplyDeleteI mean people, come on, Touch is here to cruse you people to high heavens; That model of his behavior/motivations seem to work rather nice according to the data so far!!!
*I think the spam filter ate my previous post.
@Touchstone
A new book is about to hit the shelves this fall and my guess is that it's going to create a war. It's called:
Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False
by Thomas Nagel (atheist)
Kind of telling how even a self-ascribed wishful atheist is now abandoning materialism, reductionism and whatever other absurdity that goes along with it, no?
I can't wait to read it and more importantly sit back and enjoy the show of the polemics that will commence.
/popcorn
@Josh
This is all irrelevant. Rank Sophist showed it. The point is that concepts in principle can't arise out of percepts because of their material conditions.
My objection is that no principle, for your "in principle" is provided. When we say "we cannot see stars shining beyond our event horizon, and this is impossible in principle", we can appeal to the physics principles which both obtain as effective in practice/observation, and which MATHEMATICALLY preclude perception of light from sources beyond a certain distance in space-time from us, due to the constraint of the speed of light.
That's a principled use of "in principle". But your use... I can't see any underlying principle. What is it? I understand the intuition and sense of incredulity, but please don't confuse that with a principle that you'd invoke as an epistemic constraint.
Can you articulate the principle your "in principle" refers to?
Many times, in discussion with creationists on abiogenesis, I hear that formation of replicating cells or organisms by natural processes is "impossible in principle". But when I ask for the principle, the response is the principle of incredulity, an argument from ignorance: "I just don't see any way that could happen". I don't doubt that's their earnest position and view of things, but that's a disingenuous use of "in principle" when that happens.
Your recourse to "statistical fuzziness" will never get you to a determinate, unambiguous concept or universal that can be applied to a class of objects. It's mere equivocation on the term 'concept' to denote the "fuzzy" perception that allows both chimps and humans to recognize a green circle as associated with some instinct, or something.
It's not an equivocation, it's the rejection of immaterialist conjectures as meaningful distinctions against associative patterns. I'm not trading between the two senses, unaware or without the required univocity. I'm saying the immaterialist definition of 'meaning' isn't itself meaningful as a matter of examination of how humans, animals, or machines-programmed-to-work-like-rudimentary-brains actually function. Again I understand the intuition -- "but, but, it's not the same, it just doesn't seem to be the same!" -- and share it even. But I can't locate the basis for that belief anywhere outside of our intuition, an intuition we know is given to just such conceits about human uniqueness and about the "immaterial intellect" it conjectures at work, in the absence of being able to feel or directly sense what is actually going on in a brain with no internal innervation (at least until science established a beachhead on the subject).
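For what it's worth, here is a minimal sketch of what recognition-as-association looks like with no archetypal "triangle" stored anywhere -- only labeled examples and a distance metric. It's a toy 1-nearest-neighbor classifier; the feature values (corner count, edge straightness) are invented for illustration, not taken from any real vision system.

```python
# Purely illustrative: "triangle" as a statistical association over
# noisy percepts. No platonic triangle is stored; classification is
# just proximity to previously labeled examples.

import math

# Hypothetical training percepts: (num_corners_detected, edge_straightness)
examples = [
    ((3, 0.9), "triangle"),
    ((3, 0.8), "triangle"),
    ((4, 0.9), "square"),
    ((4, 0.85), "square"),
    ((0, 0.1), "circle"),
    ((1, 0.2), "circle"),
]

def classify(percept):
    """Label a percept by its nearest stored example (1-NN)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(examples, key=lambda ex: dist(percept, ex[0]))[1]

# A noisy, imperfect triangle percept still lands on "triangle":
assert classify((3, 0.7)) == "triangle"
# An ambiguous percept still gets *some* answer -- fuzzily, not absolutely:
assert classify((2, 0.5)) in {"triangle", "square", "circle"}
```

The point of the sketch is only that robust recognition can be had from associations plus a similarity measure, with nothing normative anywhere in the loop.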
-TS
Mind reading ....
Oops, more mind reading ...
Touch should stop with the possible outcomes of the discussion in his head; it is starting to confirm the Childish Behavior Model.
@Anon
Meaning is determinate. Errors in discovering said meaning, caused by humans, do not negate said fact.
Ipse dixit. If that's how you roll, then:
Meaning is associative, relational, fuzzy. Metaphysical intuitions about Cosmic Absolute Meaning cannot negate this fact.
QED, huh?
It's not just my reverse hand-waving in response to your hand-waving. I can point you to the physical structure and the electro-chemical activity of humans engaging in "processing of meaning". Creating meaning, determining meaning, deploying meaning, associatively, relationally, and fuzzily. You can watch this activity happen on an fMRI.
For example, here's the abstract of an article from the 2002 Annual Review of Neuroscience by Susan Bookheimer, "Functional MRI of Language: New Approaches to Understanding the Cortical Organization of Semantic Processing":
Until recently, our understanding of how language is organized in the brain depended on analysis of behavioral deficits in patients with fortuitously placed lesions. The availability of functional magnetic resonance imaging (fMRI) for in vivo analysis of the normal brain has revolutionized the study of language. This review discusses three lines of fMRI research into how the semantic system is organized in the adult brain. These are (a) the role of the left inferior frontal lobe in semantic processing and dissociations from other frontal lobe language functions, (b) the organization of categories of objects and concepts in the temporal lobe, and (c) the role of the right hemisphere in comprehending contextual and figurative meaning. Together, these lines of research broaden our understanding of how the brain stores, retrieves, and makes sense of semantic information, and they challenge some commonly held notions of functional modularity in the language system.
The paper can be read online here:
http://www.cogsci.ucsd.edu/~sereno/170/readings/30-fMRILang.pdf
The point of providing the abstract is twofold: 1) to show that there is more behind these beliefs than dogma ("Meaning is determinate"!), and 2) to show that we have instrumentation that provides observation of these associations and connections *at work* -- in vivo.
What does "Meaning is determinate" stand on, beyond a dogmatic pronouncement of the claim? I'm distinguishing here between 'associative/relational/fuzzy' and 'determinate' as absolute/perfectly unambiguous. Associative meaning is sufficiently determinate for effective human communication, so it's 'determinate' as a practical matter, just not 'Cosmically Determinate' as you suppose.
-TS
Is anyone going to explain the difference between extrinsic and intrinsic meaning to TS? Anyone? He seems to need an EZ mode introduction to this topic.
We can see the object Brain acting as it perceives something.
ReplyDeleteMeaning is defined as a process in the brain
Meaning arises in the brain, because dur... I have just defined it as something that happens in the brain alone.
Therefore there is no need to add anything like forms or essences to our model, because meaning emerges in the brain. ( Although arguments of necessity don't work to infer the existence or non existence of something... )
------------------------------------------
It feels like you are playing around with words, and trying to define things the way that you need through the method that you like. I don't know touch, this doesn't seem to be any good as an argument. Are you interpreting in some different way here Mr Bombastic, or are we really just defining meaning any way we wish and calling it a day?
@Anon
I'm starting to think that we're talking a different language here (and I don't mean coming at it from two different metaphysical positions but specifically, English vs I-don't-know-what-language). I already told you that acknowledging diversity and cultural context as well as ambiguity between human beings does nothing to undermine what Feser, myself, anon, Rank and everyone else is saying. For the "babelizing" to make sense you need determination of meaning as a fundamental aspect of reality, teleology.
But that isn't the source of the core problem with indeterminacy per natural models of cognition. Dr. Feser says in his post at Biologos:
But doesn’t neuroscience show that there is a tight correlation between our thoughts and brain activity? It does indeed. So what? If you smudge the ink you’ve used to write out a sentence or muffle the sounds you make when you speak it, it may be difficult or impossible for the reader or listener to grasp its meaning. It does not follow that the meaning is reducible to the physical or chemical properties of the sentence. Similarly, the fact that brain damage will seriously impair a person’s capacity for thought does not entail that his thoughts are entirely explicable in terms of brain activity.
That's not what a natural model of cognition understands to be the fundamental challenge in determinacy; those are problems but superficial ones, logistical challenges compared to the problem that obtains in the architecture of the brain itself. As a hugely scaled mesh of associative neurons, the basic mechanism for determining meaning, or identifying activated associations is FUNDAMENTALLY FUZZY, as a matter of neurophysiology.
This has nothing to do with "smudging the ink" -- that kind of wave-off is all noise, either evasive or ignorant of this aspect of cognition. Meaning obtains, from all the evidence we can gather and analyze, as an associative mesh that *physically* does not admit of the kind of perfect unambiguity and precise univocity intuited by many.
So, complaining that you do indeed understand the hazards of 'cultural context' on this point just confirms you are not grasping the problem, the problem that obtains in the architecture of the brain itself, as effective in creating and deploying meaning via those associative networks -- good enough, excellent enough for effective communication -- but architecturally incompatible with your intuition about "The Absolute" as an aspect of meaning as used by humans.
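As a toy picture of what I mean by an associative mesh, here is a miniature spreading-activation sketch. The words and weights are invented for illustration only; the point is that what a cue "means" comes out as a graded pattern of activity over neighbors, not a single perfectly unambiguous entry.

```python
# Illustrative only: meaning-as-activation in a tiny association
# graph. A cue activates its associates in proportion to link
# strength, recursively, yielding a fuzzy, graded result.

associations = {  # hypothetical association strengths
    "bank": {"money": 0.6, "river": 0.4},
    "money": {"vault": 0.7, "account": 0.8},
    "river": {"water": 0.9, "mud": 0.3},
}

def activate(cue, depth=2, strength=1.0):
    """Spread activation outward from a cue word, fading with distance."""
    activity = {cue: strength}
    if depth > 0:
        for word, weight in associations.get(cue, {}).items():
            for k, v in activate(word, depth - 1, strength * weight).items():
                activity[k] = activity.get(k, 0.0) + v
    return activity

act = activate("bank")
# "money" ends up more active than "river", but both senses stay live:
assert act["money"] > act["river"] > 0
```

No single "determinate" sense is ever selected by the mechanism itself; disambiguation is just more context shifting the activation pattern.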
-TS
Touchstone, I don't think that part is referring to human cognition. Did Feser actually claim that neuroscience was talking about indeterminacy in that way?
I mean, it looks like he is arguing against ideas that involve "A correlates with B, so therefore B is made of A or is reducible to A". Or vice versa.
FUNDAMENTALLY FUZZY
Write that one up on the board, Mr. Hand.
I'll get to your objections in a bit, Touchstone. However, for now, I should point out that there was no "outside" for Derrida, because any perception (even a "percept") was reduced to "language" as soon as--even before, since language shapes the percept--it occurred. It's what happens in your system, too. Do you know what that means? It means that there is no such thing as truth, meaning or objectivity. Know what that means? It means that science is impossible. It means that there is no such thing as logic, no such thing as "falsification", no such thing as research--all of these things are destroyed. Not even percepts are safe, Touchstone, if they are reducible to lines of signs.
@touchstone
ReplyDeletethe basic mechanism for determining meaning, or identifying activated associations is FUNDAMENTALLY FUZZY, as a matter of neurophysiology.
This sentence obtains only if you assume reductionistic materialism. Chemicals in the brain (hence neurophysiology) do not determine meaning.
an associative mesh that *physically* does not admit of the kind of perfect unambiguity and precise univocity intuited by many
Again, without the assumption of materialistic reductionism this is simply vacuous.
the problem that obtains in the architecture of the brain itself, as effective in creating and deploying meaning via those associative networks -- good enough, excellent enough for effective communication -- but architecturally incompatible with your intuition about "The Absolute" as an aspect of meaning as used by humans.
It is you who doesn't understand what you are being told, and you are now purposefully ignoring what I said and simply responding in a circular manner without even addressing my devastating critique of your worldview.
The brain does not "create" meaning. Given materialism the best you can claim is that it fabricates illusions of meaning. Once again, meaning in your worldview is a phantasm. In fact, everything you said up to this point, given materialism, is mindless babble!
If the brain as part of a reductionistic-cum-materialistic worldview cannot discover meaning then that serves as a refutation of your worldview. The intellect is thus necessary for apprehension and use of meaning in reality. Your entire argument in fact, is a self-defeating argument against your own view.
It seems that all you did was explicate the reductio ad absurdum that lies behind your beliefs. I recognized it; now it's time for you to recognize it as well.
@Josh
Is that charming chap G.K. Chesterton on your avatar?
Is that charming chap G.K. Chesterton on your avatar?
Indeed!
Also, I should note that this same problem is why analytic philosophers laugh at Derrida. His system refutes the logic that he used to create his system--it's worthless. Therefore, the claim that mind-brain activity can be reduced to signs and associations is likewise self-refuting. You've wrecked the very enterprise that allowed you to reach that conclusion.
oh come on, self refutation is not something so bad .... after you ignore it.
Touchstone: It reads a lot like an Intelligent Design maneuver I see regularly -- since abiogenesis has no known natural recipe, therefore God.
Since that doesn't mention "intelligence" or "design", it wouldn't seem to qualify as an ID argument. Where are you getting that from?
I am pretty certain the ID proponents ... serious ones; have never made that argument ... in public XD at least.
But who knows, both sides are always at each other's throats and lie really hard about one another.
Amazing how we always go back to evolution ... or is it darwinism.. It is darwinism I think.
@Anon,
This sentence obtains only if you assume reductionistic materialism.
No, there's nothing in a naturalist model of meaning as an emergent property of the brain that requires reductionistic materialism, any more than gravity as a natural process can only obtain if we assume some form of philosophical materialism. "Naturalist meaning" does not produce a contradiction with supernature, or with the idea of a God, personal in nature or otherwise. It would negate "immaterial intellect" insofar as that was synonymous with the machinery for concept formation, meaning, abstraction and (meta-)representation, but it only obviates what it naturalizes, there. Everything else can be as supernaturalist and immaterialist as you like. A supernatural god may have designed the universe such that humans, or some kind of sentient creature, will evolve, developing natural faculties for recognition, comprehension, concept formation and semantics/meaning. That does not (cannot) require assuming materialism, and no logical contradictions obtain.
This recurring charge that on materialism, "meaning is meaningless" and language is somehow vacuous is just an exercise in equivocation, clinging to supernaturalist/dualist concepts of "meaning" and "understanding" and forcing them into a materialist model, where they are, indeed, meaningless, divide-by-zero operations. On materialism, the materialist concept of meaning would obtain, and neural associations in the brain are (or may be, depending on what the science shows) how meaning is reified in human, and other, minds.
It's silly to complain that "OMG, on reductionist materialism nothing means anything!". That's nothing more than denialism about what that materialism would entail, namely that meaning was a natural, physical phenomenon, and supernaturalist intuitions about meaning WERE NOT RELEVANT in that case. "My definition of 'meaning' and how it obtains must change on materialism" is NOT a case against the meaningfulness of semantic structures in a materialist paradigm. It's just a reflection of the inapplicability of the ones you are wedded to in your current paradigm.
This is very much like misunderstanding the concept of 'motion' from people who understand the colloquial and physics sense of the term, but are not aware of the potentiality->actuality sense deployed for the term in A-T. One cannot force alien concepts on a different paradigm, but must address the concepts in the frames in which they are constructed WITHIN that paradigm.
The charge of 'vacuousness if materialism' commits this error, and judges a completely different set of semantics and constructs for 'meaning' by its own parochial notions from a framework external to it.
Chemicals in the brain (hence neurophysiology) do not determine meaning.
Well, you have ipse dixit down, and in its reiterative form, too.
-TS
@Mr. Green
Since that doesn't mention "intelligence" or "design", it wouldn't seem to qualify as an ID argument. Where are you getting that from?
God as the intelligent designer, sorry, thought that would be quite obvious as the connection. Ask Dembski who he thinks designed biological life. Ask Behe. Ask Fuller. Ask Paul Nelson. Ask Philip Johnson.
And on an on.
1. We are ignorant of chemical pathways for abiogenesis on natural, impersonal processes.
2. Therefore, this is not possible in principle (why? because we can't think how it might happen!)
3. Therefore organic life must be the product of an Intelligent Designer.
4. Even if aliens seeded life here on earth, the design of THOSE aliens, if they are not supernatural creators, requires an Intelligent Designer, on grounds of (1,2) -- abiogenesis is not possible in principle.
5. This Intelligent Designer, capable of creating life where nature itself was incapable, is therefore a Supernatural Intelligent Designer. This we call "God".
-TS
You see man, the problem is that your concept of meaning isn't the one I like... I mean, wait... it's not the ONE I CAN APPLY! I will not show that to be true, because it's too damn hard; that is why I always dodge anyone who asks me to demonstrate something... always U_U!
Well, I guess we can summarize this whole thing as: "I would love to discuss this with you, as long as we start from the idea that I am right... always."
Need my pills ...
Although this Evo-Creation talk is really messy, what Touchstone said is half correct. Yeah, they all believe that G*d designed life; I think that is pretty clear.
The argument... well, I have never seen them make it, but I have seen the EVO side say that they make those arguments.
But the best way to prove it is just to go to the Discovery Institute site and get the quotes from there. I think that settles whether these people actually made the arguments as presented.
Although this Evo-Creation talk is really messy, what Touchstone said is half correct. Yeah, they all believe that G*d designed life; I think that is pretty clear.
No ID argument given by the main ID proponents concludes to God's existence. None.
At best, they infer intelligence based on demonstrable capabilities of known intelligent agents.
The argument may be flawed, but Touchstone misrepresents it badly.
My bet is that he is lying... judging by how the discussion is going, I wouldn't be surprised.
Oh, needless to say, I was also an asshole with half-assed ideas about the whole thing, so I speak from experience. Seriously, most of the people in these talks know nothing of the other side... and I was sort of like that... yeah, shame on me, u_u, I know.
@touchstone
model of meaning as an emergent property of the brain
Oh, so now you're appealing to irreducible emergence and trying to hide behind that notion? I already explained this to you in another discussion but as usual you're not listening. Emergence is just a side-ways manner of appealing to dispositions (teleology) and latent aspects of reality (forms), which are actualized. Furthermore, they represent a discontinuity in metrology since they are irreducible. Well, well... Doesn't that sound quite like what we've been telling you? Of course it does. What you fail to understand is that reductionism is your only real option here. Appealing to emergence is trying to play the game by our rules. Unfortunately you just lost (again).
The fact that you think that intellect can be replaced with machinery is precisely the core of your confusion and problem. We're wasting time trying to explain this to you, it seems.
When I referenced contradiction I spoke of contradictio in adjecto... A contradiction in itself... As in materialistic meaning is a contradictio in adjecto.
A supernatural God may have designed the universe such that humans, or some kind of sentient creature will evolve, developing natural faculties for recognition, comprehension, concept formation and semantics/meaning.
This is not about God, but about coherence vs absurdity. I am well aware of physicalist Theists (e.g. Baker) as well as mereological Theists (Van Inwagen). You're being irrelevant again.
On materialism, the materialist concept of meaning would obtain, and neural associations in the brain are (or may be, depending on what the science shows) how meaning is reified in human, and other, minds.
The materialistic concept of "meaning" is meaningless. Contradictio in adjecto. An illusion. What you're saying here is mere empty verbiage appealing to "science" (emphasis on the quotes).
I will explain it to you one last time despite your constant and dishonest attempt to ignore what we've been telling you. If materialism holds, then there is no meaning in the world, period. That is true by definition. So whatever construct you'll create as your paradigm, using whatever brain process, whatever science (or "science"), whatever bombastic super-duper nerd talk you conjure will never be able to obtain meaning because reality in its totality is devoid of it!
You can believe that you have found meaning in a materialistic world, but it would be no different than believing in Santa Claus. Neither one exists, and believing in either is a delusion. I simplified it as much as I could for you. Please try to understand.
meaning was a natural phenomenon
Here you are either admitting that nature is ridden with teleology or again committing contradictio in adjecto. If the former, welcome to Aristotelianism; if the latter, it's as if you said nothing at all (again).
the meaningfulness of semantic structures in a materialist paradigm
If those structures are meant to describe how reality is then your materialism is refuted. If those are mere constructs of the materialist's imagination then you're deluding yourself again.
different set of semantics
If we all operate on different semantics, then I suppose that we are all enslaved in our little minds, incapable of communicating at all. While this might somewhat describe what is going on when we try to explain things to you, the rest of us don't believe that each one of us operates under different semantics, let alone ones (as per materialism) that are mere illusions that have no relation to the reality which we inhabit.
Well, you have ipse dixit down
Tu quoque. ;-)
*metrology = mereology
ReplyDelete(damn auto-correct)
I said the image is "determinate, unambiguous content, as a percept", and the point of that contradiction of Feser (and perhaps you here) is that we don't have information just at the post-interpretive layer, but "raw input" at the pre-interpretive layer.
Unfortunately, this move does not work. Percepts are representations of something else, which means that they are reduced from that "something else" to the language of the percept. The "something else"--the exterior material processed by the percept--necessarily cannot exist unreduced in the brain. (If it did, then the brain would become it.) It has to be transformed into code first. However, this means that the code ("language") pre-exists the percept, and so determines it. As a result, there can be no such thing as "determinate content" even on the level of percept, because even percepts are interpretations.
In simple terms: if our brains work like code, then the code pre-exists pre-conscious "raw material". If this is the case, then all "raw material" is reduced to the language of the pre-existent code, which means that even "percepts" are representations of something else. All representation is interpretation. Therefore, all percepts are interpretations. If we then deny the existence of intentionality and immaterial intellect, we are left with a system that is manifestly self-refuting, just like Derrida's.
This is just to say that there are objective features of these images that have statistical feature affinities with each other, but which are not (yet) attached to any linguistic concepts.
Percepts would have to occur through a pre-linguistic language, as I said above. If we hold that minds are totally material, we are left with the ridiculous position that there are no such things as objective features, even on the level of percept.
When my computer program, which interprets visual input for the purposes of identifying English letters and numbers, gets a new item to process, it's a "brute image" -- it's just a grid of pixels (quantized data, so that the computer can address it for interpretation).
But the "brute image" is processed as code, and the code itself must have either determinate or indeterminate content. It's obviously the case that the code is determinate, because we gave the symbols determinate content. If the content was indeterminate, then we'd be left with Derrida's paradox, and the machine would be incapable of taking in "objective" percepts in the first place.
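(As an aside, the "grid of pixels" picture both sides are arguing over can be shown in a few lines of Python; the bitmap and the feature below are invented for illustration. The data is determinate as data; calling it a "T" is a separate, interpretive step -- which is precisely the point in dispute.)

```python
# A hypothetical 5x5 binary "brute image" -- nothing but quantized data.
# Whether it depicts the letter "T" is a further, interpretive question.
image = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# A purely mechanical, pre-interpretive feature: how many pixels are "on".
on_pixels = sum(sum(row) for row in image)
print(on_pixels)  # 9
```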
In a human, or a chimp, the optic nerves terminate in the brain (at the LGNs in the thalamus) and provide raw visual stimuli to the neural net, whereupon all sorts of integrative and associative interpreting begins across the neural mesh.
If computer code is determinate because we gave it determinate content, then the code of our minds would have to be determinate as well. Otherwise, even our percepts would be completely non-objective and indeterminate, since that would be the nature of the "code" in which they are processed. In other words, our "brain code" would have to contain intentionality, which is exactly what your computationalism is trying to explain away.
That is nothing more than to note that what we call "triangle-ish" images and "square-ish" images are not so called by caprice; the images have, prior to any interpretation, or labeling, physical features that distinguish them, and distinguish them as distinct groups.
Every code reduces perception (or "percept") to representation. Therefore, the code itself must contain "interpretation" and "labeling"--otherwise, the representation is indeterminate as well.
Which is just to say that Wittgenstein, bless his heart, is talking out his behind here, from a position of thorough ignorance of what is going on in his own brain, physically. He can take some comfort in the fact that he was hardly more equipped by science to speak on the matter than was Aquinas, but when we read something like that NOW, it's just obsolete as a context for thinking about this subject. The brain's "pictures" DO have motion cues that come with them, prior to any interpretation, for direction and velocity. This is how sight works, before the visual cortex even gets hold of it. The "sliding down" vs "walking up" interpretations are NOT on an even footing, and BEGIN that way for the brain, as our visual sense machinery is constantly streaming in motion cues (and other cues) along with "image" data.
Versions of imagism based on moving images have been refuted by Wittgenstein's followers. They suffer from the same innate problems.
Not to mention that, again, either the "code" (the format for pre-conscious representation) is determinate or it is not determinate. If it is determinate, then it has irreducible intentionality. If it is indeterminate, then the percept is not determinate either.
Rather, the system is programmed to "recognize", or more precisely, to build associations and to maintain and refine them through continuous feedback. So that means it can and will group "triangle-ish" images and "square-ish" images (if its mode of recognition is visual/image-based) without it being told to look for 'triangle' or told what a 'triangle' is.
I know. I'm not an expert in programming, but I know how computers operate. Again, the problem arises as soon as you introduce the term "programmed". Programming is code, and the code must have determinate content. In the case of computers, this is obviously the case: we put it there. In the case of the human brain, there is nothing to give the code itself determinate content.
But hold on, you say -- it only does that because we programmed it to recognize and categorize generally. Yes, of course, but so what? We are programmed by our environment to recognize and categorize.
This, of course, does not work. Feser has attacked similar bizarro reasoning in the past. You're merely engaging in the homunculus fallacy. Here's Feser's post against Coyne's similar inanities: http://edwardfeser.blogspot.com/2011/05/coyne-on-intentionality.html
If we're going to interpret the environment, then both our "code" and the environment itself must have irreducible intentionality. If neither has intentionality--and thereby determinate content--, then all content is indeterminate and there are no such things as "objectivity", "accuracy", "science" and the like.
No, because if you are coding for similarity as the basis for grouping, "lower-level coding" won't help group. Grouping is a function of a similarity test.
The similarity test is a function of the determinate content of the code itself.
Humans are programmed by the environment to do similarity testing
ReplyDeleteThen the environment must have determinate content, and therefore intentionality, or it could not give it to humans. In turn, humans could not give it to machines.
For he is saying that the local process of interpreting that triangle image takes immaterial intervention directly.
It does. Intentionality is beyond the material--as you well know--and no image of a triangle has determinate content from its material components. This applies even if you deconstruct the triangle into a series of code-associations: the code-associations themselves would need to have intentionality.
Humans are good pattern recognizers, and I see this pattern a lot:
A: We need an immaterial intellect for conceptualization and understanding.
B: No we don't. Look at this program...
A: Well, that just proves something is needed to program us or the machine for conceptualization or understanding.
B: But the question was about the need for immaterial intellect to *perform* the task of conceptualization and understanding!
A: You still need a Cosmic Designer to have that mechanism come to be.
Actually, this would be more accurate.
A: For there to be determinate content, irreducible intentionality must be posited.
B: No it doesn't. Look at this program...
A: Well, that just proves that the code was given determinate content by us.
B: Uh... uhhh... homunculi!
"B" is basically a representation of Dennett.
Dr. Feser is saying an immaterial intellect is needed to perform the local act of interpretive meaning.
Dr. Feser is saying that, without forms (universals), there could not be determinate content. Nothing about the physical make-up of an image gives it determinate content. All computationalist attacks against this idea--"percepts" and whatnot--invariably presuppose intentionality, because the "code" must itself have determinate content. The only thing that can abstract this determinate content in its non-visual, non-representational essence is an immaterial mind. Anything less is merely a visual impression, reduced to code.
So it's not regressive, and doesn't fall into a vicious cycle demanding ever more "fundamental" bases for meaning. It's a peer graph, and a huge one, too, in the case of English.
I know it isn't a vicious regress. Derrida wasn't that terrible of a philosopher. It's still a self-refuting position, though.
Without the outside, there are no referents for any symbols we might establish.
Exactly, Touchstone. That's what Derrida tells us. Because your system involves the reduction of the "outside" to code form, you're stuck in Derrida's very same self-refuting system.
Yes, but "appealing to another" decreases the ambiguity, and establishes meaning!
No, it doesn't.
It doesn't make language empty of meaning, for Derrida or anyone else.
Yes, it does.
Unfortunately, this move does not work. Percepts are representations of something else, which means that they are reduced from that "something else" to the language of the percept. The "something else"--the exterior material processed by the percept--necessarily cannot exist unreduced in the brain. (If it did, then the brain would become it.) It has to be transformed into code first. However, this means that the code ("language") pre-exists the percept, and so determines it. As a result, there can be no such thing as "determinate content" even on the level of percept, because even percepts are interpretations.
This equivocates on "language". If you suppose that signal encoding is language, in an unequivocal way, then any sighted organism is a linguistic being. For the brain (or the computer program processing visual input from a camera of some type), the external light patterns are encoded, but this is not semantic language, to use a term that should help avoid equivocation. For the brain (or the computer program) this is as "raw" as "raw" gets. This is the starting point of the chain, the encoding of photon patterns to electrical signals that precede interpretation against concepts and symbols.
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine -- that in no way precludes it from being determinate content. It is what it is. For example, if you were to look at a pixellated input image that might be processed by an OCR program, those pixels are content -- they are information-bearing configurations of matter (it takes some Kolmogorov complexity to describe the signal in a lossless way, for example). They just are not abstractions at the level of linguistic symbols or semantically rich concepts. The information is what it is as content, prior to any interpretation by the brain (program).
We could, alternatively, just shift the input frame back and refer to the photon inputs for our eyes as the "raw input". That sets aside your concerns about the "interpreted-ness" of any encoding process by eyes or machine-with-camera of those photon actions to electronic signals. The same point obtains -- the input is content, determinate as is-what-it-is. What a human or program may do with it can go down many paths, depending on the processing features in place. But at the head of the chain, we begin with 'brute content' that is unprocessed conceptually or linguistically.
-TS
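(Touchstone's aside about Kolmogorov complexity can be loosely illustrated with compressed size as a computable stand-in: a regular signal takes far fewer bits to describe losslessly than noise of the same length. zlib output length is only an upper-bound proxy for Kolmogorov complexity, not the quantity itself.)

```python
import os
import zlib

structured = b"AB" * 5000        # a highly regular "signal", 10,000 bytes
noise = os.urandom(10000)        # incompressible bytes, same length

c_structured = len(zlib.compress(structured))
c_noise = len(zlib.compress(noise))

# The regular signal admits a much shorter lossless description.
print(c_structured < c_noise)  # True
```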
@rank sophist,
In simple terms: if our brains work like code, then the code pre-exists pre-conscious "raw material".
It may be the case, but I see no reason to think this *must* be the case. Code -- neural network connections that map and remap adaptively based on feedback loops -- is thought to be an adaptation of evolution, an emergent feature of animal cognition. Do you have a means of demonstrating that this can't be the case?
If this is the case, then all "raw material" is reduced to the language of the pre-existent code, which means that even "percepts" are representations of something else. All representation is interpretation. Therefore, all percepts are interpretations. If we then deny the existence of intentionality and immaterial intellect, we are left with a system that is manifestly self-refuting, just like Derrida's.
Well, back to the photons bouncing around and coming into our eyes (or the camera attached to our processing program): for a given photon P inbound to your eye, what is it a representation of, in your view?
There's nothing self-refuting about properties that emerge from certain configurations of matter and particular interactions, any more than we suppose that the "wetness" of water is self-refuting because neither of the elements that make up water (2 H + 1 O) is "wet" like water is. Where did the "wetness of water" come from??? It's a product of the combination of those elements, a feature synthesized from them.
On your view, water cannot be wet, because such synthesis cannot obtain -- hydrogen and oxygen aren't wet on their own. As for the semantic capabilities of brains: this is a feature synthesized from the configuration and interactions of the brain's constituent parts, "wetness" as 'meaning-processing', a faculty that is supervenient on brains.
-TS
Those emergent characteristics you just spoke of are also known as FORM to your... adversaries.
This equivocates on "language". If you suppose that signal encoding is language, in an unequivocal way, then any sighted organism is a linguistic being.
ReplyDeleteIt doesn't equivocate. Any series of signs is a language of sorts--a semiotic structure. If the brain works by signs and associations, then it works through a kind of pre-language.
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine -- that in no way precludes it from being determinate content.
Thank you for admitting it. This means that your system is self-refuting.
We could, alternatively, just shift the input frame back and refer to the photon inputs for our eyes as the "raw input".
There are no such things as photon inputs under your system, Touchstone. There are merely things that we refer to as "photon inputs" after they have been reduced to our indeterminate pre-linguistic brain code. We can never know them in themselves. And, because our brain code only obtains "meaning" by association with other symbols--and never by anything "beyond the text"--, we're stuck with the destruction of all science, knowledge, truth, objectivity and so on.
The same point obtains -- the input is content, determinate as is-what-it-is.
Aside from the fact that this move is incoherent within your system, you have merely committed the homunculus fallacy: you relocated determinate content to nature, which means that you've relocated intentionality to nature. Again, though, your system is already in ruins.
@rank sophist,
Percepts would have to occur through a pre-linguistic language, as I said above. If we hold that minds are totally material, we are left with the ridiculous position that there are no such things as objective features, even on the level of percept.
That doesn't follow. How do you suppose that a material mind entails the absence or impossibility of objective features?
Think about a program that categorizes shapes based on the input from a camera. On an adaptive neural network architecture, with back propagation, in unsupervised mode, it will begin to "learn" the features of the images it sees. It will find statistical affinities between the pixel configurations of the inputs it processes and develop clusters of associations, associations that will activate, stronger or weaker, based on the processing of the next inbound image to be processed. This is a material system that learns features, and can distinguish them. It is also contingent on the objective features of the input it processes. There is no caprice or will or emotion or subjective bias to interfere. It's just code cycling mechanically in the machine, deterministically.
On that description, do you reject that the system learns the features of the images presented (that is, it can distinguish them predictably, and more precisely and accurately as the model processes more and more input)? What part of those features do you reject as non-objective, and what would the "objective features" be in that case, if any obtain under some other view?
-TS
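(A bare-bones sketch of the unsupervised grouping Touchstone describes -- here plain k-means over invented 2-D "feature vectors" standing in for pixel statistics, not a real neural network. No labels are ever supplied; the grouping falls out of the statistical affinities alone.)

```python
# Toy feature vectors (e.g. crude pixel statistics for "triangle-ish"
# and "square-ish" inputs). No labels are ever given to the system.
points = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9),   # one natural cluster
          (5.0, 5.2), (5.1, 4.9), (4.9, 5.0)]   # another

def dist2(a, b):
    # Squared Euclidean distance between two 2-D points.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, centers, steps=10):
    # Alternately assign points to the nearest center, then move each
    # center to the mean of its assigned points.
    for _ in range(steps):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
            groups[i].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c
            for g, c in zip(groups, centers)
        ]
    return groups

groups = kmeans(points, centers=[(0.0, 0.0), (6.0, 6.0)])
# The system "discovers" the two statistical families on its own.
print([len(g) for g in groups])  # [3, 3]
```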
@rank sophist
But the "brute image" is processed as code, and the code itself must have either determinate or indeterminate content.
No, that's not true, any more than every question is a hard "yes" or "no". As the code processes images on input, it will fire on different associations to different degrees. Some association humans would call "triangle-ish" may fire to 40% of its potential, which, translating for human thinking about it, would mean "this looks sorta somewhat triangle-ish". That *same* input image may fire on associations humans would call "square-ish" to 60% of its potential, meaning (put in human-friendly terms) "this looks pretty much like a square".
The code is deterministIC, but the determinatION of the abstract content of a given image is fuzzy, an array of mixed signals (and in practice, it's not just two perceptrons that activate, but can be very many). It's "60% square", "40% triangle" -- a conflicted answer, but with a small bias toward 'square' (er, what humans would call a square -- the program isn't labeling these like humans do). But the results could be "30% Square", "30% Triangle", "30% Star". So what's the verdict, then? Without further feedback into the system that favors one of these (or some other association), there is no clear determination.
It's crucial to understand, then, that {determinate|indeterminate} are not the available options for this network. There is always some ambiguity in the system, but even so, there is not total ambiguity and parity between all associations. To think of it in hard {determinate|indeterminate} terms is to misunderstand how associations work in the brain (or our program). This should be no more difficult than understanding the applications of fuzzy logic systems in the real world. Fuzzy logic replaces a {True|False} pair with a potential ranging from 0 to 1.0 (for example) on propositions. So a "0.6" is more "true" than "false", but less true than "0.9". If you ask if proposition X is "true" in a binary sense, all you can do is deprecate your available information into a "rounding up" or "rounding down" result for 0 or 1, false or true. But this loses information available in the system, information that can and does provide the basis for better performance in the system, because it can represent more accurately the partial indeterminacy in the system.
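(Touchstone's fuzzy-logic point can be made concrete with a few invented numbers -- the 60%/40% activations below are illustrative, not output from any real network.)

```python
def fuzzy_and(a, b):
    # Standard fuzzy conjunction: the minimum of the two truth degrees.
    return min(a, b)

def fuzzy_or(a, b):
    # Standard fuzzy disjunction: the maximum of the two truth degrees.
    return max(a, b)

# Invented activation levels for one ambiguous input image.
squareish, triangleish = 0.6, 0.4

# Rounding to a binary verdict throws away the 40% "triangle" signal...
binary_verdict = squareish >= 0.5
print(binary_verdict)  # True

# ...while the graded values preserve the partial indeterminacy.
print(fuzzy_or(squareish, triangleish))   # 0.6
print(fuzzy_and(squareish, triangleish))  # 0.4
```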
It's obviously the case that the code is determinate, because we gave the symbols determinate content.
That's a mistake about computing. The code can be fully deterministic in terms of what opcodes and instructions it executes (and for nearly all programs, self-mutating code aside, this is the case), but the execution of the instruction itself can lead to indeterminate results or states of the system. Calling a library routine that returns (pseudo) random values, for example, can put the program in an indeterminate state (and input from a camera can do the same thing). And more broadly, see the famous Halting Problem in computing.
If what you say is true, the Halting Problem is no longer a problem, and congratulations, you are now world famous!
-TS
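(Touchstone's distinction between deterministic instructions and indeterminate program state, in toy form: the function below is fixed code, yet its final state depends on runtime input the code does not itself determine. The function and seeds are invented for illustration.)

```python
import random

def fixed_program(rng):
    # Every instruction here is fixed and fully deterministic...
    state = 0
    for _ in range(10):
        # ...but the value drawn at runtime is supplied from outside
        # the code itself, like input from a camera.
        state += rng.randint(0, 1)
    return state

# Identical runtime input reproduces the state; the code alone does
# not fix what that state will be.
a = fixed_program(random.Random(1))
b = fixed_program(random.Random(1))
print(a == b)  # True
```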
@rank sophist
If the content was indeterminate, then we'd be left with Derrida's paradox, and the machine would be incapable of taking in "objective" percepts in the first place.
No, that is only a problem if you assume everything must be 100% determined, or 100% ambiguous. This is a classic example of the hazards of binary thinking. The content is "indeterminate-in-the-sense-of-certainty", but it is determinate to some degree of potential. Importantly, as more and more input is processed, the objective features of what has been seen and processed already can be brought to bear on new input, and the determinacy of both -- the repository of stored associations and connections, and the associations and connections assigned to the new input image -- can be improved (or degraded, as it happens). The determinacy and specificity of the associations and connections of the overall system is fluid, and reacts with continuing feedback. This can be seen when a neural net app "discovers" which features provide performative disposition (generates positive feedback), and with just a small amount of new input, its network connections reach a "tipping point" where its discriminating abilities spike sharply.
All of this operates OUTSIDE of the binary notion of {determinate|indeterminate}.
If computer code is determinate because we gave it determinate content, then the code of our minds would have to be determinate as well.
No. Go read up a bit on the Halting Problem and see what you make of that. Is that a problem, in your view? Why can you not, given all the time and energy you could want, determine if a given (deterministic) non-trivial program terminates or runs forever?
Interestingly, and for related reasons you have not grasped here, Gregory Chaitin can calculate a "halting probability", an estimate of the Halting indeterminacy. It can't be completely computed, so it itself is only partially determinable. Zing. {indeterminate|determinate} is just a massive category error on these issues.
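(The diagonal argument behind the Halting Problem that Touchstone invokes, in sketch form: any candidate halting oracle -- here a deliberately naive stub -- can be defeated by a program constructed to contradict the oracle's own verdict about it.)

```python
def make_contrarian(halts):
    """Build a program that does the opposite of whatever halts() predicts."""
    def contrarian():
        if halts(contrarian):
            while True:   # oracle said "halts", so loop forever
                pass
        return            # oracle said "loops", so halt immediately
    return contrarian

# A (necessarily wrong) candidate oracle claiming every program halts.
def naive_halts(program):
    return True

c = make_contrarian(naive_halts)

# The oracle predicts c halts -- but by construction c would then loop
# forever, so the prediction is wrong. Every proposed oracle fails the
# same way against its own contrarian.
print(naive_halts(c))  # True
```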
Otherwise, even our percepts would be completely non-objective and indeterminate, since that would be the nature of the "code" in which they are processed. In other words, our "brain code" would have to contain intentionality, which is exactly what your computationalism is trying to explain away.
No, strong AI doesn't explain away intentionality, it just obviates any need for an immaterial homunculus, a ghost in the machine that is "really doing the thinking" apart from physical matter and energy. Intentionality obtains -- humans are beings with a stance of intentionality. This is a feature of our evolved physiology. Computationalism just aims to show that intentionality, along with meta-representational thinking (among other things) does not require and cannot use notions of an 'immaterial intellect'. The reaction seems to be, here, that on such a model, 'meaning' is meaningless. But that's only true with a definition derived from a supernaturalist model for 'meaning'. Meaning just obtains naturally on materialism.
-TS
Touchstone: Ask Dembski who he thinks designed biological life. Ask Behe. Ask Fuller. Ask Paul Nelson. Ask Philip Johnson.
Well, who they think designed life is irrelevant, unless that's explicitly part of the arguments they offer as being ID.
1. We are ignorant of chemical pathways for abiogenesis on natural, impersonal processes.
2. Therefore, this is not possible in principle (why? because we can't think how it might happen!)
etc.
That is indeed a bad argument, but what I meant was, can you cite where Behe or Fuller or Nelson or Johnson actually say that?
Touchstone,
If you can't overcome my objection to your first two responses, then the rest of your posts collapse. Rather than draw out this argument to obscene lengths, I'd prefer to keep it short and snappy. If you can wiggle out of my objection--I don't see how--then I'll take on your more recent posts. Otherwise, I'm not sure it will be necessary.
@Eduardo
Those emergent characteristics you just spoke of are also known as FORM to your... adversaries.
No, but that's a good point to make for clarity: people see the word "emergent" and think "emergentism", as in the belief in features of a composite that are fundamentally irreducible to its parts. By "emergence", and properties that supervene on such a composite, I mean that the phenomenon is obscured (to us) by the complexity of the interactions of the components. That is, the "saltiness" of NaCl is deducible from knowledge of the chemical/physical properties of sodium and chlorine, even though neither of these two components of salt is itself "salty".
Emergentists, as non-reductionists, may hold that an emergent feature of consciousness or intellect is *fundamentally* irreducible. That's not what I subscribe to.
-TS
Emergent to me is... A + B = C.
Nature C was not there, and it is here now. That is emergence to me. Now, your emergent idea and the idea of form both work just fine for comprehending how salt becomes what it is from other stuff. Now, emergence, as far as I can understand it, can be shown to be correct, or at least shown to be the best option.
Now, you might believe that A gets close to B and becomes C... but that is your opinion. I want to know whether emergence has limits, or rules, or something like that. What is the group of definitions that describes emergence completely? How should I use it? What could show it to be wrong? You know, that very lengthy discussion of an idea... anyway, why do I bother telling you...
Asserting that saltiness emerges from other stuff is just an assertion. I can see that it could, but that doesn't warrant it over anything else, and it doesn't show anything wrong with forms or essences or any of the ideas you cri... you rip at ALL the time.
You are hardly the first one to defend these ideas, and your ideas have been criticized. It would be nice to see you argue that those critiques are irrelevant, or that they fail, or something like that... but you never do that, do you.
Anyway, it's not your problem. I should stop being lazy and think from your position and from the Thomist position; I hardly need you, even though I had hopes... badly placed, no doubt.
Actually, wait... I just read a post that was a really good post, at 7:34 PM.
t_t I am moved .... okay well I am just normal; took my pills.
okay well I am just normal; took my pills.
Hey! Touchstone also takes pills ('high tech' medication anyway), and they don't make him normal. What's your secret?
Might also call on Glenn for a re-link to that paper in the other thread regarding computer programming and its relation to Aristotelian logic. If nothing else, it will give Touchstone something else to chuckle at while writing obfuscatory sentences.
Better late than never (yes, Touchstone, this does mean there is still hope for you yet):
Aristotle and Object-Oriented Programming: Why Modern Students Need Traditional Logic
Yes, but "appealing to another" decreases the ambiguity, and establishes meaning!
No, it doesn't.
It doesn't make language empty of meaning, for Derrida or anyone else.
Yes, it does.
Hm, the above reminds me of something...
To Touchstone: I've been over some of this same ground with Mr. Rank Sophist (see some of these posts where I try to illustrate how intentionality can arise in a mechanical system using the ribosome as an example) and I can pretty much promise you that you are wasting your time.
I am tempted to say that he's too stupid to understand, but I don't think that's accurate, it's more like he's actively working to not understand. Which is fine if you are doing religion, not so good if you claim to be doing philosophy.
At the top of Touchstone's list of books he recommends for reading is Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter. In the preface to the 20th Anniversary Edition of GEB, Hofstadter answers a question that had long been on people's minds: what is the book about? He succinctly states his purpose for having written the book thus: "In a word, GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter."
Though I don't know if Touchstone expressly believes that animate beings can come out of inanimate matter, I do think it would be surprising if he disagreed with the notion that such can happen. At any rate, since Touchstone seems to have some respect for Hofstadter, I thought I'd provide some quotations from Hofstadter's work. There are only two quotations, but they are rather lengthy, and it will take a few comments to post them. Though lengthy, they are easy to read, and just as easy to understand.
My reason for posting these quotations is two-fold: a) by his criticism of Marie George, Touchstone seems to think that 'monkey see monkey do' qualifies as the kind of thinking George was writing about or at least indicating (in the paper linked to by Josh), and I think what comes from Hofstadter's use of his cognitive apparatus provides a better idea/example (than does Touchstone's hallowed 'monkey see monkey do') of what George might have had in mind when she wrote about thinking; and, b) I think Hofstadter shows--without breaking a sweat--the kind of honesty and integrity of thought which rational people can appreciate.
(cont)
Quotation I of II
...an article by Mitchell Waldrop in the prestigious journal Science (Waldrop, 1987)... described in flattering terms the analogy-making achievements of SME, the Structure Mapping Engine (Falkenhainer, Forbus & Gentner, 1990), a computer program whose theoretical basis is the "structure-mapping theory" of psychologist Dedre Gentner (Gentner, 1983). After a brief presentation of that theory, Waldrop's article went through an example, showing how SME makes an analogy between heat flow through a metal bar and water flow through a pipe, inferring on its own that heat flow is caused by a temperature differential, much as water flow comes about as a result of a pressure differential. Having gone through this example, Waldrop then wrote:
To date, the Structure Mapping Engine has successfully been applied to more than 40 different examples; these range from an analogy between the solar system and the Rutherford model of the atom to analogies between fables that feature different characters in similar situations. It is also serving as one module in....a model of scientific discovery.
There is an insidious problem in writing about such a computer achievement, however. When someone writes or reads "the program makes an analogy between heat flow through a metal bar and water flow through a pipe", there is a tacit acceptance that the computer is really dealing with the idea of heat flow, the idea of water flow, the concepts of heat, water, metal bar, pipe, and so on. Otherwise, what would it mean to say that it "made an analogy"? Surely, the minimal prerequisite for us to feel comfortable in asserting that a computer made an analogy involving, say, water flow, is that the computer must know what water is--that it is a liquid, that it is wet and colorless, that it is affected by gravity, that when it flows from one place to another it is no longer in the first place, that it sometimes breaks up into little drops, that it assumes the shape of the container it is in, that it is not animate, that objects can be placed in it, that wood floats on it, that it can hold heat, lose heat, gain heat, and so on ad infinitum. If the program does not know things like this, then on what basis is it valid to say "the program made an analogy between water flow and such-and-so" (whatever it might be)?
Needless to say, it turns out that the program in question knows none of these facts. Indeed, it has no concepts, no permanent knowledge about anything at all. For each separate analogy it makes (it is hard to avoid that phrase, even though it is too charitable), it is simply handed a short list of "assertions" such as "Liquid(water)", "Greater(Pressure(beaker), Pressure(vial))", and so on. But behind these assertions lies nothing else. There is no representation anywhere of what it means to be a liquid, or of what "greater than" means, or of what beakers and vials are, etc. In fact, the words in the assertions could all be shuffled in any random order, as long as the permutation kept identical words in corresponding places. Thus, it would make no difference to the program if, instead of being told "Greater(Pressure(beaker), Pressure(vial))", it were told "Beaker(Greater(pressure), Greater(vial))", or any number of other scramblings. Decoding such a jumble into English yields utter nonsense. One would get something like this: "The greater of pressure is beaker than the greater of vial." But the computer doesn't care at all that this makes no sense, because it is not reaching back into a storehouse of knowledge to relate the words in these assertions to anything else. The terms are just empty tokens that have the form of English words.
(cont)
Despite the image suggested by the words, the computer is not in any sense dealing with the idea of water or water flow or heat or heat flow, or any of the ideas mentioned in the discussion. As a consequence of this lack of conceptual background, the computer is not really making an analogy. At best, it is constructing a correspondence between two sparse and meaningless data structures. Calling this "making an analogy between heat flow and water flow" simply because some of the alphanumeric strings inside those data structures have the same spelling as the English words "heat", "water", and so on is an extremely loose and overly charitable way of characterizing what has happened.
Nonetheless, it is incredibly easy to slide into using this type of characterization, especially when a nicely drawn picture of both physical situations is provided for human consumption by the program's creators (see Figure VI-I, page 276), showing a glass beaker and a glass vial filled with water and connected by a little curved pipe, as well as a coffee cup filled with dark steaming coffee into which is plunged a metal rod on the far end of which is perched a dripping ice cube. There is an irresistible tendency to conflate the rich imagery evoked by the drawings with the computer data-structures printed just below them (Figure VI-2, page 277). For us humans, after all, the two representations feel very similar in content, and so one unwittingly falls into saying and writing "The computer made an analogy between this situation and that situation." How else would one say it?
Once this is done by a writer, and of course it is inadvertent rather than deliberate distortion, a host of implications follow in the minds of many if not most readers, such as these: computers--at least some of them--understand water and coffee and so on; computers understand the physical world; computers make analogies; computers reason abstractly; computers make scientific discoveries; computers are insightful cohabiters of the world with us.
This type of illusion is generally known as the "Eliza effect," which could be defined as the susceptibility of people to read far more understanding than is warranted into strings of symbols -- especially words -- strung together by computers. A trivial example of this effect might be someone thinking that an automatic teller machine really was grateful for receiving a deposit slip, simply because it printed out "THANK YOU" on its little screen. Of course, such a misunderstanding is very unlikely, because almost everyone can figure out that a fixed two-word phrase can be canned and made to appear at the proper moment just as mechanically as a grocery-store door can be made to open when someone approaches. We don't confuse what electric eyes do with genuine vision. But when things get only slightly more complicated, people get far more confused--and very rapidly, too.
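Hofstadter's point about empty tokens is easy to demonstrate in code. Below is a hypothetical Python sketch (not the real Structure Mapping Engine; all names are made up for illustration) of a matcher that aligns two expressions purely by tree shape, so that a consistently scrambled set of tokens "matches" exactly as well as a meaningful one:

```python
# A toy structure matcher in the spirit of Hofstadter's critique of SME.
# Hypothetical sketch, NOT the actual SME: it aligns two nested
# expressions purely by tree shape, treating every symbol as an opaque
# token with no meaning attached.

def match(a, b, mapping=None):
    """Build a consistent token-to-token correspondence between two
    expressions of identical shape; return None if the shapes differ."""
    if mapping is None:
        mapping = {}
    if isinstance(a, tuple) and isinstance(b, tuple):
        if len(a) != len(b):
            return None
        for x, y in zip(a, b):
            if match(x, y, mapping) is None:
                return None
        return mapping
    if isinstance(a, str) and isinstance(b, str):
        if mapping.get(a, b) != b:  # token already mapped to something else
            return None
        mapping[a] = b
        return mapping
    return None

# The matcher "makes the analogy" between water flow and heat flow:
water = ("Greater", ("Pressure", "beaker"), ("Pressure", "vial"))
heat = ("Greater", ("Temp", "coffee"), ("Temp", "icecube"))
print(match(water, heat))

# ...but it succeeds just as happily on scrambled, meaningless tokens,
# because the words carry no content for it:
scrambled = ("Beaker", ("Greater", "pressure"), ("Greater", "vial"))
nonsense = ("Xyzzy", ("Frobnicate", "quux"), ("Frobnicate", "blorp"))
print(match(scrambled, nonsense))
```

The matcher is indifferent to whether its tokens spell "Pressure" or "Frobnicate", which is exactly the sense in which the terms are, as the quotation puts it, "empty tokens that have the form of English words".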
Quotation II of II
A particularly clear case of a program in which the problem of representation is bypassed is BACON, a well-known program that has been advertised as an accurate model of scientific discovery (Langley et al 1987). The authors of BACON claim that their system is "capable of representing information at multiple levels of description, which enables it to discover complex laws involving many terms". BACON was able to "discover", among other things, Boyle's law of ideal gases, Kepler's third law of planetary motion, Galileo's law of uniform acceleration, and Ohm's law.
Such claims clearly demand close scrutiny. We will look in particular at the program's "discovery" of Kepler's third law of planetary motion. Upon examination, it seems that the success of the program relies almost entirely on its being given data that have already been represented in near-optimal form, using after-the-fact knowledge available to the programmers.
When BACON performed its derivation of Kepler's third law, the program was given only data about the planets' average distances from the sun and their periods. These are precisely the data required to derive the law. The program is certainly not "starting with essentially the same initial conditions as the human discoverers", as one of the authors of BACON has claimed (Simon 1989, p. 375). The authors' claim that BACON used "original data" certainly does not mean that it used all of the data available to Kepler at the time of his discovery, the vast majority of which were irrelevant, misleading, distracting, or even wrong.
This pre-selection of data may at first seem quite reasonable: after all, what could be more important to an astronomer-mathematician than planetary distances and periods? But here our after-the-fact knowledge is misleading us. Consider for a moment the times in which Kepler lived. It was the turn of the seventeenth century, and Copernicus' De Revolutionibus Orbium Coelestium was still new and far from universally accepted. Further, at that time there was no notion of the forces that produced planetary motion; the sun, in particular, was known to produce light but was not thought to influence the motion of the planets. In that prescientific world, even the notion of using mathematical equations to express regularities in nature was rare. And Kepler believed—in fact, his early fame rested on the discovery of this surprising coincidence—that the planets' distances from the sun were dictated by the fact that the five regular polyhedra could be fit between the five "spheres" of planetary motion around the sun, a fact that constituted seductive but ultimately misleading data.
Within this context, it is hardly surprising that it took Kepler thirteen years to realize that conic sections and not Platonic solids, that algebra and not geometry, that ellipses and not Aristotelian "perfect" circles, that the planets' distances from the sun and not the polyhedra in which they fit, were the relevant factors in unlocking the regularities of planetary motion. In making his discoveries, Kepler had to reject a host of conceptual frameworks that might, for all he knew, have applied to planetary motion, such as religious symbolism, superstition, Christian cosmology, and teleology. In order to discover his laws, he had to make all of these creative leaps. BACON, of course, had to do nothing of the sort. The program was given precisely the set of variables it needed from the outset (even if the values of some of these variables were sometimes less than ideal), and was moreover supplied with precisely the right biases to induce the algebraic form of the laws, it being taken completely for granted that mathematical laws of a type now recognized by physicists as standard were the desired outcome.
(cont)
It is difficult to believe that Kepler would have taken thirteen years to make his discovery if his working data had consisted entirely of a list where each entry said "Planet X: Mean Distance from Sun Y, Period Z". If he had further been told "Find a polynomial equation relating these entities", then it might have taken him a few hours.
Addressing the question of why Kepler took thirteen years to do what BACON managed within minutes, Langley et al (1987) point to "sleeping time, and time for ordinary daily chores", and other factors such as the time taken in setting up experiments, and the slow hardware of the human nervous system (!). In an interesting juxtaposition to this, researchers in a recent study (Qin & Simon 1990) found that starting with the data that BACON was given, university students could make essentially the same "discoveries" within an hour-long experiment. Somewhat strangely, the authors (including one of the authors of BACON) take this finding to support the plausibility of BACON as an accurate model of scientific discovery. It seems more reasonable to regard it as a demonstration of the vast difference in difficulty between the task faced by BACON and that faced by Kepler, and thus as a reductio ad absurdum of the BACON methodology.
So many varieties of data were available to Kepler, and the available data had so many different ways of being interpreted, that it is difficult not to conclude that in presenting their program with data in such a neat form, the authors of BACON are inadvertently guilty of 20–20 hindsight. BACON, in short, works only in a world of hand-picked, prestructured data, a world completely devoid of the problems faced by Kepler or Galileo or Ohm when they made their original discoveries. Similar comments could be made about STAHL, GLAUBER, and other models of scientific discovery by the authors of BACON. In all of these models, the crucial role played by high-level perception in scientific discovery, through the filtering and organization of environmental stimuli, is ignored.
It is interesting to note that the notion of a "paradigm shift", which is central to much scientific discovery (Kuhn 1970), is often regarded as the process of viewing the world in a radically different way. That is, scientists' frameworks for representing available world knowledge are broken down, and their high-level perceptual abilities are used to organize the available data quite differently, building a novel representation of the data. Such a new representation can be used to draw different and important conclusions in a way that was difficult or impossible with the old representation. In this model of scientific discovery, unlike the model presented in BACON, the process of high-level perception is central.
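To see just how much work the pre-selection of data does in BACON's "discovery", here is a hypothetical Python sketch: once the problem has been reduced to pairs of (mean distance, period), Kepler's third law falls out of a one-step log-log line fit. The planetary values are approximate textbook figures, used only for illustration:

```python
import math

# Approximate mean orbital distances (AU) and periods (years) for the
# six classically known planets.
a = [0.387, 0.723, 1.000, 1.524, 5.203, 9.537]
T = [0.241, 0.615, 1.000, 1.881, 11.862, 29.457]

# Least-squares fit of log T = k * log a + c. Kepler's third law
# (T^2 proportional to a^3) predicts k = 3/2.
x = [math.log(v) for v in a]
y = [math.log(v) for v in T]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(u * v for u, v in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(f"fitted exponent: {slope:.3f}")  # very close to 1.5
```

Handed exactly these two variables and told to look for a power law, the "discovery" takes a dozen lines; what took Kepler thirteen years was arriving at this framing in the first place, which is precisely Hofstadter's point.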
Oh, I forgot to mention that the quotations are from Hofstadter's Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought.
Quotation I of II is from Preface 4: The Ineradicable Eliza Effect and Its Dangers (commencing on page 155).
Quotation II of II is from Chapter 4, High-level Perceptions, Representation, and Analogy: A Critique of Artificial-intelligence Methodology (commencing on page 169).
Had to transcribe Quotation I; Quotation II can be found here.
The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.] -- Francis Bacon
That really is marvelous Glenn, thank you
Dr. Feser, this may be a bit off topic, but I would be interested to find out your position, if you have one, on the SSPX, after an in-depth review of your philosophical standpoint?
I hate to troll, but since I have nothing useful to add I just want to share my amazement at how much time TS must have to be able to write such long books so often in response to simple questions.
ReplyDeleteWell Glenn, in truth I am crazy hhahahah
It's just that I have learned, through a series of performative models, to emulate other people, which makes me look pretty damn normal... But at night!!!!!!!
Or Touchstone is just the ugly duckling...
Which means he is a beautiful swan, for anyone that doesn't know the story... I think it was a swan, at least.
So... Should we start, from now on, writing down the premises of Dr. Feser's post and then discussing them one by one? It would remove the trolling, I say!
So I know some of you have argued that if something is ambiguous you just won't get meaning out of it... But since I am mostly an ass to Touchstone, I would like it if perhaps you, Rank, or Anon would explain it further, with more details, so my model/system-based head can grasp the demonstration of what you people are saying.
Oh... Touchstone should have asked this... But nooooo, it's too damn hard to do this; it's better to call other people's ideas superstitions!!!!!!
PILLS!!!
goddinpotty said... I am tempted to say that he's too stupid to understand, but I don't think that's accurate, it's more like he's actively working to not understand. Which is fine if you are doing religion, not so good if you claim to be doing philosophy.
Wow, way to reveal your intellectual dishonesty. If you can't understand what somebody is saying, then he must be stupid... or a liar. The anti-religious bigotry is just icing on the cake. To anyone who ever wondered if goddinpotty was intellectually serious, wonder no more.
Lol, hadn't seen that post there... It would be best if I hadn't, hahahahaha.
Intimidation/assertion.
Why you do not change !?
@Glenn,
Just having a quick bit of lunch at work here, so no time now to post more than this, but thanks for taking the time to transcribe those quotes (if I understand your comments above on that). Hofstadter is a gifted writer, in addition to being a gifted thinker.
If you've read Gödel, Escher, Bach then you can surely anticipate my response: Hofstadter is quite clear-eyed about the difficulties and challenges that obtain in weak AI, and all the more in strong AI. He has made a bit of a mini-career of being a critic of the many efforts in this area that are simplistic, confused, or over-hyped.
The SME is a notorious example. I'm sure the coding is quite sophisticated and all that, but Hofstadter's point is more forceful than even he puts it: strong AI is not a feature of a computer, or any array of computers, any more than intelligence is a feature of a brain. Intelligence, as we understand it, is a function of a being, a body, not just an organ -- an integrated system that interacts with its environment in rich and dynamic ways.
In short, there is no substitute for *experiencing* the real world, and having experienced the real world for a long time, as a predicate for thinking in the real world in ways we would agree (Thomist dogma notwithstanding) are "intelligent" in a similar fashion to humans.
That changes the design spec for AI significantly. Now you have to build an "artificial being", and incorporate something like a nervous system and a hardware/wetware layer for navigating and sensing and interacting in physical ways with the world, not just via a fiber optic data interface. It makes budgets go way up and timelines slow way down. But, as Hofstadter rightly points out, and as I've seen first hand in some of my own past projects, without that, you can't get where you'd like to go.
This is how research programs and innovative projects succeed: by such reckoning with the problem. But to read this here, in the context given, seems to miss Hofstadter's larger message: we must incorporate the "whole stack," as they say in programming, to be able to reproduce the phenomena of thinking; and it is in doing this, and doing this in a robust way, that we can see strong AI succeeding, and succeeding in profound ways. As any software developer will tell you, understanding the problem domain is a key indicator for the eventual success or failure of the project. "Thinking" and "intelligence" and "consciousness" are difficult and complex natural phenomena to model -- we don't have enough science yet to make a good go of strong AI.
Hofstadter points out the problems, and the path to strong AI. But this criticism and guidance is given BECAUSE there remain no fundamental problems known, no reason to think that a silicon machine with an architecture similar to ours as carbon machines can't do what we do -- and, by extension, humans as carbon machines are highly plausible as emergent from natural processes, precisely because we can build similar (or surpassing) silicon machines. Then the argument reduces to "well, it couldn't happen by evolution/abiogenesis" from the creationists, and "I don't see why not, and no other mechanism is available to explain it" in reply.
That's progress, with help from Hofstadter, who, as you point out, is convinced that life arose from non-life naturally, and intelligence is an artifact of that process. If we can build machines that think, and think in the strongest, most robust sense, then the objection "machines can't think" will become just a matter of denialism.
-TS
Eduardo,
Oh ... Touchstone should have asked this.... But nooooo, is too damn hard to do this, is better to call other people's ideas superstitions!!!!!!
Here's the first paragraph in the Wikipedia entry for 'superstition':
Superstition is a belief in supernatural causality: that one event leads to the cause of another without any physical process linking the two events, such as astrology, omens, witchcraft, etc., that contradicts natural science.[1]
That's right on the money regarding, say, the belief in "immaterial intellect" and its supposed efficacy in thinking.
-TS
Actually, it all depends, as you have put it before, on defining things the way you need. Define "think" in a way you like and boom, THINK is something you can explain away... It's not denialism, it's just people who don't want to climb aboard your boat.
Why not simply say that you have no idea what you're seeing; all you've got is models, which could be, for all we know, delusions... Oh wait, people already said that to you, didn't they?
Okay, definition noted... Time for you to prove what is natural and what is unnatural, so we can know it falls under that definition.
So what is natural?
Touchstone,
Do you plan to explain how your views escape self-refutation along the lines I indicated above? That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate? There seems to be no move that gets you out of this. Note that, if you accept it, you're committed to total irrationalism without objectivity, science or any of your other favorite things. If you can't escape these issues, then I'm afraid that your attacks on Feser have all failed.
Define physical too, Touchstone... Let's bring this conversation down to tiny, tiny bits, so you and I can avoid assertions.
ReplyDelete@rank sophist,
You'll have to pull out the lines you want to stick to for the case for "self-refuting". As it is, you've got a lot to offer as a debate partner, but your "self-refuting" charge, every time I've seen it, doesn't even rise to being applauded as "lazy". It's just a troll's itch you are scratching, as best I can tell, and there's no argument or content to engage. That's annoying, but it's just a bit of noise in the channel. Pushing that to some kind of triumphalism about having made this case... well, that's just a bit of playing with yourself in public, intellectually.
For example, I said, upthread:
If you want to understand the raw input to the brain (or program) as 'interpreted' by virtue of its translation from photon dynamics to electrical signals, fine -- that in no way precludes it from being determinate content.
To which you responded:
Thank you for admitting it. This means that your system is self-refuting.
You gotta be kidding me. That's not even comment worthy, not even handwaving worth pausing for.
If you'd actually like to state a case for your understanding of the argument I'm making, and why that argument is self-refuting -- something I can take a bit more seriously than a naked non sequitur like the above, some substance to interact with -- I'm game.
Here's another example from you:
I know it isn't a vicious regress. Derrida wasn't that terrible of a philosopher. It's still a self-refuting position, though.
Never mind that I'm not Jacques Derrida, nor one who subscribes to his arguments (you've confused my familiarity with and understanding of some of his critiques -- that "male|female" may be problematic, and abandoned for a "gender space" that admits of degrees of androgyny or other points in that space, for example -- with subscription to his positions); "It's still a self-refuting position, though" is just fine as an idle assertion, something I just roll my eyes at and move on from. But apparently, you think this throws down the gauntlet somehow.
If so, please, get serious.
If you have a case to make, make it. Don't think you can pull a troll's trick and say "self-refuting, now dance! prove it's not!" That may work on other posters. I'm not gonna take that bait.
I'll chalk this up to a simple misunderstanding, not be offended that you'd think I'm such a chump as that, and wait for something to engage with, if you've got it on this topic. As it is, saying "It's not" goes further than needed, and that is plenty, given your case.
I noticed this with the "nominalism is incoherent" thing. Oh yeah, by the way, that fails because everyone knows nominalism is incoherent....
That's just a barrier to taking your posts seriously. You have lots to offer to engage with, but seem to gravitate to the parts you offer that are content-free, and conceited. I don't doubt you have the conviction that "nominalism is incoherent", or that some unspecified version of one of Derrida's arguments is self-refuting, but you're carrying on like you suppose these are givens across the board. I know there's a fanboy factor here that encourages that, but it's untoward, awkward.
Make your case man. It's a hell of a lot more interesting to engage, and for others to read, than this kind of stuff.
-TS
@rank sophist,
Here's my try at anticipating a case from you, foreshadowed by this:
Do you plan to explain how your views escape self-refutation along the lines I indicated above? That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate? There seems to be no move that gets you out of this. Note that, if you accept it, you're committed to total irrationalism without objectivity, science or any of your other favorite things. If you can't escape these issues, then I'm afraid that your attacks on Feser have all failed.
First off, even if you, or I, think that I have issues in my own views that are problematic, and inescapably so, that does NOT mean that my attacks on Dr. Feser's ideas have failed. That's a non sequitur. I recently watched a young earth creationist, and an extreme one (uses the Ussher date for the beginning of the world, for example), take apart a new age/pagan type on the issue of miracles on another forum. That's a topic I think a YEC has serious trouble with by virtue of those beliefs, but I did (and should) say that her critiques were a heavy left cross followed by a crushing right hook to the claims made by the subject of her criticism.
I know this is a common apologetics move, but it's a fallacy. The arguments stand on their own merits. The YEC's critiques are not dismissable because she has beliefs on the subject (or other subjects) which she cannot adequately defend. If you are familiar with the lovely human species called the Calvinist, and the Calvinist who endorses "presuppositionalist apologetics", you'll be aware of this problem. Can't explain the origin of the universe? Aha! Therefore Calvinism! All your attacks on [the Calvinist version of Dr. Feser] fail!
Second, "indeterminate" does not mean "unknowable". It means "not certain, vague" (see a dictionary to confirm). We can and do have knowledge that obtains as a function of doubt and uncertainty. In fact, beyond the certainty we have of an "I" on the basis of the cogito, all of our knowledge is laced with some degree of uncertainty and doubt -- knowledge of the real world is necessarily uncertain for the very reason you identify: it comes filtered through our senses, through a layer we cannot get out of (Thomist notions of immaterial intellect and revelation, etc. notwithstanding). But indeterminacy does not refute or dismiss anything on its own. Fuzzy logic models often outperform the brittle/polar models they replace precisely because they incorporate indeterminacy and uncertainty in their heuristics.
I'll stop there to make sure we're not wasting our time talking past each other on "indeterminacy", and ask:
If I put 1000 marbles into a pachinko machine, where will the first ball I drop end up? Is that determinate in your view, or not? How about the second ball? Do you know where it will end up?
If that is an indeterminate outcome (and I'll let you decide if it is or not), once I've let all thousand marbles go into the machine, do I know the shape of the distribution of those marbles in the slots at the bottom? Can I make reliable guesses as to the approximate shape I might expect? Is that determinate or indeterminate?
Your answers to that will go a long way in providing clarity for any debate we might have on this. From this, I will better understand the argument you are making with those terms.
-TS
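For what it's worth, the pachinko scenario is easy to simulate. Below is a hypothetical Python sketch that models the machine as a simple Galton board, where each of 12 pins deflects a falling ball left or right at random; the particular numbers (12 rows, 1000 balls) are assumptions for illustration:

```python
import random

# Hypothetical sketch of the pachinko machine as a Galton board:
# each of ROWS pins deflects a falling ball left (0) or right (1)
# at random, and the final slot is the number of rightward bounces.
random.seed(0)
ROWS, BALLS = 12, 1000

def drop_ball():
    return sum(random.randint(0, 1) for _ in range(ROWS))

slots = [0] * (ROWS + 1)
for _ in range(BALLS):
    slots[drop_ball()] += 1

# No single ball's slot is predictable, but the aggregate distribution
# is reliably a binomial bump centered on ROWS / 2:
print(slots)
mean = sum(i * n for i, n in enumerate(slots)) / BALLS
print(f"mean slot: {mean:.2f} (expected ~{ROWS / 2})")
```

Each individual drop is indeterminate in the relevant sense, yet the shape of the final distribution is reliably predictable, which is the distinction the two questions above are driving at.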
Not that arguments are necessarily any better hahahahha
But Rank's argument might be something like... You have a group: A, B, C... Z. Now, they are all ambiguous; in other words, the measurement that we use to measure understanding cannot be pinpointed... It can't be measured. That is supposed to be ambiguous.
It seems that Touchstone is rather saying that an ambiguous characteristic is something like: A has "x" to "y" in our measurement. So it is ambiguous, but it is defined somewhat. Now, this could be the wrong interpretation of his view, but his claim that you can know X because you can talk about it with Y and Z seems rather awkward. It seems to work only if you define that you know all the rest; otherwise you will never know anything... X is defined by 2Y and 3Z, Y is defined by X and 4Z, and so on. Now, you could go on to "solve" the system, BUT X, Y and Z are still unknown. Well, maybe Touchstone's percept is just an unknown thing...
It does seem that Rank is correct, that you can't find any meaning in this system.
Touchstone, I don't think your case is similar to that YEC. Rank didn't claim that you were wrong because you had such and such belief. He said your model, or alternative, doesn't work, and therefore can't hurt Feser.
You are confused, Touch; you are more confused than me. Go back and read it again; draw it out if you have to!
Eduardo has it pretty much right. X, Y and Z will remain unknown no matter how much you define them in terms of X2, Y5 and so forth. Something has to be unambiguous--determinate--for it to disambiguate. Either you must eventually reach something with determinate content (understood as non-ambiguous, not as "deterministic"), or nothing is ever determinate. Derrida accepts the conclusion that there is nothing determinate, and you, by saying that brain code is indeterminate, must necessarily bite that same bullet.
Further, your attacks against Feser fail because they were based on a certain model of thought. However, this model of thought (reduction to indeterminate signs and associations), as I have just shown, is self-refuting. Certainly, Feser could be wrong--but it sure isn't because your objections hold.
You gotta be kidding me. That's not even comment-worthy; it's not even handwaving worth pausing for.
If you'd actually like to state a case for your understanding of the argument I'm making, and why that argument is self-refuting -- something I can take a bit more seriously than a naked non sequitur like the above, some substance to interact with -- I'm game.
I elaborated on the claim right below that. Your system must necessarily be self-refuting if it contains nothing determinate, because then its own truth is indeterminate. This is the same problem that wrecks Derrida's argument: it winds up as "the certainty that nothing is certain". It's patently self-contradictory.
Because you cannot shift the burden of determinacy to the "outside"--there is no "outside" if we can only know the "outside" via reduction to indeterminacy--, you cannot ever reach what Derrida calls the "Transcendental Signified": that which is wholly determinate and unambiguous. As a result, it must be the case that the very bones of your claims are indeterminate, and hence the system undermines itself.
Touchstone,
The issue is much simpler than you are making it.
Feser:
In particular, there is nothing in the picture in question [triangle] or in any other picture that entails any determinate, unambiguous content.
You:
Second, "indeterminate" does not mean "unknowable". It means "not certain, vague"
Given that particulars are indeterminate, whence come determinate, unambiguous concepts? It's a simple question, and statistical overlapping can never get you there. It's asymptotic. It's a difference in kind. It's a simple logical question that has been asked for 2000 years.
Rank puts it another way:
That is, that we could never possibly know supra-mental determinate content if the "brain code" to which such things are reduced is indeterminate?
In fact, beyond the certainty we have of an "I" on the basis of the cogito,
Sadly, Derrida's system--and, subsequently, yours--undermines even that certainty. Even the idea of "doubt" becomes indeterminate.
But indeterminacy does not refute or dismiss anything on its own. Fuzzy logic models often outperform the brittle/polar models they replace precisely because they incorporate indeterminacy and uncertainty in their heuristics.
But the fuzzy logic models themselves would have to be determinate, rather than indeterminate. If the models themselves were indeterminate, then they could not measure anything. Under your system, the models, too, must crumble.
"Determinate", to Touchstone, seems to mean a determinate number in a string of real numbers. That is why his fuzzy logic seems ambiguous to him, hahaha.
But that is not what you people mean, is it?
Touchstone: Superstition is a belief in supernatural causality: that one event leads to the cause of another without any physical process linking the two events, such as astrology, omens, witchcraft, etc., that contradicts natural science.[1]
That's right on the money regarding, say, the belief in "immaterial intellect", and its supposed efficacy in thinking.
Even for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against.
Out-of-touchstone said... @Eduardo That emergent characteristic you just spoke of is also known as FORM to your... adversaries.
No, but that's a good point to make for clarity: blahblahblah
Uh, so in other words, Eduardo was exactly right: you have no clue what "form" is. Sheesh.
Guys, can we make a new rule? No carrying on discussions of Thomism with anyone who can't figure out forms.
Anonymous said... I hate to troll, but since I have nothing useful to add I just want to share my amazement at how much time TS must have to be able to write such long books so often in response to simple questions.
But it doesn't really take much time when you put zero effort into understanding the thing you're blindly attacking, you see.
Anon above me,
That can't be possible, since Touchstone is a seasoned debater and a great knower of Thomism!
It is simply not logical, and therefore by my models... impossible, that he has no clue what FORM is!
Because you cannot shift the burden of determinacy to the "outside"--there is no "outside" if we can only know the "outside" via reduction to indeterminacy--, you cannot ever reach what Derrida calls the "Transcendental Signified": that which is wholly determinate and unambiguous. As a result, it must be the case that the very bones of your claims are indeterminate, and hence the system undermines itself.
This is nothing different from noting that no perfect triangles occur in nature. No triangle (or circle, or square, or...) can be reified according to a Euclidean ideal.
But that doesn't preclude our understanding of, use of, and location of triangles in nature. How can this be? Because close enough is close enough. Every triangle is imperfect -- fuzzy, jagged as a matter of physics compared to an ideal triangle -- but that doesn't preclude us from using and manipulating triangles, conceptually, or as features of physical objects.
Your claim can only hold up if you assume that a given concept-in-context is either wholly and perfectly unambiguous, "universally perspicuous", or else it is completely opaque, intractable, utterly impenetrable. Either a perfect Platonic Triangle, or no triangles anywhere, at all.
As soon as you allow for degrees of ambiguity and degrees of uncertainty, the system runs fine, just as nature does without a single perfect triangle.
But, again, we're getting ahead of ourselves, I think, as the more you use the word 'indeterminate', the more conflicted the usage becomes. If you want to use that word as your fulcrum, that's fine, but you'll have to provide the measure you are using for that term. The definition you regard as controlling in your claim should be provided. As it is, it's indeterminate, not clear or certain enough to apply -- my best guess is that you are using it to mean "not certain, not definitely or precisely known".
Tell me what your operative definition is for "determinate" and "indeterminate", and perhaps we can make further progress. Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?
Thanks,
-TS
Holy shit... You can actually post a good, readable post!
This was by far the BEST post that you have made in the thread, Touchstone. Congrats....
Who could imagine that all those ponies that compose Touchstone could do something so remarkable?
By the way, your conclusion of no forms or perfect forms doesn't seem to have any premiss leading to it.
Your argument can be turned against you: one could say that we have no need for your system, because we have these things you don't really grasp.... A worthless argument, this argument from necessity.
@Mr. Green,
Even for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against.
The intellect operates *outside* of natural physics, no? If not, are you proposing that the immaterial intellect is amenable to natural models? If not, you have action, and more flagrantly personal action/will, obtaining outside of the natural.
Someone who is superstitious about "Friday the 13th" doesn't need to posit a deity or a demon as the predicate for their fears about bad fortune on that day. The cause may be *exactly* immaterial in the way the "immaterial intellect" is held to be immaterial on A-T. The salient feature of that superstition is that it identifies interaction and causality outside of natural processes, transcending physics.
As for contradicting science: science doesn't provide a natural model for cognition or human language processing and then allow for "a bunch of immaterial stuff in here to round out the intellect". That scientific models do not rule out "immaterial intellect" explicitly does not mean there's no contradiction. Science doesn't rule out immaterial unicorns as the cause of gravity, either, but it would be ridiculous to maintain that "there's no contradiction between Immaterial Gravitational Unicorns™ and our model of gravity". That the "immaterial intellect" isn't even a coherent concept for science -- it's a divide by zero -- is enough. Science leaves such things out because they aren't needed, and they are alien to the model.
This is precisely what the label 'superstitious' seeks to identify: belief in activity and interaction that have no basis in science, no basis in our knowledge of nature. The "Friday the 13th" superstition doesn't "contradict science" if "immaterial intellect" doesn't. That immaterial bad luck just obtains *in addition* to science, yeah?
If the belief in "immaterial intellect" is simpatico with science, then so is the Broken Mirror superstition, so is the Black Cat superstition, so is the Garlic as Vampire Repellent superstition, and on and on. None of these "contradict science" in the equivocal way you invoked above. Science doesn't discredit Black Cat superstitions, or bother to contradict them, any more than Immaterial Intellect superstitions. They are just superfluous: beliefs that are useless and extraneous to scientific models.
-TS
@Mr. Green
ReplyDeleteEven for Wikipedia, that's a bad definition, and yet it still is nowhere near the money. The intellect is not supernatural, nor does anything about it contradict natural science. If you're not simply trying to be a smart-alec, then you are seriously misunderstanding the thing you are attempting to argue against.
Should have added...
If you believe "immaterial intellect" is part of the natural world, and in such a way that our scientific models can (or should) incorporate that dynamic, then I agree that I was confused about the way you (and others) construed the term, and will retract the claim that the belief is superstitious. I'd be quite surprised to learn this, but stand to be corrected.
I've never encountered "immaterial intelligence" as even a contemplated or putative component of a natural model. It is always placed outside of nature, beyond the reach of science and natural knowledge (hence the "immaterial").
-TS
I don't have much to add, other than that the AI example of BACON provided by Glenn reminds me of an old Bad News quote:
"I could play 'Stairway To Heaven' when I was 12. Jimmy Page didn't actually write it until he was 22. I think that says quite a lot."
Because close enough is close enough.
Awesome.
Either a perfect Platonic Triangle, or no triangles anywhere, at all.
Why is it that a sizable number of the opponents on this blog keep making this mistake? There's a middle ground called moderate realism, where the concept of triangularity exists in the mind, not in a Platonic realm.
Every triangle is imperfect -- fuzzy, jagged as a matter of physics compared to an ideal triangle -- but that doesn't preclude us from using and manipulating triangles, conceptually, or as features of physical objects.
Repeating the question (yet again): Why doesn't it preclude our use of a determinate concept, given that all the material we have to work with is indeterminate?
This is nothing different than noting that no perfect triangles occur in nature. No triangle (or circle, or square, or...) can be reified according to a Euclidian ideal.
That isn't what it means. Not in the slightest. Under your system, brain code reduces the determinate to the indeterminate. Because we use our brains to think, this makes it impossible to know anything determinate at all. Whatever we call "determinate" is always already invaded by total indeterminacy, because it has always already been reduced to brain code that has no determinate content. Everything, including all talk of photons and so forth, becomes totally ambiguous and relativized. Everything is illusory. At this stage, the methods that you used to reach this conclusion are undermined, and your argument goes down the toilet.
But, again, we're getting ahead of ourselves, I think, as the more you use the word 'indeterminate', the more conflicted the usage becomes.
Determinate: Having exact and discernible limits or form.
Indeterminate: Not certain, known, or established.
Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?
I subscribe to the system defended by Prof. Feser.
To summarize the debate so far.
TS: We don't need intentionality (and, subsequently, forms), because we can form objective, pre-conscious percepts. Just look at these computers!
Me: Computers have intentionality infused by us. Each symbol of code has semantic, determinate content. If, like computers, our brains reduce input to code, then the code would have to possess intentionality as well.
TS: Well, you're right that it reduces input to code, but we still don't need intentionality!
Me: Then our "brain code" would be utterly ambiguous.
TS: We can just shift the determinate content (intentionality) to the outside!
Me: If our brain code works by reduction, then any idea of the "outside" is only another instance of indeterminate code, and it therefore contains no meaning whatsoever.
TS: ...
@rank sophist
That isn't what it means. Not in the slightest. Under your system, brain code reduces the determinate to the indeterminate. Because we use our brains to think, this makes it impossible to know anything determinate at all. Whatever we call "determinate" is always already invaded by total indeterminacy, because it has always already been reduced to brain code that has no determinate content. Everything, including all talk of photons and so forth, becomes totally ambiguous and relativized. Everything is illusory. At this stage, the methods that you used to reach this conclusion are undermined, and your argument goes down the toilet.
They are just not certain. 90% is not 100%, but it's not 0%. Right? Look, you have two systems -- an extra-mental system (the world beyond our senses) and a model (a conceptual "map" of the 'territory' that is the extra-mental world). To the extent you can build a model that makes novel predictions and accounts for the empirical evidence (input from the behavior of the 'territory'), our map performs as a map, an isomorphism to the territory. It's never complete (we don't even know what that would mean, or how that could be established), nor perfectly unambiguous (same problem), but we can judge the relative strength of those isomorphisms; some maps track more closely to the territory than others, based on the input we have available from the territory.
But none of that is certain in any final sense, nor complete (that's an undefined concept), nor perfectly unambiguous (it's always susceptible to some form of underdetermination among competing hypotheses). There are no reference frames to even *calibrate* those terms by -- and there cannot be, because any "reference frame" would itself have to show its own basis for calibration, and... boom, vicious regress.
No matter; that's a fool's errand. Ambiguity, uncertainty and indeterminacy come in degrees, and we can (and do!) build systems that are semantically rich, highly specific, and effective for purposes of communication, model building, knowledge development, etc. Your key mistake is captured in "totally ambiguous". Somewhat ambiguous is not totally ambiguous, and there's no basis for thinking this is a binary phase space -- total determinacy or total ambiguity -- such that those exhaust all the options.
The methods behind my conclusions are not 100% certain, and cannot be. But neither are they 100% ambiguous or opaque. They are meaningful, and yet carry some measure of ambiguity, as is intrinsic to human language. They are reliable -- when you get on an airplane, you are placing your well-being in the hands of this epistemology -- but they are not and cannot be 100% certain.
Determinate: Having exact and discernible limits or form.
That's not operative. How do you determine whether a limit is exact? Providing an example would be very helpful, because I see no indication that you have a working definition that can be applied in your argument, based on what you have said so far.
Indeterminate: Not certain, known, or established.
That's an indeterminate definition. How do you establish certainty, so that we might see if we have it or not? Please provide an applied example.
"Given any statement, how do you establish the determinacy of its meaning, if any, and what criterion is used to measure it?"
I subscribe to the system defended by Prof. Feser.
You have got to be kidding me. I am embarrassed for you, responding like that. You must be pulling my leg -- a wry play on "indeterminate", I hope. If you are serious... fail!
Why not engage with terms and concepts we can apply and test here? This could be an interesting exchange, but you're just blowing smoke here. Hiding behind Dr. Feser.... tsk. Make your own case -- copy and paste and borrow all you like, but present it as yours.
-TS
There was a fair amount of internal critique of these kinds of problems within Artificial Intelligence during the 80s and 90s. See, for example, Lucy Suchman's Plans and Situated Actions, or Phil Agre's Computation and Experience. A summary: yes, a lot of computational models of the mind suffered from wishful thinking (or what would be called here inherited rather than original intentionality) and some rather crude ideas of what the mind does.
This does not prove anything about the ultimate possibility or impossibility of computational or mechanical models of thinking, however. It just shows that the early efforts were too simplistic and the problems are much harder than was thought, and require some real sophistication to solve. The two named writers employed critical practices from sociology and philosophy to try to reform AI, not to prove that it was impossible.
Hofstadter, who did a lot of his work around the same time, also was critical in his own way of standard AI, but from a different perspective.
They are just not certain. 90% is not 100%, but it's not 0%. Right? Look, you have two systems -- an extra-mental system (the world beyond our senses) and a model (a conceptual "map" of the 'territory' that is the extra-mental world).
You clearly are not as familiar with Derrida as you claimed. Touchstone, there is no extra-mental system. When you apply Derrida's associative signs, the model is all that is left. That's just how it works. You can't even coherently grasp the idea of an "extra-mental system" anymore. Whether you realize it or not, this argument is already over.
Touchstone,
If you've read Gödel, Escher, Bach, then you surely can anticipate my response that Hofstadter is quite clear-eyed about the difficulties and challenges that obtain in weak AI, and all the more in strong AI.
Regarding the particular point, why drag GEB into it? Surely knowing that you had or would read the quotations provided above ought to be sufficient for one to display the anticipatory prowess of which you speak.
...we don't have enough science yet to make a good go of strong AI.
Do not despair—there's always the 'in principle' possibility that someday we will.
...there remain no fundamental problems known, no reason to think that a silicon machine with a similar architecture us carbon machines can't do what we do...
I agree that insufficient science should never be seen as a fundamental problem--especially when seeking to accomplish a stated scientific goal.
(But then perhaps you meant to say that the problem isn't one of fundamentals, thus meaning to imply that we already have all the non-physical, nonmaterial 'proof' that is required, and that it's just a matter of time ere these non-physical, nonmaterial somethings can be successfully instantiated in physical, material things.)
...and by extension that humans as carbon machines are highly plausible as emergent from natural processes, just because we can build similar (or surpassing) silicon machines.
Since I'm not a biochauvinist, I can say with a straight face that humans as leverers, weight bearers and locomotors are highly plausible as emergent from factories, simply because we can manufacture...
If we can build machines that think, and think in the strongest, most robust sense, then the objection "machines can't think" will become just a matter of denialism.
And if we can help you nudge your net worth up and beyond that of Bill Gates', then the objection that you aren't worth more than Bill Gates could only be made from ignorance.
Let me put it in more concrete terms.
(1) A system of signs obtains its meaning from outside of itself.
(2) Our brains are based on a system of signs.
(3) Our brains reduce all outside input, even pre-conscious, to this system.
(4) Mental processes are totally within the system.
(5) The concept "outside of the system" is totally within the system.
(6) The referent for the concept "outside of the system" cannot be known except by reduction to the system.
(7) Therefore, the referent for the concept "outside of the system" is always already part of the system.
(8) But an outside of the system is required to give the system meaning.
(9) Therefore, the system's meaning is unknowable or non-existent.
Needless to say, all concepts related to "photons" and other scientific things have already been reduced to the system of signs. Under your brand of computationalism, you're stuck with Derrida's universal relativism, in which science teaches us absolutely nothing. However, since this undercuts all of the premises that led you to say that your brain was based on a system of signs, you can rest easy knowing that you've refuted yourself, and that science can live another day.
Touchstone seems stuck in not understanding what Rank is talking about.
I don't know if your system actually works either.... the one you were trying to explain in that last comment.
A is 90% determinate
Everything that is not A is also 90% determinate.
If you describe A by non-A... you get 100% determinate.
From an environment that only offers you 90% determinacy.
Yeah ... I agree it is possible ...without my pills.
If that's what you mean... then this is superstition by your own definition! Brilliant!!!!
@rank sophist
(8) But an outside of the system is required to give the system meaning.
No, and that can't be true, by its own measure. Anything you suppose is 'outside the system' is *inside* the system by virtue of being the grounds for some meaning. This is transcendentally true; it's presupposed by the concept of meaning. If you have to go 'outside' to ground meaning, you have necessarily brought whatever-it-is 'inside'. If it's not inside, and it's not part of the system, it cannot be the basis for meaning; it's not referenceable in the system as the basis for carrying semantic cargo.
This is not bound by physicalism or naturalism or any mode of existence. If you postulate a Transcendental Cosmic Immaterial Meaning Machine as the absolute manufacturing point or authority of "absolute meaning", that is not "outside" the system of meaning; it is *inside* the system, and not just inside, it's *central* to it. Any Thomistic basis for meaning is necessarily 'inside' the system of meaning -- that's why it would be described as a 'basis for meaning'.
So your (8) isn't just dubious as a premiss, it's transcendentally false. It cannot possibly be sound. Being 'outside' means it's not connected or related to our system of meaning.
The error at (8) is sufficient to dismiss your conclusion, but it's worth pointing out that (9) doesn't follow from (8), even if (8) were somehow possibly sound. When we create meaning, and rely on meaning, we are not talking about the "meaning of the entire system"; we are making semantic distinctions *within* the system. The 'meaning of the system as a whole' is undefined, because there are no referents external to it with which to associate relationships. But if I wonder what the meaning of "apple" is in conversation, that is a concept that identifies the relationship between subjects and objects *inside* the system -- "apple" is not "horse", for example, as a rudimentary distinction between referents. "Apple" can and does have meaning by distinguishing what it does *not* refer to, inside the system. Concepts inside the system are "system-internal". It's not illusory or meaningless -- you use this system to good effect just in participating in this thread. Where a sender can impart information -- this and not that -- you have meaning, demonstrated.
If you are not confusing "the system's meaning" -- the meaning of the system itself -- with the "instances of meaning within the system", then there's no ergo, no basis for your "therefore" in (9).
I appreciate your putting this in succinct, more concrete terms. That is helpful and productive, thank you. It reveals the nature of the problem in what you've been claiming.
-TS
In The Last Superstition you talked about the problem atheists have explaining "forms" of secondary qualities like color, and gave the example of two people looking at a red object. One saw green and one saw red. That would be hard to reconcile with their explanation of neurons firing in response to various frequencies. But couldn't they respond that the neurons in the color-blind individual were genetically defective?
P.S. Have some of your students spend a little time on the Philosophy Forum at Catholic Answers. It's a mess; they need some real philosophers over there, and they are being overwhelmed by atheists and kooks.
... Shiiiit, Touchstone didn't get it again. Those eight steps are not necessarily premisses. They are a chain of thinking; number eight is a conclusion from some stuff before.... Are you seriously saying you didn't notice it was a chain of thinking of some sort?
RS, are you a philosophy professor? A grad student in philosophy? It's been over a year since I've visited this blog and back then you were nowhere to be seen.
ReplyDeleteHere's an example a professor in Comp Sci used with us long ago now on the concept of semantics and "inside/outside".
Consider a computer game (maybe now we'd just call it a screensaver...) where circular objects float around a 2D space (the screen) and interact like virtual magnets, attracting and repelling each other based on polarity and proximity to other objects. Inside this system we can ask -- and in fact must ask, in order to resolve the physics and have the objects move toward or away from each other -- "What is the DISTANCE between Object A and Object B?"
This is an internal relationship set. "Distance" is meaningful, and is calculable within the system. A "distance" obtains between any two objects on the screen. It works, you can write code against it, the semantics are clear enough to do math against it.
So if we can identify the semantic grounding for "distance" in this system, what, then, is the DISTANCE of the system itself? What is the DISTANCE of the game? No sooner does the professor finish asking the question than the students complain that the question itself is confused. And they are right; the professor's point was to show the implicit context of our notions of semantics, the level of description that must obtain for semantics to work, to be effective in carrying semantic weight. Someone had made the point that in our computer algorithms we could focus on breaking the problem down computationally and develop machinery that worked on "distance" in some context-free sense.
Asking what the "distance of the system" is illustrates the transcendentals of that system. For "distance" to have meaning, it can ONLY obtain inside the system, because meaning itself is predicated on relationships between nodes in the system.
A physics book whose title I can't recall anymore made the same point when I read it some years later. How big is a proton? Well, we can provide a meaningful answer only by way of comparison to other things in the system. And that works. But if you ask, "how big is the universe?", there is nothing (that we know of) to compare it to. And even if we did have something to compare it to, those things would all be part of our universe, resetting the question.
Same principle, and it's not a novel or esoteric one. I just wanted to take a moment to invoke them in this context because the same principles apply, here. Meaning obtains as a set of relationships between entities within a system.
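The professor's example can be sketched roughly like this. The object names, coordinates, and class layout here are all made up for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Ball:
    """A circular object floating in the game's 2D space."""
    x: float
    y: float

def distance(a: Ball, b: Ball) -> float:
    """Distance is a relation BETWEEN two objects inside the system."""
    return math.hypot(a.x - b.x, a.y - b.y)

a, b = Ball(0.0, 0.0), Ball(3.0, 4.0)
print(distance(a, b))  # 5.0 -- well-defined inside the system

# "What is the distance of the game itself?" has no referent here:
# distance() requires two nodes of the system as arguments, so the
# question is malformed at the type level, not merely hard to answer.
```

The type signature makes the point concrete: "distance" is only defined relative to pairs of entities within the system, so asking for the distance *of* the system is a category error, not an unanswered measurement.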
-TS
Touchstone, your point is irrelevant to the argument. You see, it's just like I said before: the basic stuff in your system is all meaningless... It's like you ask, HOW BIG IS A PROTON? And the question itself has no meaning whatsoever, so how are you even gonna try to answer it... What Rank is talking about goes all the way there. The meaning/determinacy talk was about the objects, not just what humans or brains feel about the environment....
Your system still has no determinate stuff, and it never will... Well, you can pretend that it does, I think.
@Eduardo,
... Shiiiit, Touchstone didn't get it again. Those eight steps are not necessarily premisses. They are a chain of thinking; number eight is a conclusion from some stuff before.... Are you seriously saying you didn't notice it was a chain of thinking of some sort?
Oh, I understood it to be a 'chain of thinking'. A premise is a 'link in the chain' for a syllogism, for a rigorous chain of thinking. If you read (8), it's not a conclusion -- it starts with "But", not "Therefore" or "Then" or "Because of this...". It's a proposition offered as true (or perhaps we should say 'sound') as the predicate for the conclusion to follow. As it happens, (9) doesn't follow from (8) even if (8) is true, but that's not the point I'm making to you -- (9) has the form of a conclusion or a production, whereas (8) has the form of a premiss.
-TS
Touchstone,
> @rank sophist
>> (8) But an outside of the system is required to give the system meaning.
> No, and that can't be true, by its own measure. Anything you suppose is 'outside the system' is *inside* the system by virtue of being the grounds for some meaning...
You read and considered--starting at (1) all the way up to, including, and beyond (8)--then circled back to 'refute' (8)?
Realize ye not that ye could simply have taken aim and lobbed your attempted refutation at (1) ("A system of signs obtains its meaning from outside of itself")?
While travelling from Los Angeles to San Diego via Chicago can be more entertaining and fun, some people might think it can also be a tad less efficient, as well as somewhat more time consuming.
If your implicit assertion is correct--that a system of signs does not obtain its meaning from outside of itself--then it follows that there cannot be any such thing as an idiom, i.e., an expression whose meaning cannot be derived from the individual meanings of its constituent elements.
Touchstone, let me then slowly check for myself...
Number 1 seems to be the premiss that you were fighting against... But it seems to me it is correct that a set of symbols and relations by itself has no meaning whatsoever without something to give it meaning.
Number 2 seems to be your conclusion, and it is Rank's premiss... I think you made it very clear that all we have is just signals.
Number 3 is also your point of view, and it seems that your system is all about number 3.
Number 4 is also your point of view... You don't like or agree with any sort of dualism, so you're stuck with this one.
Number 5 seems to be common sense... Any concept you have is INSIDE the system.
Number 6 also follows from your view.
Number 7 refers to when you said... meaning is on the outside.... You said it... But you're going to have to accept that certain things are outside the system, which you can't... You are stuck with seven.
Number 8, the new problem.... It is just stating your idea that meaning is outside, but since you say it is inside, then meaning is in the signs.... So the premiss you don't want is NUMBER 1, not this one....
@Anonymous
RS, are you a philosophy professor? A grad student in philosophy? It's been over a year since I've visited this blog and back then you were nowhere to be seen.
On the chance that by "RS" you actually meant "TS" ('r' is right next to 't' on my keyboard): I'm absolutely, perfectly uncredentialed in philosophy. I work in software and technology development, and for a good part of my career on projects that supported scientific research. In debates elsewhere, people I've been talking with for a long time occasionally cite Dr. Feser in support of their (usually more peculiar) ideas, so those references pointed me over here at some point. Last year (or was it the year before?), I spent some time engaging Randian Objectivists on a couple of blogs, as an offshoot of other discussions. That was interesting for a bit, and some good discussion was had -- very similar to Thomists, from my point of view, in terms of fetishistic impulses on metaphysics.
There are some strong, articulate thinkers here, and unlike the Objectivists' blogs, where the blog owners lead the way, some of the combox posters exceed the level of care and thoughtfulness of Dr. Feser's posts, and are certainly more conversant with competing ideas and frameworks.
Which is just to say I'm a complete nobody in the hierarchy of the philosophy profession, just a tech nerd who sees an opportunity for interesting criticism and discussion of a worldview/metaphysics different from my own.
-TS
RS means Rank Sophist.
Oh, you think this song is about you, don't you!!!! DON'T YOU!!!
Oh, by the way, yeah, you are right: EIGHT is not a conclusion, as you have stated... the premiss *YOUR WORDS* was not in the chain, so you interpreted it correctly. But I still think... ONE is the one you want!!!
Well, à la Potty, I think I will just watch from now on.
Touch... your last posts got much better; you talked like someone who actually cares about the discussion. I doubt it is an accurate interpretation of the signs in your head, but it is much better than the bombastic form from before...
Well, I will let Rank take care of you... I am afraid you don't even know WHERE the conversation IS or WHAT it is ABOUT.