I’ve seen a fair amount of buzz about an Opinionator column in the August 5 New York Times, “Anything but human,” by Richard Polt. Polt is a professor and chair of philosophy at Xavier University, specializing in Heidegger and in Continental and Greek philosophy.
Polt’s beef is that, as a human, he resents biologists’ notions that we are simply highly evolved animals, have computers in our heads, and that our ethics can be reduced to changes in our neurons produced by evolution. I’ve put Polt’s quotes in bullet points, and comment about them after each one.
- “I have no beef with entomology or evolution [he’s talking about E. O. Wilson’s own Opinionator column here], but I refuse to admit that they teach me much about ethics. Consider the fact that human action ranges to the extremes. People can perform extraordinary acts of altruism, including kindness toward other species — or they can utterly fail to be altruistic, even toward their own children. So whatever tendencies we may have inherited leave ample room for variation; our choices will determine which end of the spectrum we approach.”
Well, almost no biologist thinks that learning how morality evolved, or develops as a cultural phenomenon, tells us what is the right thing to do. I do think that the rudiments of human morality come from our ancestors, for our relatives show some strikingly moral-like behaviors, but clearly morality has a strong cultural overlay, and what is hard-wired can be overridden by social norms. If that weren’t the case, what is considered moral wouldn’t change so quickly. As Steve Pinker shows in The Better Angels of Our Nature, our moral attitudes toward violence, slavery, and the treatment of women, animals, and children have undergone huge changes in the last few centuries. Nevertheless, there are common aspects of human morality, such as the aversion to gratuitous killing or torture, that do have genetic underpinnings.
- “Next they tell me that my brain and the ant’s brain are just wet computers. ‘Evolution equipped us … with a neural computer,’ as Steven Pinker put it in ‘How the Mind Works.’ ‘Human thought and behavior, no matter how subtle and flexible, could be the product of a very complicated program.’ The computer analogy has been attacked by many a philosopher before me, but it has staying power in our culture, and it works in both directions: we talk about computers that ‘know,’ ‘remember,’ and ‘decide,’ and people who ‘get input’ and ‘process information.’ . . . None of these devices can think, because none of them can care; as far as we know there is no program, no matter how complicated, that can make the world matter to a machine. Without a brain or DNA, I couldn’t write an essay, drive my daughter to school, or go to the movies with my wife. But that doesn’t mean that my genes and brain structure can explain why I choose to do these things — why I affirm them as meaningful and valuable.”
True, no current computer can think like a human, or feel pain, or show compassion, but it’s only a matter of time. And if our brains aren’t complex information-processing machines, what are they? Of course we do think in far more complex ways than do other animals, and attribute meaning in ways they can’t, but what does that show except that we have evolved a more complex brain that is adapted to interacting with our group-mates, learning language, and so on?
- “But concepts from information theory, in this restricted sense, have come to influence our notions of ‘information’ in the broader sense, where the word suggests significance and learning. This may be deeply misleading. Why should we assume that thinking and perceiving are essentially information processing? Our communication devices are an important part of our lifeworld, but we can’t understand the whole in terms of the part.
Given that our brains evolved, why shouldn’t we assume that thinking and perceiving are essentially information processing? Yes, it’s a complicated form of processing, but what else could it be? And maybe now we can’t understand the whole in terms of its parts, and maybe aspects of our mentality, like consciousness, are emergent properties, but that doesn’t mean that they can’t ultimately, or in principle, be reduced to the interaction of parts. As far as we know, in all sciences emergent properties, like the behavior of gases or fluids, must be consistent with lower-level properties. There are no “top-down” properties that are not consistent with the interaction of parts.
At this point, with his strong anti-reductionist bias, one might ask whether Polt is some kind of dualist: that he thinks there’s some non-physically-based aspect of humans not present in other species. This, of course, is consonant with religious views. But Polt takes care to distance himself from those:
- “By now, naturalist philosophers will suspect that there is something mystical or ‘spooky’ about what I’m proposing. In fact, religion has survived the assaults of reductionism because religions address distinctively human concerns, concerns that ants and computers can’t have: Who am I? What is my place? What is the point of my life? But in order to reject reductionism, we don’t necessarily have to embrace religion or the supernatural. We need to recognize that nature, including human nature, is far richer than what so-called naturalism chooses to admit as natural. Nature includes the panoply of the lifeworld. . . The call to remember the lifeworld is part of the ancient Greek counsel: ‘Know yourself.’ The same scientist who claims that behavior is a function of genes can’t give a genetic explanation of why she chose to become a scientist in the first place. The same philosopher who denies freedom freely chooses to present conference papers defending this view. People forget their own lifeworld every day. It’s only human — all too human.”
I’m not sure, then, in what sense Polt is rejecting reductionism. If his claim is that we must at present study emergent phenomena on their own terms since we don’t fully comprehend their components, then I’m with him 100%. But already things like volition, emotions, and other mental phenomena are beginning to give way to reductionist analysis, and there’s every reason to suspect this trend will continue. In the meantime, yes, we can act as though we have free will, we can appreciate Mozart or Dylan without understanding what about our brains, evolution, genes, or upbringing leads to “musicophilia,” and we can revel in our humanity, which I guess is what Polt means by “the lifeworld.” But in the end, it still comes down to molecules, genes, and our environment. (I can’t give a genetic explanation of why I’m a scientist, but I can give a pretty good one based on my environment. And what does Polt mean by “freely chooses to present conference papers”? He may appear to freely choose things, but that could be an illusion perfectly consistent with determinism.)
In the end, this denigration of reductionism is a denigration of science, and though Polt argues that we needn’t embrace woo, his antiscientific views do buy into that kind of antimaterialist and antinaturalistic thought. Of course we have human needs that have been addressed by religion, but I’m convinced that a properly constructed secular society can also meet those needs, and we don’t have to reject the principle of reductionism to do that.
But most of all I wonder why Polt wrote this column in the first place, unless it’s yet another defense of philosophy’s turf against the incursion of science.
p.s. Polt published a further explanation of his anti-reductionism in an August 16 Opinionator column, and I may take that up later.
h/t: Miss May
Y’know, a lot of philosophers’ objections to the bloody obvious seem to come down to argument from personal incredulity.
Exactly my first thought. He is ignorant about the sciences, especially current scientific knowledge. He has no real idea what he is arguing against, but he knows he doesn’t like it because he finds it undignified. And he thinks he can derive serious arguments against it starting from prescientific classical philosophical concepts and rationalizing his way to a more dignified answer.
Yes. Many of the “big questions” of philosophy are “big questions” of human cognitive psychology. The “hard problem of consciousness” (qualia therefore magic) is philosophers refusing to be convinced that personal experience is not infallible, that sort of thing.
Objectivity just can’t exist without subjectivity.
It did for 13 billion years.
Who is “it”? Objectivity Itself?
Yes. Consciousness (and hence, subjectivity) came late in the history of the universe.
“The ‘hard problem of consciousness’ (qualia therefore magic) is philosophers refusing to be convinced that personal experience is not infallible, that sort of thing.”
This comment is typical of the kind of anti-intellectualism that parades around this website too often unfortunately. If someone is a philosopher and thinks there’s a problem with how science attempts to explain something, apparently, it must be because of the philosopher’s “ignorance about the science” or “a philosopher refusing to be convinced” by something obvious to everyone else. Please learn about the subjects you are discussing and stop committing ad hominem fallacies like this (focus on the issues please, not the person). As much as it strikes you as clear that science has the answers to these kinds of problems, or will soon, there are various areas of science where the answers are not obvious despite your claims otherwise. Condemning philosophers doesn’t advance the discussion one bit. A clue to the fact that there is more to the issue being discussed here than you seem to understand comes from this recent article by David Barash. Note that he’s not a philosopher, but an atheistic, pro-evolution, scientist, who comments on the issue of consciousness as follows:
the problem of consciousness is “so hard that despite an immense amount of research attention devoted to neurobiology, and despite great advances in our knowledge, I don’t believe we are significantly closer to bridging the gap between that which is physical, anatomical and electro-neurochemical, and what is subjectively experienced by all of us [=Polt’s Lifeworld]…,” and
“the hard problem of consciousness is so hard that I can’t even imagine what kind of empirical findings would satisfactorily solve it.”
http://chronicle.com/blogs/brainstorm/the-hardest-problem-in-science/40845
Barash is in agreement with Polt with respect to the basic issue here. Now, are you going to accuse the scientist Barash of being ignorant of the science too? Clearly this is not an issue of being “ignorant about the science.”
No, the example I picked, I’m pretty solidly sure is balderdash and is philosophers using wordplay to argue for dualism. All the evidence is that consciousness is a neurological phenomenon and not special to humans. You’ve come back with a courtier’s reply at best.
I always become wary when I hear the phrase “all the evidence” (especially when no examples are given). The only evidence is the presence of neural correlates for conscious experiences. It’s not new. In the early 20th century it was called “psychoneural parallelism”. What does it show? Not much more than the fact that without a telescope I can’t see Neptune. The light in my telescope, the metarhodopsin in my retina, the electrochemical signal in the optic nerve… these are all correlates of the qualia or sense data at the end of the chain. “…you… OBSERVE sense-data. You are not a conglomeration of sights and sounds…” The nature of that observer (selector, evaluator, artistic director…) is the Hard Problem. I agree with Couchloc – it’s not just anti-intellectual, it is illogical almost beyond irony, when open-mindedness about an unsolved problem is deemed wrongheaded in the face of an almost religious faith that more-of-the-same will undeniably produce an answer. Isaac Newton described light in terms of mechanics, because electromagnetism was unknown. Unsolved problems may require new science.
With all due respect, you are still not quite understanding the issue I fear. Crick and Koch themselves admit in their research that the notion of consciousness they are attempting to explain does NOT address the hard problem of consciousness at all. Philosophers are quite aware of Crick and Koch’s (and other neuroscientific) views since we do read the science. So what you are referring to is not a counter example to my claim. If you read Crick and Koch carefully you would know this. As one commentator explains:
“Crick and Koch take pains to point out that they are not addressing the “hard problem” of consciousness, of how “the redness of red could arise from the actions of the brain,” but only looking to find the NCC. To some ….this seems like a patent evasion of the central issue about experience.”
http://sciconrev.org/2003/04/empirical-constraints-on-the-concept-of-consciousness/
Inasmuch as science is anti-intellectualism, so be it.
Consciousness can’t be empirically defined, as far as I know (as opposed to being awake, asleep, et cetera), so it is a foregone conclusion to pretend that it exists. It is even personal incredulity to insist that it does, thus far.
Now, the traits that we call mind are something neuroscientists are working on. I don’t see philosophers contribute.
“Now, the traits that we call mind are something neuroscientists are working on. I don’t see philosophers contribute.”
Neuroscientists study nerves (and correlated behaviour) and hope that they are getting closer to mind. Philosophers study mind (among other things) in the hope that they can discover more about it.
Think on, as we say in Yorkshire.
I’d also add that, if Polt’s thinking is symptomatic of the field, a shortcoming of philosophy is this sophomoric tendency to remain empirically uninformed.
Much of Polt’s speculating (e.g., Why should we assume that thinking and perceiving are essentially information processing?) suggests to me that he seems uninterested in checking his ideas against what is already known, for instance in cognitive neuroscience / brain imaging.
Bingo! This is what most irritates me about so many philosophers (Dan Dennett excluded): they assume that anything they don’t know, nobody knows, and make no effort to educate themselves. It invalidates everything they say after they make an assumption that science has shown to be wrong, so what’s the point in reading any further?
Don’t you think that many philosophers might be under the influence of new age ideas? Holism and anti-reductionism are rampant in those circles.
There’s a pretty vicious feedback there, actually, because some philosophers fed into such movements from the back door, so to speak. Heidegger, who was mentioned, is one of them – because he was antiscience and antitechnology. (Of course, that he was a Nazi reactionary objecting to them from the *right* is unfortunately deliberately missed.)
This article about changing attitudes toward religion, spirituality (new age?) and atheism in Switzerland might have some relevance in the context of aversion of reductionism: http://www.swissinfo.ch/eng/swiss_news/Swiss_keep_religion_at_a_distance.html?cid=33178332
There is a trend among Western European academics and other intellectuals to reject traditional religion as well as atheism. I suppose most of you will be familiar with this type of bar talk: ‘No, I do not believe in God. The Bible is just a compilation of old books. Genesis is one of the many myths about creation. But… but there must be SOMETHING… something at the base of our universe. There must be something after death. There must be some MEANING to the world and to our existence…’ These people usually also talk about ‘science not being EVERYTHING’, about reductionism being cold and heartless and ‘missing the point’, ignoring that SOMETHING that simply must be out there somewhere…
I have heard students from the humanities but also from medical faculties talking that way. Maybe they just do not want to commit themselves, and wish to demonstrate their ‘open-mindedness’ in ‘matters spiritual’?
I think there’s a little more to it than that. It’s a mistaken attempt to see ‘science’ as a discipline rather than a method, and to mischaracterise the aims of that ‘discipline’ as ‘providing knowledge’ rather than ‘making useful connections’.
In other words I think Polt and his ilk are trying to disguise science as a kind of philosophy, then point out that it has failed, in order to draw attention away from the more egregious failures of their own version of philosophy.
In fact, there is no unified or uniform thing called “science.” There are just individual studies, data, and research that are characterized as such, largely for rhetorical/journalistic purposes.
It advantages certain people to pretend there is a generalizable activity.
In fact, the so-called scientific method is way too diverse to make generic, except in outside reporting.
Yes, that is fairly muddled. If we take his rejection of dualism seriously, then ultimately he seems to say that the mind is an emergent property that we cannot understand only in terms of its parts, and hardly anybody would disagree, but there is some ambivalence where one might perhaps conclude that he really means it is independent of the parts.
The problem I have is that you seem to often argue the converse: But already things like volition, emotions, and other mental phenomena are beginning to give way to reductionist analysis, and there’s every reason to suspect this trend will continue. “Give way?” “Can be reduced to?”, as you wrote earlier?
I guess you only want to say that dualism is wrong, and hardly anybody would disagree, but there is some ambivalence where one might perhaps conclude that you really mean that the higher order processes should not be conceptualized any more, on the lines of, to put it into terms that you will surely understand and reject, “it does not make sense to speak of natural selection, after all it is all only interaction of particles”.
From a blog post I wrote a couple of years ago:
On a complete side note, I have become a little leery of the “brain as a computer” analogy, not because I think there is something special about conscious brains that no computer could ever have, but rather because I think it can be misleading in terms of how our brains actually function. Speaking loosely, there should be little doubt that the brain (working in conjunction with the rest of its container body, e.g. especially the spinal cord, the endocrine system, etc.) functions as some type of computer… but the more we find out about neurology, the more it seems that the way it functions is soooo very different from our digital computers. Just as one example, human memory is not something that is encoded as a series of bits to be played back later in perfect or near-perfect fidelity… it’s more like a dance of light and shadow, always in motion, always shifting, never concrete.
The “brain as computer” analogy is apt in many contexts, but it’s misleading when we try to think about how our brains actually work, so I’m not crazy about it.
I agree. Arguably the embodied brain is an action response filter. Perceptions of external events and internal states are filtered until an appropriate enough response (which may include emotions or actions) is selected from the available options.
The brain doesn’t work as a computer – what computer would neglect so much data? Someone (I can’t find the link) suggested that a better analogy for the brain is an internet search engine. The ‘top rated’ page of the many possible responses is equivalent to what you ‘decide’ to do. The page rank may even change depending on preceding searches, or on the background ranking of fresh pages.
Hmmm. ‘I Google therefore I am’.
A lot of confusion seems to arise because people are tempted to take the metaphor too literally, or to use the wrong elements of brain and computer for the metaphorical comparison.
In a sense any neuron is a computational element, in that it fires in response to some inputs. But it is so unlike a common electronic logic element that the similarity is a vague one. In the extended sense the brain is a complex computational system. That the complexity is so different from that of a computer isn’t the important part of the metaphor. That the computational ‘algorithms’ are far from those in a computer does matter. That the computational nature of the brain was not programmed, as if by some programmer, is not important.
The purpose of the metaphor that I see as being most useful is to emphasise the lack of magic that history has imposed on the human mind; and the computer is the nearest thing we have available as a metaphor.
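The “neuron as computational element” idea in the comment above can be made concrete with the classic McCulloch–Pitts threshold unit. This is only an illustrative toy, not a claim about how real neurons work; the function name, weights, and threshold here are arbitrary choices for the sketch:

```python
def fires(inputs, weights, threshold):
    """A McCulloch-Pitts-style neuron: output 1 ('fire') when the
    weighted sum of the inputs reaches the threshold, else output 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights of 0.6 each and a threshold of 1.0, this unit happens
# to compute logical AND over two binary inputs:
print(fires([1, 1], [0.6, 0.6], 1.0))  # 1 (both inputs active)
print(fires([1, 0], [0.6, 0.6], 1.0))  # 0 (only one input active)
```

The point of the toy is exactly the one made above: the unit is computational in the sense of mapping inputs to outputs by a rule, even though nothing about it resembles a silicon logic gate in its physical details.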
Reasoning by analogy gets one into an inextricable tangled web. Relevance in/to context is what counts in evolution. Analogy is, however, a seductive siren that I can’t resist either. The difference is that analogy is a starting point rather than a process, and certainly not an end point. Hypotheses may hold up—even to the point of becoming “law” (a dangerous concept, but one that I admit is a useful one). But even then, what we believe to be true must continue to stand up under continuous assault of repeated experiment. We use them as long as they hold up, then replace them when a better way of expressing them comes along. We are beings trying to understand the cosmos while our context is burning, as it were, from under our feet. We suffer greatly from a kind of überhubris.
The Selfish Gene?
I think I’ll write a book called “The Egocentric Gene.”
Your complaint seems to be rooted in the modern tendency to presume “computer” equals the familiar “digital computer”. Turing played with a number of variants; while they were often more efficient (getting the problem solved faster), they were not more powerful (getting no additional problems done). You can design “computers” that work rather more like the human brain, but that doesn’t add any philosophical power.
It’s exactly rooted in that, yes. “The brain is a computer” is a literally true statement, you’ll get no disagreement with me there — but when people hear “computer” they think “this Dell laptop sitting in front of me” (or whatever), and those two types of computers are really not much alike — not just in complexity, but in some more fundamental ways as well.
The metaphor is not wrong, I’ve just grown uncomfortable with it because I think a lot of the ideas it evokes are inaccurate.
Is Richard Polt arguing a “Lifeworld of the Gaps” position (i.e., that we don’t currently understand everything via reductionist methods, so the Lifeworld is the answer)? It seems that way.
Sure seems that way. I’d really like to hear what a “Lifeworld” is. I am guessing that it is one of those words that can be used to help express concepts, if everyone in the conversation has the same understanding of the term, but of no use whatsoever when it comes to figuring out how any aspect of reality actually functions.
Loosely – one’s own consciousness and social environment. (Disclosure: I never did have much patience for Husserl or the other German “H” philosophers.)
Yep, creating a new(ish?) term rang my alarm bell too. It means that rather than having a clear, worked-out alternative to reductionism or dualism, he just doesn’t like them and wishes he had something else.
As someone else mentioned, if he’s just talking about emergent properties and scale-dependent phenomena, those fit within a mature, robust understanding of reductionism. You don’t need any qualitatively new or different philosophy to deal with those things.
Proponents of intelligent design love to blindly deny evolution while refusing to admit to the alternative explanation that they know is on the mind of their audience. Polt seems to be using this same tack in denying the materialist explanation for behavior. I get a feeling that god is hiding somewhere inside this groundless assertion.
You seem to be identifying reductionism with the very ability to explain things. I don’t see how an explanation for your becoming a scientist that includes multiple sources and influences interacting in complex ways qualifies as reductionist.
Reductionism is just understanding things in terms of their components and parts. Describing how one became a scientist by listing the various influences is exactly reductionist.
A non-reductionist would say that Jerry’s decision could not be understood by looking at the various influences on him.
If reductionism is “understanding things in terms of their components and parts”, it’s not clear how it follows that “looking at the various influences on him” is reductionism.
Maybe you meant to say “looking at the various influences on Jerry’s components and parts”?
It’s depressing to read an article by a university professor that is nothing more than argument from ignorance and assertion. And if it’s not the brain (the meat computer) that makes us act the way we do, then what is it?
The brain has no resemblance to any kind of computer ever conceived by us. Calling the brain a computer, be it of meat or anything else, goes beyond oversimplification and into plain falsehood territory.
On the contrary, there are at least parts of it which seem quite clearly to be computational. That these are not made of silicon or do not have keyboards is not the point; rather that their laws of action are also describable in terms of calculation, input and output is what matters. In my view, the (dated) classic _Vision_ by David Marr is probably the best to read on this. What is that system doing if not calculating gradients, etc.? (It may do other things, noncomputational.)
(NB: I do not claim that all of the nervous system is such. I do not claim the contrary either.)
That we can sum and subtract and computers can too means nothing. A wheel and a leg can both go from here to there, and they aren’t analogous in any way.
no current computer can think like a human
Thank goodness for that! Let’s face it, human computers are sloppy and prone to a multitude of errors. Most readers of this web site would agree that a majority of humans mistakenly think their invisible friend in the sky is real. I expect better from electronic computers.
Computers don’t make mistakes, programs do.
Turing’s analysis of the halting problem demonstrates why reductionism fails in dealing with a computational process. It shows that, even when such a process is fully deterministic, it can be impossible to predict its outcome even if we have access to the program and know all the inputs.
So even if we regard the human brain as a computer and mind as essentially the programs that run on it we are still limited to an existential approach to gain an understanding of what is happening.
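A standard illustration of that “existential approach” (run it and see) is the Collatz procedure: the rule is completely deterministic and trivially simple, yet whether it terminates for every starting value is an open problem, and for any particular input the only known way to learn the answer is to simulate the process step by step. A minimal sketch, with an arbitrary illustrative starting value:

```python
def collatz_steps(n):
    """Iterate the deterministic Collatz rule (halve if even,
    3n+1 if odd) until reaching 1, counting the steps taken.
    Whether this loop terminates for every positive integer
    is an open mathematical question."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# No shortcut is known: to find the step count, you must run it.
print(collatz_steps(27))  # 111 steps, despite the tiny starting value
```

This doesn’t prove the commenter’s broader claim about minds, of course, but it does show concretely how full knowledge of a program and its input can still leave simulation as the only available method of prediction.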
Turing’s analysis is reductionist, so on the contrary such analysis succeeds in making a testable prediction.
What you mean, since a computational device is a physical device, is that physics (and so science) has limitations.
But we already knew that, the observable universe is finite. Whoop-de-doo …
“denigration of reductionism is denigration of science…”
That’s not right. Stating that reductionism has limits is not denigration. Actually it’s pretty well axiomatic – otherwise infinite regress (objects to molecules to atoms to subatomic particles to strings to branes to… to…). Polt thinks some human quality is basic – call it consciousness for the sake of a name. Well, some things have to be fundamental. That doesn’t denigrate science any more than quantum theory denigrates Newtonian mechanics.
The denigration comes in when you pretend or claim that the limitations of physics (science) are different from the limitations of physics (nature).
That is also so much bulls*t.
No such difference claimed. There must be fundamental entities and laws. Reductionism is an attempt to find them, not a belief that they don’t exist.
[However, I have followed WEIT’s link for Polt’s further justification, and it is clear he views consciousness etc. as emergent rather than more deeply fundamental. My own view is that mind may be fundamental in some way – some quantum physicists believe this is necessary; certainly it has not been disproved and other solutions are still merely speculative (and perhaps philosophically objectionable). “Emergence” is often a circular argument, and in any case scientifically dubious (a theory of emergence makes no testable predictions). Bottom line: my argument re fundamentals is irrelevant to Polt’s position – in fact he thinks reductionism and emergence are the only two possibilities…]
Interesting hypothesis, but I don’t think there is any way that Polt could know this.
If he means right now in the present, that is probably true. But, even if it is currently true, it is not likely to remain true for very much longer. Of course one could always claim that even when we are capable of modeling human cognition and emotions well enough to program a machine to experience the emotion of caring, it could never be the same as a human’s — that the experience of caring that a real human being is capable of is unique for some complex reason.
What if you throw environment into the mix? As any of the scientists you are arguing against would? I guess JC already covered that comment but, jeez, how could Polt write that with a straight face?
I think the real problems are that, 1) Polt feels that human beings are special in a way similar to what religious people feel, 2) Polt doesn’t like computers or maybe science and technology altogether, 3) Polt is indignant at the suggestion that a computer could ever be in any way equivalent to a human being, and 4) Polt does not understand what he is arguing against. Polt is arguing from emotion, not data.
Yes, that statement about computers also caught my attention. He casually throws out the word “care”, which has historically only been used to talk about human-like behavior, and then dismisses a computer’s ability to “think” on the basis that it cannot “care”.
Is there any way that we could theoretically show that a computer does “care” and thus “think” about something? How do we establish if a person cares about something in particular, other than asking them and taking them somewhat on their word? It seems like a sneaky way of inserting a human requirement, and then circularly arguing that everything else is disqualified from “thought” because it is not human. I suspect that even if a computer program could be made to emulate the responses Polt might be looking for to determine if a person cares about something, he would not consider the program to “really care”, mostly on the basis that it is a program and not a human.
Even though he expressly positions himself against dualism, it is hard to understand why naturalism would always be inadequate, regardless of scale, unless it lacked some special component. A secret part of the “lifeworld” I suppose.
“None of these devices (computers) can think, because none of them can care”
That hit me right in the eye as just plain wrong. And complete nonsense. I can (and do) think about many things that I couldn’t care less about, every day, and so does everybody else.
Unless of course Polt has his own very special definition of ‘think’ that somehow implies caring and makes his whole argument circular.
We make judgments. Some of these are decisions about importance. For example, I might decide that it was more important to pay a vet to operate on my dog than to pay a technician to repair my laptop. I might decide that I didn’t care at all about repairing an old radio. I might decide that the Mona Lisa or the Magna Carta was so culturally valuable that I would risk my life – or yours – to defend it. Computers will “decide” if the designer or programmer builds in an algorithm. Purpose, judgment, importance (and unimportance) … they always track back to a mind.
But aren’t your judgements influenced by your previous experience of the World around you? Is your evaluation of the Mona Lisa or the Magna Carta and its relative value versus you, me or a.n.other not influenced by your interactions with them, the books that you have read about them, having been to the Louvre, etc., in other words the algorithms that have been written in your mind?
If you have never seen a pin, I might be able to prick you with it without you protesting or flinching the first time. Highly unlikely a second time.
There’s a world of difference between “influenced by” and “determined by”.
People who risk their lives to preserve a work of art, or starve in a garret (van Gogh style) to produce it, are behaving in an extremely non-Darwinian way. Or think of Archimedes, killed by a Roman soldier because he was engrossed in a mathematical problem rather than intent on self-preservation. Of course you could say that they were examples of defective human beings whose instincts and experiences went wrong. But then you’d be living in a different world from the one in which we admire the Aristotles and van Goghs, or an Einstein who eked out a living as a patent clerk because of an overpowering urge to describe the universe in a new way, or a Beethoven who struggled to compose despite advancing deafness, or many others. Our highest aspirations are those least connected to our Malthusian-Darwinian genesis.
I think that is veering slightly away from my point. I entirely agree that many of the things we think about, we do care about (and it’s probably impossible to be caring about something when we’re not thinking about it). But there are many other things that I notice (in the sense of them swimming through my visual field, for example) and therefore presumably must ‘think’ about at some level, that I really don’t care about. I’ve just noticed that the calendar on my wall is showing last month’s page (obviously that requires enough thought to know what month it is); I don’t care enough to get up and change it. Presumably a computer equipped with a camera and OCR could observe the same thing. This is such a low level of ‘caring’ that it negates the normal meaning of the word, I think. (But do I care?)
As I said already, “unimportance” is also a value judgment. You “don’t care” in the sense that you have made a value judgment. A computer “doesn’t care” in the sense that it can’t make a value judgment.
So you’re saying it is impossible to think about anything without making some value judgement and therefore ‘caring’ (even if it’s a conclusion that we don’t care)? And therefore it is impossible to ‘not care’ about something?
And I disagree that a computer can’t make a value judgement. I believe some AI programs have learning built in, which suggests they can develop some sort of judgement of their own.
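The learning claim can be made concrete with a toy example. The sketch below is not any particular AI system, just a minimal epsilon-greedy learner: it starts indifferent between two options and, through noisy feedback, develops a stable “preference” for the one that pays off better. The reward values and parameters are invented for illustration.

```python
import random

def learn_preference(rewards, trials=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: learns which option pays off best from feedback."""
    rng = random.Random(seed)
    estimates = [0.0] * len(rewards)
    counts = [0] * len(rewards)
    for _ in range(trials):
        if rng.random() < epsilon:
            choice = rng.randrange(len(rewards))  # explore a random option
        else:
            choice = max(range(len(rewards)), key=lambda i: estimates[i])  # exploit
        reward = rewards[choice] + rng.gauss(0, 0.1)  # noisy feedback signal
        counts[choice] += 1
        # incremental running mean of observed rewards for this option
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates

# Option 1 pays more on average; after training the program "prefers" it.
est = learn_preference([0.2, 0.8])
print(est.index(max(est)))  # → 1
```

Whether such a learned ranking counts as a “value judgement of its own” is, of course, exactly what the thread is arguing about.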
But all of this seems to be some sort of ass-backwards manoeuvring by Polt to demonstrate that computers can’t think (by his implied definitions of ‘care’ and ‘think’).
I reserve my right to not care, dammit! 😉
Computers, of course, don’t ever care because they can’t. Humans can and do. Even those things you believe are unimportant (let’s say, the early history of card games) may be of consuming interest to someone else (that would be me). Humans also have a concept of rights, as you demonstrate. I don’t deny that you could programme in an algorithm to deliver human-style responses, but they’d be second hand, i.e. ultimately human, and they’d be at a loss faced with a completely new situation. Humans are never short of a strong opinion.
Interesting that Polt writes “I refuse to admit that they teach me much…” rather than “I don’t agree…” or “I don’t think that…”.
Is he in denial and does that lead to his muddled conclusions?
Muddled conclusions indeed. Polt arrives at these conclusions by very selectively using language, such as “I refuse to admit”, with the preconception that we all agree on what that phrase (among many such phrases he writes) expressly or implicitly means. Such phrasing is all contextually-based, and Polt builds one puzzling notion upon another.
“The brain is not a computer, and the world is not a piece of tape.”
(Edelman, Gerald M. Wider Than the Sky: The Phenomenal Gift of Consciousness. New Haven: Yale University Press, 2004. p. 39)
“[I]n many scientific circles, there remains a widespread belief that the brain is a computer. This belief is mistaken for a number of reasons. First, the computer works by using logic and arithmetic in very short intervals regulated by a clock. As we shall see, the brain does not operate by logical rules. To function, a computer must receive unambiguous input signals. But signals to various sensory receptors of the brain are not so organized; the world (which is not carved beforehand into prescribed categories) is not a piece of coded tape. Second, the brain order that I have briefly described is enormously variable at its finest levels. As neural currents develop, variant individual experiences leave imprints such that no two brains are identical, even those of identical twins. This is so in large measure because, during the development and establishment of neuroanatomy, neurons that fire together wire together. Furthermore, there is no evidence for a computer program consisting of effective procedures that would control a brain’s input, output, and behavior. Artificial intelligence doesn’t work in real brains. There is no logic and no precise clock governing the outputs of our brains no matter how regular they may appear. Last, it should be stressed that we are not born with enough genes to specify the synaptic complexity of higher brains like ours. Of course, the fact that we have human brains and not chimpanzee brains does depend on our gene networks. But these gene networks, like those in the brain themselves, are enormously variable since their various expression patterns depend on environmental context and individual experience.”
(Edelman, Gerald M. Second Nature: Brain Science and Human Knowledge. New Haven: Yale University Press, 2006. pp. 20-1)
That says the brain is not a digital computer. It does, however, outline just how the brain is a stupidly complicated analogue computer. With some digital modes. And whatever else evolution just happened to throw in there in reaction to selection pressure.
And, of course, Turing showed that an analog computer may be more efficient than a digital one, but no more effective. (At least until your analog computer has full real-number precision rather than merely, say, algebraic numbers; a dubious possibility.)
And further, we know the mind is embodied and that many functions are decentralized.
For example, it seems researchers have recently identified that when we learn coordinated muscle responses, our neural system acts as a pattern generator for waves, trying out those that make the intended response.
Even part A doesn’t know how part B works. No central program, just _maybe_ a standardized interface by way of development and evolutionary constraint.
So say good day to dreams of an easily achieved “singularity” AI and let us welcome the kludge that is the result of evolution.
Saying that what is done with “building blocks” is in some sense more than the blocks is true in one way and false in another. Certainly, the pattern and dynamic disappears if the blocks are separated, but the pattern is not a subsisting “thing” separate from the blocks like a “soul” in dualistic thinking. The pattern is what Dennett and others call an “emergent” phenomenon. It’s rather like the Buddhist philosophy of “no-self” according to which the self is composed of its aggregates, just as the chariot is made of a wheel, axles, and platform.
People are resistant to reductionism, feeling it violates their personhood and dignity. It need not.
A wall is more than a collection of bricks, true. But if you don’t consider the properties of bricks to be important, you might find that a wall made up of gingerbread ‘bricks’ doesn’t last that long…
The wall is a bad example. The pattern is actually designed in by the conscious builder. Dennett also gives the example of silveriness, which he says is an emergent property of a mass of metal atoms. But, of course, the silveriness only exists as a perception of a conscious mind. There are examples of emergent properties that avoid this trap, but they don’t shed any light on consciousness (e.g. the difference between the two-body problem and the three-body problem in orbital mechanics – but, even then, it’s only a “problem” for a conscious mind or for the computer he/she uses).
When Polt said that the mind cannot be explained by reductionism, but that he wasn’t a dualist, I was anxiously awaiting his ingenious alternative explanation.
“Nature includes the panoply of the lifeworld. . .”
What the ??!! That’s it? That’s the explanation?
I want my money back. Seriously.
Inevitably, there comes a point in these arguments where a word such as “lifeworld” is introduced, a kind of barrier to understanding. You as the reader are on one side of the “lifeworld” barrier, and the author, with much fuller knowledge of the definition and implications of “lifeworld” stands on the more-advanced side.
You will never catch up with the author.
“Lifeworld” (from German “Lebenswelt”) is a concept used by phenomenologist philosophers and sociologists such as Edmund Husserl and Alfred Schütz.
See e.g.: http://plato.stanford.edu/entries/husserl/#EmpIntLif
“Life-world. The universally structured realm of beliefs, assumptions, feelings, values, and cultural practices that constitute meaning in everyday life. In criticism of the classical theory of knowledge (Descartes to Kant), the concept of the life-world is first introduced as the insurmountable basis for scientific experience. Scientific theories are seen as ‘idealized constructions’ (Husserl), dependent on immediate sense-perception which itself, however, is part of the human everyday world that is taken for granted. Accordingly, the life-world as such is understood as the unproblematic and pre-scientific presupposition of any understanding and meaning, providing an implicit background of once explicitly held or intended and now ‘sedimented’ beliefs, assumptions, and practices. Whereas the life-world has first been conceptualized as the world of the subject (Husserl, Schütz), more recently its genuinely social character has been emphasized (Gadamer, Habermas).”
(“Life-world.” In The Oxford Companion to Philosophy, edited by Ted Honderich, 2nd ed. Oxford: Oxford University Press, 2005. p. 521)
Wow, that’s a mouthful. This is the kind of stuff that makes me think that philosophy is not of much use when it comes to actually figuring out how reality works. It often seems no different than theology. Just what the heck is “The universally structured realm of beliefs, . . . “ supposed to mean anyway? It sure sounds grand but does that phrase describe anything real? Does it add anything, besides grandness, or perhaps obscurity, to the concept that whoever came up with that is trying to describe?
In other words, the world of memes.
So Polt is really saying a human is not just a product of his genes, but of his memes as well. Wow, a ground-breaking idea /sarcasm.
I cannot resist relating this. Pardon for apparently getting off the subject (but maybe not):
When forcibly called in to meet the phenomenologist mafia boss, what does he say to you?
“Let me make you an offer you can’t understand.”
(This is not an original joke.)
Written in bad French or German, in many cases.
I’m surprised people aren’t more annoyed by these things actually. I keep hearing that the brain was “designed” and is “wired”.
If that’s not Intelligent Design language, I don’t know what is. And these are scientists using these phrases!
Also to understand his point, simply note how we describe the behavior of animals. We don’t wax nostalgic about how they are “wired” to be kind. We merely observe their behavior and biology.
Then we objectively speak about how their body systems (brains, neurons, etc) work. For humans, it’s always much more fluffy language.
I didn’t understand that to be his point at all. He is not complaining that scientists are not sufficiently materialistic when it comes to human cognition. He is complaining that they are not fluffy enough. He seems to think that either humans are mystically special in some way, or that human cognition is so complex that science can’t possibly understand it. Or perhaps both, since they kind of go together.
I see what you mean, I suppose only the author can answer that. He did in fact complain about the computer analogy; and in all that, I suppose it depends on how we define materialistic.
Is Biology materialistic? I’d argue that computers are more so, and that biology is not fluffy. But rather, squishy.
And there’s nothing wrong with that. We are composed of living matter. The closest we come to being electronic is within electron activity. Aside from that…..we’re made of organs.
Organs behave in a certain way. Yes, even the brain. I’d argue that saying we’re computers is more mystical and fluffy than admitting we’re just another animal.
Perhaps understanding human cognition is not really possible, because once again, understanding the brain is quite beyond any type of “Ah ha! So, I see!!” -type of explanation.
This comes down to humans’ inability to grasp large numbers. 80 trillion synapse connections. What’s that “like”? Heck, what is one trillion of anything “like”?
But it’s okay that we humans have that inability. We humans will never be able to reduce “one trillion” to a meaningful comparison to anything. Does one trillion synapses equate to “care”, as Polt might imply? He claims that a computer cannot “care”, but he cannot show that a neural network with a trillion connections, or eighty trillion, does not generate care.
In my opinion, large numbers such as 1000 billion (a trillion) are impossible to mentally identify, hold, and carry in any philosophical argument. Look at the definition of “one billion” by author Lawrence G. McDonald:
“If $1 million in $100 bills is two feet high, it would take three Washington Monument-sized stacks (~600 ft) of money to make $1 billion. To make $13.4 trillion…if you converted them all to $1 bills, the stack would reach the moon, 230,000 miles away, with millions of dollars leftover.”
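As a back-of-envelope check, one can take the quote’s own figure of two feet per $1 million in $100 bills at face value and do the arithmetic (the 555 ft used below is the actual Washington Monument height; the quote rounds to ~600 ft):

```python
FEET_PER_MILLION = 2.0  # the quote's own figure: $1M in $100 bills is 2 ft high
MONUMENT_FT = 555       # actual Washington Monument height in feet

billion_ft = 1_000 * FEET_PER_MILLION                  # $1B is a thousand $1M stacks
trillion_miles = 1_000_000 * FEET_PER_MILLION / 5280   # $1T stack, in miles

print(billion_ft)                          # → 2000.0
print(round(billion_ft / MONUMENT_FT, 1))  # → 3.6  (monument heights per $1B)
print(round(trillion_miles, 1))            # → 378.8
```

So on the quote’s own figures, a billion is roughly three-and-a-half monument stacks, and a single trillion in $100 bills already stretches hundreds of miles; which rather supports the point that these magnitudes resist intuition.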
I agree with everything you wrote about large numbers and our inability to grasp their magnitude at a basic level. However, I disagree that because the number of connections in our brain is of such a magnitude that therefore we are not capable of understanding human cognition. There are numerous phenomena that involve numbers of such magnitude, that human minds can not grasp, and which we understand well enough to make accurate predictions and develop technologies based on them.
Then again, I guess it all comes down to just what definition of “understand” we are talking about. But, more generally, we have tools that enable us to work usefully with very large numbers. And, as you demonstrated in your comment, we can devise comparisons that do allow us to get some idea of the magnitude of very large numbers. Up to a point at least.
All animal cognition is pretty much the same, of course.
Human cognition descended from other primates and mammals. It is complex but infinitely so. New technologies will, likely soon, make it understandable and quite mundane, also likely.
but infinitely but not infinitely (?)
Wow, let me try that again. Did you mean to say “but not infinitely” instead of “but infinitely?”
Where is Lawrence Krauss when we need him?
Reblogged this on emmageraln.
Poppycock. Caring is just neural triggering that makes you value one thing over another. Computers certainly can be programmed with algorithms that allow them to rack and stack priorities. Computers don’t show an emotional response, true, but they absolutely can (and do) “care,” if by that you mean they prioritize some goals and choices over others.
As a simple example, most computers “care” about implementing critical updates. If you don’t tell them not to, they will shut down and restart to implement them, even if you are in the middle of using them.
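In that deflationary sense of “caring” as prioritization, the mechanism is unmysterious: any scheduler that ranks goals by weight does it. A toy sketch (the task names and priority numbers are invented for illustration):

```python
import heapq

# (priority, task): lower number = more urgent, popped first.
tasks = [(1, "install critical update"), (3, "index files"), (2, "run backup")]
heapq.heapify(tasks)

# The machine "racks and stacks" its goals and acts on the most urgent first.
order = [heapq.heappop(tasks)[1] for _ in range(3)]
print(order)  # → ['install critical update', 'run backup', 'index files']
```

Whether ranking goals this way deserves the word “care” is exactly the definitional question at issue.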
I agree.
For people like Polt, “words matter”. Not really. Reality matters.
Words matter if you’re paid by the word, as ideologues, magical thinkers, philosophers, etc are, and not by predictive and empirical accuracy.
If empirical reality is what matters — math (data) matters.
The rest is just pandering to people’s feelings of the moment so you can get attention and money = salesmanship.
Academics in humanities are paid by the article, not by the word.
The persistence of an ingrained though diminishing-in-credibility world-picture is not necessarily motivated by salesmanship, especially if a century ago a lot of reputable intellects had that world-picture.
Conspiracy theories about religion (unless it’s Scientology or Mormonism! 🙂 ) and dated viewpoints are often not a good idea.
“Reality matters” – that’s an example of “caring”.
Compare and contrast:
“The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.” (Dawkins)
The phrase “at bottom” is doing a lot of work in the Dawkins quote, yes-no?
There’s a lot of “caring” in our world — but none of it is done by individual things at lower levels of organization.
“The universe we observe” is populated by human beings who care about all sorts of things that have nothing to do with a meaningless, amoral and indifferent physical basis. In fact we only observe the universe by virtue of being human, and through the filter of our caring, moral, purposive humanity.
I have to wonder if some of the objections to reductionism and science come from the overwhelming amount of humility required to do it properly. People talk about being humble, but when it comes time to actually BE humble in the face of facts they turn away. I honestly feel sad for people that won’t accept the universe as it is, it’s incredibly liberating.
Opponents to reductionism, like Richard Polt, often don’t seem to understand what reductionism entails and as a result end up fighting straw men. It is as if they examine a sand castle with a microscope, merely see a lot of grains of sand, and then go on to declare that there is a special property of being a sand castle, one that cannot be detected by putting a sand castle under a microscope. But the only thing reductionism claims is that ultimately a sand castle is a certain spatial arrangement of grains of sand and that there is no additional ‘sand castle property’ involved. Why is this so hard to accept?
The problem goes both ways. Those who argue against reductionism fear that reductionism takes away all emergent properties and ability to have valid higher-level understanding and explanations. Those who defend reductionism on the other hand sometimes go overboard and become too eager to reduce everything and end up denying there are any emergent properties at all, giving justification to the fear held by the former group.
It needn’t be this way. Reductionist explanations are good and useful for knocking down some of the silly higher-level concepts that we have. But not all higher-level concepts are made invalid or false by reductionism. Prime example is consciousness. Consciousness has a reductionist explanation in terms of components and parts that are not themselves conscious. Those suspicious of reductionism would resist the explanation that this is really how consciousness comes about. The overeager reductionists deny that humans have consciousness at all because it’s just interactions of little robots.
You see the same argument happening with respect to free will because of mistrust of reductionism and overeager reductionism.
Well said!
Why? Because assuming is what philosophers and theologians do. It is not what scientists do. Scientists hypothesize and then design experiments to critically test those hypotheses.
Two words that debunk his philosophical position: “Mandelbrot set”.
From a simple recursive rule that any bright school student can comprehend emerges an astoundingly complex object.
Or further: from the simple physical rules which govern the interactions between a handful of elementary particles emerges the complex glory that is the universe.
And again: from a bunch of chemical elements whose properties are well-known and mathematically definable emerges the entire ecosystem within which life itself is embedded.
Each of these has a fully reductionist cause, but the non-linear interactions between these objects leads to behaviour which is unpredictable, chaotic and unexplainable without watching it unfold in the course of time.
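The “simple recursive rule” behind the Mandelbrot set really is just iterating z → z² + c from z = 0: a point c belongs to the set if the iteration stays bounded (it is known that once |z| exceeds 2 it escapes to infinity). A few lines suffice:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z=0; c is in the set if |z| never exceeds 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: c is outside the set
    return True           # stayed bounded for max_iter steps

print(in_mandelbrot(0j))       # → True  (the origin never escapes)
print(in_mandelbrot(1 + 0j))   # → False (0, 1, 2, 5, 26, ... escapes quickly)
print(in_mandelbrot(-1 + 0j))  # → True  (oscillates between 0 and -1)
```

That a membership test this short generates the set’s famously infinite boundary detail is the whole point of the example.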
Reductionism / holism is as irrelevant as the religious arguments of predestination vs. free will.
Well, almost no biologist thinks that learning how morality evolved, or develops as a cultural phenomenon, tells us what is the right thing to do.
Does Sam Harris count as a biologist? He made rather a splash recently claiming that the biological sciences have *already* discovered the right way to live our lives.
Sam Harris is not saying the opposite of what Jerry was saying. They’re talking about different things.
Jerry basically: learning about evolution of morality, doesn’t tell you what’s moral.
Sam basically: neuroscience tells you what’s moral.
“learning about evolution of morality” is not “neuroscience”.
You’d have to niggle your way through more than a few split hairs to successfully pick that nit.
The context of the discussion is whether increasingly accurate models of some empirical phenomenon will ever, even in principle, cough out normative truths. Quibbling about which sub-domain of biology is most likely to do this is irrelevant. Proponents of Scientism sometimes say “neuroscience has proven,” or “game theory has proven,” or (like Rich below) “psychiatry has proven,” or “evolution has proven” that moral facts can be somehow reduced to some domain of biological facts.
Philosophically-minded atheists are not going to stop complaining about this sort of thing until scientifically-minded atheists clean up their own house.
Good example of the dishonest of ideology-philosophy-words based explanations — hyper-personalization: (x says).
The molecules in our brains love ad hominem attacks. If you are fighting for power — “attacking the messenger” is effective, dishonest and degrading of problem-solving but it works — for the attacker.
In fact, individuals don’t matter, unless you want to get paid for what you say. Math(data) is all that matters.
There is no data to support the “words matter” position and increasing evidence debunking it. Simple.
Calling it reductionism, evil, immoral, or attacking individuals are just cheap rhetorical tricks and avoiding the facts.
I’m not sure whether I’m supposed to feel honored or chastened to learn that I am an “example of the dishonest [sic] of ideology-philosophy-words based explanations — hyper-personalization: (x says)”
A virtual pint to anyone who can tell me what any of the rest of this comment means.
Ahhh, best to stick with the booze then.
And get back to your sleeprunning…
The difference between evolution and neuroscience is equivalent to splitting hairs?
It is not a nitpick either to note that Sam Harris really starts with an axiomatic good which is the maximal flourishing of an individual and merely uses neuroscience as a tool to ascertain if a situation passes that moral test.
If you accept his premise, then yes science (neuroscience specifically) can tell us how to live our lives towards that axiomatic good. I’m not sure why he thinks we need a brain scan and why not just ask the person for a verbal report, but presumably the brain scan is more accurate.
On the other hand, no biologist now even suggests we should model our morality after the survival of the fittest rule of evolution.
In the context of a post titled “Is reductionism wrong?” it certainly is a nitpick to say “oh, I don’t believe morality reduces to those kinds of biological facts, but it does reduce to these kinds of biological facts.” Either you think any such a reduction is even a legitimate conceptual possibility or you don’t. (Polt mentions “entomology or evolution”, clearly as examples to point towards this broader family of arguments; I don’t see anyone making hay over the entomology claim.)
And I’m sorry to report, but Harris really does claim the mantle of scientific authority for his value system. The subtitle of his book is “How science can determine human values,” for heaven’s sake. He really does say things in public like “science has proven” this or that moral or metaethical claim. And he is no lone, maverick voice on this score in the atheist community; these views are a commonplace, and it’s a philosophical and moral scandal.
Harris makes no claims that science has solved moral problems generally. From The Moral Landscape:
“While the argument I make in this book is controversial, it rests on a very simple premise: human well-being depends entirely on events in the world and on states of the human brain. Consequently there must be scientific truths to be known about it.
…
I am not suggesting we are guaranteed to resolve every moral controversy through science. Differences of opinion will remain – but opinions will be increasingly constrained by facts. And it is important to realize that our inability to answer a question says nothing about whether the question itself has an answer.”
I’m not sure how clearer he can make it. He makes no claims of success. He provides it as an idea that he thinks is supportable. All the examples he relates where he does claim that a moral distinction can be made are based on the relation of the examples to well-being.
And the sub-title’s use of ‘can’ is not meant to imply ‘has’. He makes it very clear that he sees us being in the very early stages of that process.
There’s a lot of hiding behind terms like ‘normative ethics’, as if there is some normative system to be discovered. But there are so many normative systems that it would suggest that many of them are as bogus as the many theologies that abound. Normative ethics is just so much thinking with your head up your ass, and is only ever any use when put into practice. But putting normative ethics into practice always comes back to opinion, and opinion is produced by brains. Unless you believe in a soul, or at least a dualist mind, there seems no room for normative ethics. The only sense in which you can’t get an ‘ought’ from an ‘is’ is if you define your ‘oughts’ to be mystical and abstract. But why should they be so defined? Because philosophers say so? Particularly philosophers that are ignorant of science?
Our best understanding of morality is that it was invented by humans, as a social development, based to a great extent on biological drives. The link to evolutionary sources may be tenuous, but it’s a damned sight better than the theological and philosophical plucking of it from the air of transcendent mysteries.
Our only way of improving our understanding of stuff beyond the poetic is through science, however limited that might be at any time in science’s history – and we are still part of that history as it unfolds even now. Most objections to the possibility that science might one day provide answers (to many questions, not just that of morality) are based on a particularly parochial view about what science today can do. Look at how far science has come in a few short centuries. Are you so sure that the science of another two hundred years, two thousand years, will not provide answers, or at least provide profound insight into moral questions, that millennia of philosophy and theology have only hinted at?
“Either you think any such a reduction is even a legitimate conceptual possibility or you don’t.”
I do. I understand that you might not if you already define morality to be elevated to some higher level that is beyond the reach of science. I don’t see any reason to accept such a definition.
Staircaseghost is right about this. Your appeal to Harris here is not very persuasive since his views are very controversial. We’re told that “science shows that morality is such and such……” and, yet, there’s not really consensus about this in the scientific literature even. So your claim that the problem here rests with philosophers and their understanding of science is pretty tendentious. There is no agreed-upon view among scientists themselves what science teaches us about these issues, as others have noted.
“Science can determine moral values.” –Sam Harris
“I believe that moral questions are outside of the scientific realm.” — Richard Feynman
“Harris makes no claims that science has solved moral problems generally.”
And I make no claim that he claims that it has. But he does claim that certain specific moral problems have solutions — which I am in agreement with him about far more often than I am in disagreement — and then claims that it is science that has solved them.
Like on page 3 of his book, in literally the paragraph after the one you quoted.
I confess to being baffled by your vaguely antirealist attack on normative ethics in this context, especially given your apparent agreement that science can perform a successful intertheoretic reduction. Did you mean to disagree with Harris?
“I understand that you might not if you already define morality to be elevated to some higher level that is beyond the reach of science.”
I don’t believe morality is “elevated” or at some “higher level”. I am an antirealist. I no more think it’s at a “higher level” than I believe baseball is at a “higher level” than Italian cooking. I simply deny that any amount of improvement in the methods and practices of the one area of human endeavor will somehow render the other irrelevant, on the grounds that they have nothing to do with each other.
No, at best what he’s saying is that science can tell you this or that is good, given that the definition of “good” is maximal flourishing. Like I said he’s just using science as a tool. You still need to define “good” first. Read past the title of his book.
When the question at issue is, “can science determine values?” one really does not need to “read past” the title that says unequivocally that it can. Sam Harris is many things, but an unclear communicator is not one of them.
No one disputes that given any arbitrarily identified empirical property, science can (in principle) tell you how to go about maximizing its instances. What philosophically literate people object to is precisely his totalizing and question begging claim to have identified what that property is. Hell, he even explicitly self-describes his argument structure in terms like “begging the question against” deontology and “ignoring most moral philosophy”. Neither of which are intellectual crimes — unless you’re also hawking a book claiming to have refuted every other view but your own.
I am not going to defend Harris, since I don’t agree with his utilitarianism, or the usefulness of any ethical system. They are too rigid to work as moral guides, except that ethical guides and councils have proved their value in the mesoscale of companies and universities.
What I am going to do is to support R&C when he points out that there is no “clean up” to do here. Individuals are not areas, and the field of science is pretty settled that moral behavior is (a) emergent behavior and (b) not normative for the working out of moral systems.
Your confidence that you can figure out what on earth R&C is trying to say vastly exceeds my own.
As for as your claim that “moral behavior is emergent behavior”, I’m at a loss as to what on earth that means, but even more at a loss as to which party in the discussion is supposed to be disputing it. But I am heartened to hear someone agree, contra Coyne and Harris, that science is a priori useless for generating normative ethical claims.
What is considered moral or not is not absolute. I do not see what is considered good or bad to be gradations on a scale. They are more like Venn diagrams, & we each have our own sets of these.
DV, both you and Staircaseghost have got Harris’s position almost totally wrong. He’s not saying that biology or neuroscience tells us what’s moral. He’s saying, in short, that if we actually bother to define what we mean by “morality” (like we do with every other word), then the best way to determine how to meet the ends prescribed by such a moral system is with empirical facts, logic, and “The Scientific Method”.
Basically, he’s endorsing a form of ethical consequentialism (i.e. defining what he means by “moral”) and saying that we should use the methods of science (i.e. reason, rationality, logic, etc.) to bring about its desired ends.
It’s actually amazing to me that so many secularists find this controversial. They often bring up the whole “can’t get an ought from an is” canard, which only makes sense if one makes an arbitrary rule that defining the word “moral” is not allowed.
As Harris says, the fact that not everyone agrees on a precise definition of “moral” is no more an obstacle to a science of morality than the fact that not everyone agrees on a precise definition of “health” is an obstacle to a science of medicine.
It’s really a pretty simple argument that many otherwise intelligent people seem hellbent on misunderstanding.
I think this denigration of reductionism is not merely a denigration of science, but also of mathematics.
By around 1900 or so, some of the philosophers split off to do some serious playing with mathematics — e.g., Russell and Whitehead with the Principia Mathematica. The other classical philosophers have had trouble following that branch. About the time of Turing, Gödel (and his theorem), and the early work of Chomsky, the classical philosophers seem to have stopped being able to grasp the results. However, the mathematicians kept going into deeper territory.
Now, with the work on information theory, Kolmogorov Complexity, and other mathematical developments, the mathematicians are starting to hand back tools enabling solid answers to some of the REALLY big questions of philosophy — like the problem of induction. And the philosophers aren’t happy about that.
As for devices not “caring”… that’s simply a preference for condition A over condition B, which in turn is merely another form of environmental response. Not hard at all, qualitatively. Quantitatively, getting more complex patterns of caring is harder; but complexity is really easy to generate. Getting complexity that resembles human complexity is harder still… but no harder than recognizing the complexity of another human in the first place.
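The point about “caring” as a preference for one condition over another, expressed through environmental response, can be made concrete with a toy sketch. This is purely illustrative (a thermostat-like loop of my own invention, not anyone’s model of human caring): the agent “prefers” a set point and acts to close the gap between the world and that preference.

```python
# Toy sketch: "caring" as preference for condition A over condition B,
# realized as environmental response. Entirely illustrative.

def thermostat_step(temp, set_point=20.0, gain=0.5):
    """Return a corrective action proportional to the distance
    between the current condition and the preferred one."""
    error = set_point - temp   # how far the world is from the preferred state
    return gain * error        # response: push the environment toward it

# Simulate: the agent repeatedly nudges its environment toward what it "prefers".
temp = 10.0
for _ in range(50):
    temp += thermostat_step(temp)
print(round(temp, 2))  # converges on the preferred condition: 20.0
```

Qualitatively this is trivial, which is the commenter’s point; the hard part is only the quantitative step from a one-variable preference to human-scale complexity.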
On the other hand, there are two movements (one slightly more relevant than the other) within philosophy, at least, which do not denigrate (usually) any of those. These are the exact philosophy movement and the computing and philosophy movement. The first was founded explicitly as a push for the use of exact tools within philosophy, even to help understand its history. The 1973 volume that commemorates its introduction has a discussion of how to understand the Forms (as a matter of historical interest) using second order logic.
However, it is quite clear that the diametric opposite of these movements (especially the former, again) exists amongst the phenomenologists, existentialists, some philosophy of religion etc.
So there is a “philosophy of philosophy” of sorts.
Of course the “science of philosophy” is simple, and even by necessity simplistic, in comparison. Philosophy is untestable by the moving-goalpost definition (“if it is testable, it is science”), so not even wrong, as the saying goes.
I still think there should be a more serious effort by science to understand itself than statistics, measurement theory, and to some extent measure theory provide. A “science of science”, as it were. While science has set up an inherent feedback on quality, it is always a good idea to evaluate and improve tools to the extent possible.
Obviously I don’t think “philosophy of science” is helping, on the contrary perhaps. :-/
I think mathematicians are creating tools that they themselves are not happy about. The failure of proof theory, the acceptance of human error and computer proof, and the creation of Chaitin’s quasi-empirical math constructs show that the dualist platonism that mathematicians hold so near and dear is erroneous.
Nitpick: “Gödel’s theorem” is actually two related theorems – but you probably knew that.
I think “failure of proof theory” slightly overstates the case. It’s more accurately a limit. That not everything can be proven doesn’t mean that nothing can be proven, or that some particular thing can’t be.
I’m not familiar with Chaitin’s “quasi-empirical” frame; however, I suspect I’d disagree. The empirical at most triggers human exploration and awareness of abstractions.
Brain science has taught us, unequivocally, that human behavior problems are all psychiatric problems. Period.
Even mass social behavior problems originate in individual psychiatry matters — the molecules of the mind. Culture is just brain molecules.
These are medical matters. That’s all.
The rest is just ideological, power-driven (protecting one’s paycheck, etc.) salesmanship, and empirically meaningless.
Aside from a medical, psychiatric discussion — there is nothing factual to discuss in terms of understanding and prediction.
“Ethics” and philosophy don’t predict or explain anything independent of psychiatry.
Psychiatry = molecules. Unequivocally…
You seem to have access to the whole, completed reductionist project. We can all pack up our microscopes and retire.
“You(s)” are irrelevant. That’s the evidence. It refutes everything else. There is no evidence to the contrary.
“Reductionism” is just dishonest name-calling and rhetorical trickery — which only works because of molecules in the brain, of course.
Then the whole of science only works (in your view of things) because of molecules in the brain. So I guess Dark Matter is a psychiatric problem. Doesn’t work for me.
Of course, “dark matter” is just a series of images conveyed through the audio and visual systems, stimulating molecules in the brain – psychiatry.
Individual beliefs are irrelevant. The molecules are measurable, predictive and factual.
Individual beliefs are, overwhelmingly, the opposite.
Beliefs are NOT in that psychiatric domain (psychiatry = molecules = predictive certainty)?
You lost me.
IMO Polt’s arguments look like he’s presupposing the specialness of people and then looking for examples of exceptionalness to support his view.
He’s just a theologian who doesn’t believe in God
“He’s just a theologian who doesn’t believe in God” – you have it exactly. He wants there to be meaning – but – sorry folks – there is no inherent meaning in the universe. Why is that so bad?
On the contrary, it is Good. We are not slaves under either absolute purpose or its (irredeemably immoral and utterly scary, according to the Christian texts) maker.
Time to re-read Steven Weinberg’s essay ‘Two Cheers for Reductionism’ from his book ‘Dreams of a Final Theory’.
I don’t remember all the details of the essay, but I do remember agreeing with it.
Might get back when I have re-read it.
Having re-read the Weinberg essay, I think it entirely consistent with Dr Coyne’s view.
Once again we have a philosopher asking –
“Who am I? What is my place? What is the point of my life?”.
All well & good but, sadly, it seems to me that, like the characters created by Douglas Adams, he is either asking a meaningless question or asking the wrong question.
That’s a bit harsh on Magicthights and Broomfundle.
Majikthise and Vroomfondel.
Strictly speaking they do not actually pose the Ultimate Question themselves, but merely deliver it to Deep Thought as representatives of the Amalgamated Union of Philosophers, Sages, Luminaries and other Professional Thinking Persons.
While arguing that the search for ultimate truth is the inalienable prerogative of your working thinkers, Deep Thought convinces them that a great deal of money could be made by philosophers who were willing to exploit the expected media interest in the Question.
Much like today, really.
“I demand that I may or may not be Vroomfondel”.
Sheer genius…
I am not sure I agree with Pinker that cultural changes cannot be a direct product of our biology. Not everybody needs to be genetically predetermined to dislike slavery for a society to put an end to it. Maybe when only, say, 49.9% of the population want nothing to do with slavery, that is not sufficient to impose their (biological) beliefs on everyone else and make it a reality. But maybe 50.1% is that magic number.
This is how drastic cultural changes could be the result of gradual genetic change. It takes time to get there, and the change only becomes visible when the number reaches a specific point.
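The tipping-point idea here can be illustrated with a toy majority-rule model. All the numbers are invented for illustration; the point is only the arithmetic: a smoothly rising share of individuals opposed to a practice produces no collective change at all until it crosses 50%, when the outcome flips abruptly.

```python
# Toy majority-rule model of a cultural tipping point. Opposition rises
# smoothly, by 0.1 percentage points per generation (tracked in integer
# per-mille units to avoid float artifacts), yet the collective outcome
# changes only once, at the 50% threshold.

def policy(opposed_per_mille):
    """Simple majority rule: banned only once opposition exceeds 500/1000."""
    return "banned" if opposed_per_mille > 500 else "allowed"

# Opposition goes 49.5%, 49.6%, ..., 50.4% over ten generations.
outcomes = [policy(495 + gen) for gen in range(10)]
print(outcomes)  # six "allowed", then four "banned": one sharp transition
```

A gradual underlying trend and an abrupt visible change are thus perfectly compatible, which is all the comment claims.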
Sooner or later the helots will revolt!
I’m reminded of the Stanley Milgram Experiments, aptly described as designed scenarios “which measured the willingness of study participants to obey an authority figure who instructed them to perform acts that conflicted with their personal conscience.”
I’m perennially consumed with trying to tease out the root causes of human behaviors: to suss out what portion of our lives is ruled by nature or determined by nurture.
Is it possible that, in the case of ethics, the overriding principle is not that humans are pre-programmed (neurobiologically, physiologically, or in any other genetic sense) to espouse or express ethical ideals of ANY sort, positive or negative, good or bad, right or wrong, such as altruism, intolerance of oppression (such as slavery), or the refusal to torture a fellow human (family or not)? But that, perhaps, we DO harbor a genetic predisposition to follow the instructions and examples set down by those whom we perceive to be leaders, and it is the decisions of those leaders that inform our ethical boundaries?
There seem to be a good many analogues of this type of behavior in nature. A good leader is, in evolutionary terms, a key to survival for the entire group. It may be that we, as individuals, learn to capitulate to the leader’s will, whether we like it or not, because doing so trumps any survival/cultural gain that adhering to our own beliefs might achieve.
Might it be that this sort of ‘leader following’ is the deciding factor in a culture’s expressed or tacit moral “nature”? That if, say, at least 51% of us are likely to follow the leader, then ‘Simon Says’ pretty much decides our inculcated, engendered, and passed-on ethical belief systems and behaviors, not the morality of the acts in and of themselves, even in the case of a majority which shares an ethical stance in opposition to the leader’s?
Just Thinking Out Loud.
I’ve never joined in a discussion of this nature online, until this moment, so i may be completely daft or missing the point and absolutely blind to my own ignorance…
Rose, you are probably wise to expose your belly to this pack:
“Just Thinking Out Loud.
I’ve never joined in a discussion of this nature online, until this moment, so i may be completely daft or missing the point and absolutely blind to my own ignorance…”
But your thinking out loud is real thinking, not breast-beating. It it they who are in good company.
As one of my mentors (the late Raymond Maurice Gilmore) told me once, right up in my face, with firm emphasis, “Wayne, don’t forget, the suspension of judgment is the highest exercise in intellectual discipline.” Add to this principle my first lesson in botany/ecology lab: a planarian in some muck in a Petri dish, a pipette, and a drop of saline solution, which we were to place well away from the planarian, at the edge of the dish, and then observe its behavior. In another dish was a bacterial culture into which we inserted a speck of mold. I have used these lessons for something like forty and over fifty years now, and I have found no reason to abandon them. However, I do find marked inconsistencies with disturbing frequency, yea, at the “highest” levels.
I can’t figure out whether some folks (some prominently mentioned here) are hobbled by the same errors as was Hobbes, suffer from a tendency to exaggerate Rousseau, or just do too much primping.
But just thinking out loud (whimper), I have asserted the following “law” for ecology/evolution in several places now without a peep from the authorities–in fact, I meekly offered it to one of the top people in the field after first establishing contact, but received no further reply (busy, I presume, and I was a bit surprised that this person had taken the trouble to reply to my initial enquiry). “Organisms do what they can, where they can, when they can,” I said.
So here I go again. Throwing my body belly-up, expecting to have it ripped from stem to stern, sternly.
Some of us are easily led, some are not, some are leaders. Some, that is, do reach independent decisions.
Rose, I meant to say that “It is (not It it) they who are in good company.”
As to your statement: “A Good Leader is, in evolutionary terms, a key to survival for the entire group.”
Nature doesn’t give a damn about any individual–or group, for that matter. Or any species. If the context doesn’t match the genetics, it’s bye, bye! If a Homo is more sap than sapient, those genes will be culled, and if it keeps up, culled fatally, terminally. But this does not mean that an entire group/population of individuals lacking the fatal gene will not survive to go on reproducing, and barring some contextual shift, edge the individuals/populations toward a genetic complement that is at least sufficiently adapted to the “new” context.
Culture is another bag of worms, because it “worked” well enough to cause a huge bubble in certain human populations. But the catch is that every unmitigated success, every boom, must needs be followed by a bust–either an adjustment/reduction or an annihilation. It remains to be seen whether or not culture will bring itself into balance, or whether or not the bubble will keep expanding until it pops for the last time.
I saw his two articles recently and thought they were dire. When you cut away the crap, they’re nothing more than “I don’t like it so it can’t be true” arguments, based on really bad science.
Come on guys, don’t hate. It’s actually pure genius: the secret to life, the universe, and everything = LIFEWORLD.
I’m late to this party, but I wanted to point out that there are two senses of “reductionism,” and it’s very easy to equivocate.
The first, “good” kind of reductionism is the kind which the sciences have usually maintained — that there are higher-level processes which emerge from or supervene on lower-level processes.
The second, “bad” kind of reductionism is the type which says that all higher-level processes are explainable in terms of the lower level processes. This latter kind is really a type of misplaced essentialism, and I don’t think most scientists go in for it — it’s a straw man.
For instance, according to this latter sense: if “reductionism is true,” then there should be molecular definitions of “neuron” and “elephant” — there should be necessary and sufficient conditions for affirming membership of some group of molecules among the set of neurons or the set of elephants based upon an analysis of the molecules at the level of the molecules.
The “good” kind is used to chunk off parts of nature in order to point out levels of organization where the arrangement of its constituents can vary considerably without disrupting the higher level. We can use the concept “brain” to point out real things in the world without requiring that everything called “brain” has exactly the same neuronal (or molecular) structure.
The “bad” kind is responsible for ideas like “mentalese,” where it is assumed that brains which share beliefs or entertain the same thoughts will have the same pattern of arrangement and/or firing of neurons (otherwise we can’t call it the “same belief”). This is the kind of reductionism people like Polt seem to have in mind when they criticize it, but again I’m pretty sure it’s a straw man.
An Einstein might say that reductionism makes “things” as simple as possible, but no simpler. He did say that; please correct me if I am wrong in assuming he didn’t connect it to reductionism.
If a principle holds true in all cases and contexts, like the laws of thermodynamics (but are, say, astrophysicists certain that they do?), is that “reductionism?”
Or is reductionism a stated principle that does not hold in all cases and contexts? Is it something like, in practice, debating how many angels can dance on the point of a pin? Is it wishful thinking, wanting something to be true so badly that almost any assertion can get the assertor a Nobel Prize, or at least make him King of the Mountain or the biggest frog in the smallest pond?
Just thinkin’ out loud . . .
Reductionism is an ideal. For example, no one has a clear idea how gravity and the “electro-weak” (plus strong nuclear) force could be explained as products (through broken symmetry) of a single force; but it’s a realistic project and a successful outcome would be a useful and exciting simplification. But don’t hold your breath – there could be two (or more) genuinely fundamental forces.
I think that the size of the systems is confusing how parsimony works, as against actual complexity, with empirical reduction.
I had to look up essentialism. But this revolves around how you define “explainable”.
In most cases, what people mean by an explanation amounts to a predictive theory. And by definition, emergent properties are those that we cannot yet fully predict from other, more fundamental properties. (Where “fundamental” vs. “derived” would be against a renormalizable scale, i.e. a scale of size.)
That doesn’t mean they have to remain so. So I would argue that your second kind is perfectly all right in principle. We don’t know (but may suspect that in some cases it is true) that resources aren’t enough to pass from emergent to fundamental theories.
It also doesn’t mean that other definitions of “explanation” are invalid. For example, since derived processes share properties and mass-energy with ancestral processes, they have an “explanation” as precisely “emergent” on the basis of the ancestral process.
The robustness of either set of processes is not a result of our description. The latter suggestion is probably anthropomorphic, which is rampant in philosophy. (Say, “essentialism”.)
Thanks for the reply. I think your last point is especially welcome.
Another way of stating my point is just to say that when we perceive any high-level phenomenon, this perception is not guaranteed to be systematically formulable in terms of lower levels. Not all concepts we have are reducible, even in principle.
But this is a problem with our concepts, not with reductionism in general. It’s why Wittgensteinian “family resemblances” provide so much more conceptual flexibility than traditional essences.
And what does Polt mean by “freely chooses to present conference papers”?
He may mean that, while the denial of free will appears to be the most consistent inference of evolutionary theory, it is also easily refuted. A person merely has to respond that he cannot help but ignore the proposition because he has no free will.
“Reduction” and “reductionism” are words that have many, many meanings within the philosophical tradition. It is very difficult to figure out in short pieces what someone is on about and why they may deny or affirm them. The most common form of reductionism within a given area is a denial of emergence: that is, the denial that a system has properties that the components do not. Note: this is the sense familiar in bioscientific contexts and has to do with ontology/metaphysics, and nothing to do with predictability whatever – the “unpredictable” notions are often obscurantism or useless.
Anyway, then, denying “reductionism” should really be specific as to which cases. Looking at a few: the computational theory of mind, as it is called, doesn’t require any such thing; in fact, one of its beauties is that it shows systems can have complex properties on a wide variety of “substrates” … Somehow people think of computing as being at the merely “physical” level (as opposed to the biological or the like – not “material”, which I use in a broader sense) – this is simply not so; the theory of computing applies more generally. In fact, I argue that it counts, in part, as a close ally or part of metaphysics, for that reason.
As for the “genetic explanation of why one chooses science” – well, that skips levels, and thus would be a case of inappropriate reductionism. However, *who holds this*? I don’t think *anyone*, even the Pinkers and such of the world, thinks that your entire life history is somehow *unconstrained* by your environment and is merely a function of your genetic makeup. That would be simply insane; so, to be charitable, I have to interpret the proposal as wondering what the current state of the art is on *influence*, and there the matter is messy. It is clear there is some – what is it? What temperaments and personality predispositions, in given environments, lead someone to pursue science? I don’t know, and I don’t think anyone knows much either. But to rule the “psychology of personality of science” out of court is unfortunate (there’s lots of work being done, slowly, on the psychology of science generally). What makes someone actually want to read Heidegger (to turn the tables) I haven’t the foggiest idea either, but that too is an interesting question.
I think the most common form of reductionism is taking something apart to understand it better.
I think the most common form of using the word ‘reductionism’ as an intellectual weapon, to attack those whose scientific understanding is felt to threaten one’s intellectual turf, is to falsely claim that those scientists deny the existence or importance of aggregate properties.
The flip side of this defensive accusation of reductionism is to claim that emergent properties are in some way magical, i.e. irreducible. Of course a glass of water doesn’t have the same properties as a river, but that doesn’t mean magic happens in a river that can’t be explained in principle by analyzing the forces on water and air molecules.
Once one understands that the sun is powered by nuclear fusion of hydrogen into helium, one doesn’t forget that it’s a damn big hot ball of fire in the sky that lights our days, warms the planet and creates the seasons, burns your skin at the beach, casts the colors of sunset and sunrise in the sky, and inspires songs, poems, and stories in the minds of humans. Reduction is a useful analytical tool that enhances our understanding of things. To consider reductionism an enemy of proper understanding is foolish and dishonest. And to imagine that people can look at macroscopic objects and see only the reduction to constituent parts, to the extent that aggregate appearances are denied, is insulting.
True, but one does (sometimes) forget the macroconstraints in addition to the parts. No physical chemist or the like will forget that the liquidity of some water in a glass (say) is also due to the ambient temperature and pressure (though naive philosophers might), but lots of sociology forgets (for example) either the macroscale social facts in which micro ones are embedded, or the converse, or regards them as irrelevant. (There’s a lot of work by Bunge on the merits/demerits of this latter point.)
Commenting upon soil, Walter L. Kubiëna (1970) said “. . . we proceed from the concept that the soil is not just a mixture of various constituents . . .
“Because of the . . . interaction of particular processes, we might compare it with the works of a watch from the point of view of someone coming from outside our world. Such a person, seeing a number of different watches for the first time, could develop a series of ways for investigating them scientifically.
(1) He could put each watch into a mortar, pound it to a fine powder and then determine the chemical composition of the whole. He would find that it is composed of a certain number of metals, each one present in a certain quantity, plus some glass, some jewels and some fine-grade oil. Of course he would not gain any knowledge of the action of driving wheels, checking wheels, springs, screws, pivots, chain links, levers, and so forth, which are essential for the operation of the watch. . . .
(2) He could take each watch apart and perform a kind of mechanical analysis by sorting the different pieces into groups. He could then determine the ratio of the sizes of these groups by a gravimetric method. Both methods would enable the analyst to classify all the watches; however, the results obtained would leave him with many unanswered questions. The quantitative data would not give him any insight into the interlocking of the parts of a watch, nor could he conclude anything about the function of each part, much less the functioning of the whole.
(3) He could leave each watch intact and investigate each part in its place, determine its position, its connection with the other parts, until he had a complete knowledge of the construction of the whole.
(4) He could investigate each watch in a state of motion and observe directly the movement of each part and its individual role in the function of the whole.”
[Note: Those interested may consult Walter L. Kubiëna, Rutgers University Press, 1970, p. 3 for a fuller account of his points in detail.]
Social reductionism argues that all behavior and experiences can be explained simply by the affect of groups on the individual.
Would that be a kind of “effectation?”
I have to love this, and admit (but not freely, apparently) that I do!
This is getting simpler to understand:
“Self-awareness in humans is more complex, diffuse than previously thought
Researchers at the University of Iowa studied the brain of a patient with rare, severe damage to three regions long considered integral to self-awareness in humans (from left to right: the insular cortex, anterior cingulate cortex, and the medial prefrontal cortex). Based on the scans, the UI team believes self-awareness is a product of a diffuse patchwork of pathways in the brain rather than confined to specific areas. Credit: Department of Neurology, University of Iowa
Ancient Greek philosophers considered the ability to “know thyself” as the pinnacle of humanity. Now, thousands of years later, neuroscientists are trying to decipher precisely how the human brain constructs our sense of self.
Self-awareness is defined as being aware of oneself, including one’s traits, feelings, and behaviors. Neuroscientists have believed that three brain regions are critical for self-awareness: the insular cortex, the anterior cingulate cortex, and the medial prefrontal cortex. However, a research team led by the University of Iowa has challenged this theory by showing that self-awareness is more a product of a diffuse patchwork of pathways in the brain – including other regions – rather than confined to specific areas.
The conclusions came from a rare opportunity to study a person with extensive brain damage to the three regions believed critical for self-awareness. The person, a 57-year-old, college-educated man known as “Patient R,” passed all standard tests of self-awareness. He also displayed repeated self-recognition, both when looking in the mirror and when identifying himself in unaltered photographs taken during all periods of his life.
“What this research clearly shows is that self-awareness corresponds to a brain process that cannot be localized to a single region of the brain,” said David Rudrauf, co-corresponding author of the paper, published online Aug. 22 in the journal PLOS ONE. “In all likelihood, self-awareness emerges from much more distributed interactions among networks of brain regions.” The authors believe the brainstem, thalamus, and posteromedial cortices play roles in self-awareness, as has been theorized.
The researchers observed that Patient R’s behaviors and communication often reflected depth and self-insight. First author Carissa Philippi, who earned her doctorate in neuroscience at the UI in 2011, conducted a detailed self-awareness interview with Patient R and said he had a deep capacity for introspection, one of humans’ most evolved features of self-awareness.
“During the interview, I asked him how he would describe himself to somebody,” said Philippi, now a postdoctoral research scholar at the University of Wisconsin-Madison. “He said, ‘I am just a normal person with a bad memory.'”
Patient R also demonstrated self-agency, meaning the ability to perceive that an action is the consequence of one’s own intention. When rating himself on personality measures collected over the course of a year, Patient R showed a stable ability to think about and perceive himself. However, his brain damage also affected his temporal lobes, causing severe amnesia that has disrupted his ability to update new memories into his “autobiographical self.” Beyond this disruption, all other features of R’s self-awareness remained fundamentally intact.
“Most people who meet R for the first time have no idea that anything is wrong with him,” noted Rudrauf, a former assistant professor of neurology at the UI and now a research scientist at the INSERM Laboratory of Functional Imaging in France. “They see a normal-looking middle-aged man who walks, talks, listens, and acts no differently than the average person.”
“According to previous research, this man should be a zombie,” he added. “But as we have shown, he is certainly not one. Once you’ve had the chance to meet him, you immediately recognize that he is self-aware.”
Patient R is a member of the UI’s world-renowned Iowa Neurological Patient Registry, which was established in 1982 and has more than 500 active members with various forms of damage to one or more regions in the brain.
The researchers had begun questioning the insular cortex’s role in self-awareness in a 2009 study that showed that Patient R was able to feel his own heartbeat, a process termed “interoceptive awareness.”
The UI researchers estimate that Patient R has ten percent of tissue remaining in his insula and one percent of tissue remaining in his anterior cingulate cortex. Some had seized upon the presence of tissue to question whether those regions were in fact being used for self-awareness. But neuroimaging results presented in the current study reveal that Patient R’s remaining tissue is highly abnormal and largely disconnected from the rest of the brain.
“Here, we have a patient who is missing all the areas in the brain that are typically thought to be needed for self-awareness yet he remains self-aware,” added co-corresponding author Justin Feinstein, who earned his doctorate at the UI in February. “Clearly, neuroscience is only beginning to understand how the human brain can generate a phenomenon as complex as self-awareness.”
Here are the, pretty simple, facts. The brain is just another organ of the body. Physiology is the domain of medicine.
Ideologies (political, religious, philosophical, etc.) and the humanities have little explanatory value regarding medical topics. Ideologies have nothing useful to say about the kidney or pancreas or brain.
Sadly, these disciplines seem to now be defensively engaging in bigger non-truths and often nonsensical statements and claims when confronted with the facts of brain science. The quantum-effects claims are science fiction.
As is normal, these untruths seem largely driven by desires to make money. That’s normal, but it’s time to be honest and acknowledge that any claims by any kind of ideology about brain-based matters are just hustling for cash, and a scam.
Before we had facts about the brain, maybe ideologies were useful. Maybe not. That’s an empirical question.
Philosophers generally have nothing worth listening to at all about the brain, unless they have been trained in computer science or neuroscience, and sometimes not even then.