Neurologist: “Neuroscience is totally wrong”; touts dualism, spirituality, God

June 28, 2017 • 12:30 pm

There’s a new book out called The Mind of God: Neuroscience, Faith, and a Search for the Soul. It’s by a neurologist, Jay Lombard, who appears to have started out respectably but now is going the Chopra route (i.e., off the rails), as you’ll see below. In fact, the book has been endorsed by Deepakity himself:

“The Mind of God is inspiring, insightful and thought provoking. This book will awaken new connections in our understanding of the exhilarating relationship between reality, reason, and faith.”
–Deepak Chopra, MD, and New York Times Bestseller of How to Know God

The book is doing pretty well, too: it was touted at HuffPo and yesterday was #189 among all books on Amazon US. It’s #1 in Amazon’s “Science and Religion” section, just ahead of neurosurgeon Eben Alexander’s bogus book Proof of Heaven, which has sold very well despite being partly questionable and partly fraudulent. All you have to do to make a million bucks these days is write a book showing that science itself proves God, and that’s apparently what Lombard has done. I must say, the prospect is tempting: though I don’t need the bucks, I’d love to hoax the credulous, God-loving public. But now I can’t, for I’ve said it.

At any rate, you can read how Lombard touts his book by clicking on the screenshot below of his New York Times interview. (Lombard’s photo looks like a hybrid between Steven Pinker and Harrison Ford):

Lombard appears to be a frank dualist, someone who thinks that the mind transcends the material. Although he’s a bit cagey about God in the interview, his statements are still bizarre. Here are a few:

This one espouses a God-of-the-Synaptic-Gap view, in which Lombard’s failure to fully understand the physical basis of mental illness pushes him toward the metaphysical:

Trying to find the biological origins of psychiatric disease is much more difficult than for a stroke, hypertension or A.L.S. But it’s there. And you see that no matter how reduced you get, you’re left with sand going through your hands. That took me to a completely opposite place, which was to ask questions about purpose and meaning; about suffering, and about how patients themselves make sense of their suffering; and about how I make sense of it as a clinician.

Of course “making sense” of something doesn’t mean you’ve said anything true.

This Q&A baffles me:

What’s the most surprising thing you learned while writing it?

How much we are dependent on our own brains for whatever form of faith we experience in our lives. How reliant we are on the physical aspects of our spiritual being.

That’s a classic Deepity: it’s trivial (of course our beliefs depend on our brain, as does any spirituality we espouse), but sounds Deep and Meaningful.

Here’s his most egregious statement, which is basically a falsehood. It’s the second part of his answer to the question just above:

What I learned personally is that neuroscience is totally wrong. Neuroscientists don’t believe that such a thing as the mind exists. They flat-out reject the concept of mind. I find that a very scary, slippery slope. I think psychiatry has lost its mind, both literally and metaphorically. A lot of the book is about that part of our brain that connects us to our deeper, spiritual underpinnings. It’s not our rational brain; it’s our narrative brain.

Where is there a neuroscientist who says that the mind doesn’t exist as a reification of physical processes? The mind is not a glob of stuff that resides in one place in the brain, nor a non-physical thing that’s detachable from the brain; but to say that the mind doesn’t exist even as an abstraction (though it’s not what most people think it is) is like saying that “you” don’t exist. But I believe Lombard is a dualist, as evidenced by the stuff below and by the final bit of his interview:

Persuade someone to read “The Mind of God” in less than 50 words.

I believe we are living in a time of huge existential crisis in our society. I want people to ask themselves, first and foremost, if they have a sense of purpose. If they say yes, but they don’t know what it is, they should read the book.

Ten to one that “purpose” says something about God, and if I’m wrong I’ll eat three freeze-dried mealworms. But wait–there’s more! Here are a few screenshots from the book itself, which you can preview on The Mind of God‘s Amazon page:

p. 12, an espousal of Intelligent Design and a claim of a creator God:

p. 40: Some misunderstanding of evolution and a claim that human compassion comes from the “feeling of God”. I’d be curious to see if Lombard takes the position that there really is a God, which is implied above, or claims that the mere idea of God is sufficient to motivate us, regardless of whether it’s true:

I’m a bit distressed but not surprised that the NYT ran this palaver. It’s woo: a scientific rationale for God (Templeton must love it). But the NYT really needs to look at this crap more closely. It didn’t even review Lawrence Krauss’s new book The Greatest Story Ever Told—So Far, which is a true history of modern physics, but the paper gave publicity to the kind of woo described above. Oh, the humanity!

h/t: Michael

A remarkable study: decoding how human faces are identified in the primate brain

June 11, 2017 • 11:00 am

I really dislike posting about a paper that I don’t fully understand, but I guess I’ll have to, as this one seems pretty important. The best I can do is summarize it briefly and give a link so that those of you above my pay grade can look at the messy details. The paper, with only two authors, Le Chang and Doris Tsao of Caltech, was published in Cell (reference and free link at bottom), and involved functional magnetic resonance imaging (fMRI) and electrodes that probed individual neurons in the brains of two rhesus macaques (Macaca mulatta).  (It’s amazing how far technology has advanced!)

The monkeys were presented with images originally derived from 200 human faces, with data taken on a set of 50 landmarks involving both shape and features like eye color and skin tone. Chang and Tsao then constructed composite faces based on various combinations of these landmarks. After identifying a small patch of the macaque brain as being involved in face recognition, they then probed the firing of individual neurons in this region when macaques saw the faces. Using various statistical analyses incorporating the correlation of each neuron’s firing with the measurements used to construct the facial images, they worked out an algorithm that best translated the firing patterns into the multidimensional coordinates used to construct each face.

Once they did that, they could “reverse engineer” the firing patterns alone into facial images; that is, they could test their algorithm by using measurements of firing alone, together with their model, to predict the appearance of new faces seen by the macaques. The remarkable thing is that they could do this with amazing accuracy, monitoring only 205 neurons!
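The paper’s central claim is that each face cell’s firing is (roughly) a linear projection of the face’s position in a ~50-dimensional “face space,” so decoding reduces to inverting a linear map. Here’s a minimal sketch of that idea in Python; the dimensions mirror the study (50 features, 205 neurons), but the simulated data and the plain least-squares fit are illustrative assumptions of mine, not the authors’ actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, loosely following the study's setup:
# each face is a point in a 50-dimensional "face space" (shape +
# appearance features), and 205 face cells are recorded.
n_faces, n_features, n_neurons = 2000, 50, 205

# Simulate the paper's key finding as ground truth: each neuron
# fires as a (noisy) linear projection of the face-feature vector.
true_axes = rng.normal(size=(n_features, n_neurons))
faces = rng.normal(size=(n_faces, n_features))
firing = faces @ true_axes + 0.1 * rng.normal(size=(n_faces, n_neurons))

# Fit a linear decoder by least squares: firing -> face features.
decoder, *_ = np.linalg.lstsq(firing, faces, rcond=None)

# "Reverse engineer" a new face from its firing pattern alone.
new_face = rng.normal(size=n_features)
new_firing = new_face @ true_axes
reconstructed = new_firing @ decoder

# Relative reconstruction error should be small.
err = np.linalg.norm(reconstructed - new_face) / np.linalg.norm(new_face)
print(f"relative error: {err:.3f}")
```

Because the code is linear and the face space has only ~50 dimensions, a couple of hundred neurons is more than enough to pin down a face, which helps explain why reconstruction from just 205 cells can look so faithful.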

Below are a set of reverse-engineered faces (“predicted face”) derived from neuronal firing alone and then compared to the actual image the macaques saw. You can see the remarkable fidelity. What this means is that, as far as we know, the investigators cracked the code of how a facial image is translated into patterns of neuronal firing in the brain.

What does this mean? Well, it means we’re a lot closer to understanding how the brain translates images into neuronal firing, though of course it tells us little about how that firing is reprocessed by the brain itself into an image, for that’s a matter of “qualia”, or consciousness. But it does unravel the complicated nexus by which a whole group of cells work together to create facial images, which of course are crucial for primates recognizing individuals.

One site reporting on this, ZME Science, notes two practical implications:

The researchers can translate from neuron activity in a brain to a human face; from brain activity to visual information. In addition to cracking the code of a brain in a living animal, this study also discovered how brains recognize faces. Before, it was believed that each face cell codes one specific face. However, now we know that each cell represents one piece of visual information that combines with all of the others to form a face. Perhaps human brains also have their own code that works in a similar way. In addition to crime applications, the research could help machine learning for recognizing faces, such as photo recognition on Facebook.

I’m not quite sure what the “crime applications” are unless they ask a victim to envision a perpetrator and use the neuronal firing (instead of a police artist) to get an image of that perp. But that seems impractical. The measurements of neuronal firing might be translatable into computer code, though this too is above my pay grade. Right now I’m just amazed that this was done, and how accurately the investigators could reverse engineer firing into images using so few neurons. I’ll let others speculate about the practical applications.

h/t: Steve

____________

Chang, L., and D. Y. Tsao. 2017. The Code for Facial Identity in the Primate Brain. Cell 169:1013-1028.e14.

The Lost Mariner: a short video inspired by Oliver Sacks

March 14, 2017 • 12:00 pm

Well, I confess that I’ve taken something from Brainpickings, but only because it was tweeted approvingly by Jennifer Ouellette. “The Lost Mariner” is a 6-minute film centered on a patient described in Oliver Sacks’s popular book The Man Who Mistook His Wife for a Hat. You can find more information about the personnel involved, and all the prizes the film garnered, at the Vimeo site. In 2015, Brainpickings described the subject studied by Sacks:

One of those patients was Jimmie G. — a “charming, intelligent, memoryless” man admitted into New York City’s Home for the Aged with only an unfeeling transfer note stating, “Helpless, demented, confused and disoriented.” Jimmie G. is the subject of the second chapter, titled “The Lost Mariner,” which Dr. Sacks opens with an epigraph from the great Spanish filmmaker Luis Buñuel:

“You have to begin to lose your memory, if only in bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all… Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.”

In the beautiful short film The Lost Mariner, independent animator Tess Martin brings Jimmie G.’s rare memory condition to life using photograph cutouts and live action. The effect is a stunning visual analog to the disorienting see-saw of reality and unreality constantly rocking those bedeviled by memory impairments, exposing the discomfiting yet strangely assuring truth in Buñuel’s words.

Wikipedia describes Jimmie G.’s ailment like this:

  • “The Lost Mariner”, about Jimmie G., who has lost the ability to form new memories due to Korsakoff’s syndrome [JAC: the syndrome is associated with long-term abuse of alcohol, and you forget everything that happens to you within minutes]. He can remember nothing of his life since the end of World War II, including events that happened only a few minutes ago. He believes it is still 1945 (the segment covers his life in the 70s and early 80s), and seems to behave as a normal, intelligent young man aside from his inability to remember most of his past and the events of his day-to-day life. He struggles to find meaning, satisfaction, and happiness in the midst of constantly forgetting what he is doing from one moment to the next.

The entire chapter by Sacks is in the New York Review of Books (free); Jimmie was 49 when admitted to the Home, and is named by Sacks as “Jimmie R.” Read it to remind you of Sacks’s humanity, empathy, and remarkable ability to write.

Click on “vimeo”, and then on the enlarging box at the Vimeo site, to see it on full screen.

The man who visits Jimmie is his brother: the only person he consistently recognizes. He forgets everyone and everything else, including the doctor, within a few minutes. All of us, but especially Sacks, would be curious about what that would be like, and how it would affect one’s life and well-being.

Here’s a bit from Sacks’s chapter:

“What year is this, Mr. R.?” I asked, concealing my perplexity under a casual manner.

“Forty-five, man. What do you mean?” He went on, “We’ve won the war, FDR’s dead, Truman’s at the helm. There are great times ahead.”

“And you, Jimmie, how old would you be?”

Oddly, uncertainly, he hesitated a moment, as if engaged in calculation.

“Why, I guess I’m nineteen, Doc. I’ll be twenty next birthday.”

Looking at the gray-haired man before me, I had an impulse for which I have never forgiven myself—it was, or would have been, the height of cruelty had there been any possibility of Jimmie’s remembering it.

“Here,” I said, and thrust a mirror toward him. “Look in the mirror and tell me what you see. Is that a nineteen-year-old looking out from the mirror?”

He suddenly turned ashen and gripped the sides of the chair. “Jesus Christ,” he whispered. “Christ, what’s going on? What’s happened to me? Is this a nightmare? Am I crazy? Is this a joke?”—and he became frantic, panicky.

“It’s okay, Jim,” I said soothingly. “It’s just a mistake. Nothing to worry about. Hey!” I took him to the window. “Isn’t this a lovely spring day. See the kids there playing baseball?” He regained his color and started to smile, and I stole away, taking the hateful mirror with me.

And here’s a three-minute film on the making of “The Lost Mariner”:

Richard Gunderman at The Conversation: our ability to lie shows that the mind is physically independent of the brain (!)

December 27, 2016 • 9:00 am

UPDATE: As the first comment in the thread (by Coel) shows, I was correct in assuming there’s a religiosity to Gunderman’s argument: he’s a trustee of the Christian Theological Seminary. Further, someone who once knew him emailed me and described him as “ultra religious.”

________________________

The motto of the site The Conversation is “academic rigor, journalistic flair”, and it’s funded by an impressive roster of organizations, including the Bill and Melinda Gates Foundation (see bottom for the list). But there’s neither rigor nor flair in a recent article by Dr. Richard Gunderman, “Why you shouldn’t blame lying on the brain“. (According to his bio at the Radiological Society of North America, Gunderman is “a professor and vice chairman of the department of radiology at Indiana University, with faculty positions in pediatrics, medical education, philosophy, philanthropy, and liberal arts.” He’s also written both technical books on his field and popular books like We Make a Life by What We Give.)

Gunderman’s point, which completely baffles me, is that our ability to lie proves that we can actually override the material processes in our brain, and that “the human mind is not bound by the physical laws that scientists see at work in the brain.” In other words, lying proves that there’s a ghost in the machine—a non-physical aspect to our brains and behavior that gives us a form of dualistic free will.

Gunderman begins by noting that functional MRI (fMRI) scans of the brain have shown that lying can be detected by a decrease of activity of the amygdala, which to Gunderman suggests that “subjects may become desensitized to lying, thereby paving the way for future dishonesty.” I haven’t read the studies, so I have no idea whether that last conclusion has any support.

But Gunderman wants to dispel the notion that because lying can be seen as changes in brain activity, it must therefore be a product of the brain’s neurological and biochemical processes. One section of his piece, called “brain not simply a machine,” argues that because the brain is complicated (with 100 billion neurons and 150 trillion synapses), and because it actually experiences the world through consciousness and emotionality, we can never find a physical basis for those things. Ergo, the brain isn’t a machine! Gunderman:

As Nobel laureate Charles Sherrington, one of the founders of modern neuroscience, famously declared, natural sciences such as physics and chemistry may bring us tantalizingly close to threshold of thought, but it is precisely at this point that they “bid us ‘goodbye.‘” The language of natural science is inadequate to account for human experience, including the experience of telling a lie.

Consider Mozart’s “A Little Serenade” or Rembrandt’s self-portraits. We can describe the former as horsehair rubbing across catgut, and we may account for the latter as nothing more than pigments applied to canvas, but in each case something vital is lost. As any reader of Shakespeare knows, a lie is something far richer than any pattern of brain activation.

This is a misguided argument, because you can’t describe the effects of Mozart as horsehair on catgut, and no scientist claims such a thing. The sounds have an emotional resonance in our brain after they enter it through our ears. That emotional resonance, of which we’re conscious, depends on our genes and environment: the factors that have built our brain. Just because we don’t understand how it all works manifestly does not mean that the brain isn’t a machine. It’s simply a machine whose workings we don’t fully understand.

Gunderman then makes a weird argument about why the brain is “not the mind” (well, the mind is really a product of the brain, just as a knee-jerk reflex is, but never mind). While admitting that we can change how the mind works by physical intervention, he still claims that there is a dualism in our thoughts not explicable by our brains:

A second dangerous misinterpretation that often arises from such reports is the notion that brain and mind are equivalent. To be sure, altering the chemistry and electrical activity of the brain can powerfully affect a person’s sensation, thought, and action – witness the occasionally remarkable effects of psychoactive drugs and electro-convulsive therapy.

But in much of human experience, the causal pathway works in the opposite direction, not from brain to mind, but mind to brain. We need look no further than the human imagination, from which all great works of art, literature and even natural science flow, to appreciate that something far more complex than altered synaptic chemistry is at work in choices about whether to be truthful.

What is the “mind” that is not part of the brain, then? Is it a soul or something not embodied in physical matter? As Mencken said of Thorstein Veblen, “What is the sweating professor trying to say?” Just because art springs from the imagination and perception (and one’s experiences) does NOT mean that all those things are not coded in the brain. In fact, you can efface many aspects of perception and imagination by destroying parts of the brain. Gunderman’s claim that works of art must reflect something more than synaptic chemistry is an unsupported assertion; in fact, the data so far show that he’s wrong. He really needs to specify exactly how he thinks the mind is independent of the brain; I’m puzzled that a doctor (unless he’s religious) can even make such a claim. If the mind is disconnected from the brain, how does imagination get from the mind to the brain? Is there a soul above it all?

But the most bizarre part of Gunderman’s article is that he sees lying as proof that the mind is independent of the brain:

In fact, our capacity to lie is one of the most powerful demonstrations of the fact that the human mind is not bound by the physical laws that scientists see at work in the brain. As Jonathan Swift puts it in “Gulliver’s Travels,” to lie is “to say the thing which is not,” perhaps as profound a testimony as we could wish for free will and the ability of the human mind to transcend physical laws.

In the Genesis creation story, it is after woman and man have tasted the fruit of the tree of the knowledge of good and evil and hidden their nakedness that God declares that “they have become like us.” To be able to lie is in a sense divine, implying a capacity to imagine reality as it is not yet. If used appropriately, this capacity can make the world a better place.

What is going on here, I think, is that Gunderman (whose quotes from scripture imply he’s religious, which may explain his dualism) is espousing a form of “free won’t.” That is, while our positive, truthful statements may reflect the activity of our brain, the fact that we can tell untruths somehow means that the Ghost in the Machine is overriding what we’d normally do. (Benjamin Libet, who did the first experiments showing that decisions can be predicted in the brain before they come to consciousness, didn’t believe in free will per se but did in “free won’t”: the idea that we can decide to change our minds by overriding a conscious decision.)

But this shows nothing of the sort. The decision about whether to be truthful is simply built into our neurons, often as an adaptive mechanism. (By “adaptive,” I mean something that we think will be good for us, not necessarily something that’s evolved—though I think Robert Trivers is right that a certain amount of deceit and self-deception are evolutionary adaptations.) When someone asks, “Do I look too fat in these clothes?”, it’s to your benefit to say “no”. That can be a lie, but why on Earth does it show that the decision about how to answer is independent of the physics of our brain? If you say “yes,” is that just the product of our brain-machine?

Gunderman steps further into this argument—one that any sensible person can see through—at the end of his piece, when he simply makes his flat assertion again:

In reality, of course, lying is not the fault of the brain but the person to whom the brain belongs. When someone tells a lie, he or she is not merely incorrect but deceptive. People who lie are deliberately distorting the truth and misleading someone in hopes of gain, placing their purposes above the understanding and trust of the person to whom they lie.

Of course many of our truthful statements are made in hopes of gain, placing our own purposes above those of others. So how does the fact that we sometimes use deception prove dualism? Gunderman goes on (my emphasis):

Even in the era of functional neuro-imaging, there is no lie detector that can tell with certainty whether subjects are telling the truth. There is no truth serum that can force them to do so. At the core of every utterance is an act of moral discernment that we cannot entirely account for except to say that it reflects the character of the person who does it.

Lying is not a matter of physical law, but of moral injunction. It is less about chemistry than character. It reflects not merely what we regard as expedient in the moment but who we are at our core. Ironically, while it is less momentous to act well than to be good, we are in the end little more than the sum of all the moral compromises we have made or refused to make.

This is why we abhor the deceptive conduct of narcissists, crooks and politicians, and why we esteem so highly the characters of people who manage to tell the truth even when it is especially inconvenient to do so. Such acts are morally blameworthy or exemplary precisely because we recognize them as the products of human choice, not physical necessity.

Why does telling a lie show a nonphysicality of the mind in a way different from telling the truth? This is the main question, and Gunderman doesn’t answer it.

I think he’s straying into religious territory here. For every aspect of our character comes from our brain, whether we’re lying or not. And all data show that that character depends on the brain, for character can be profoundly altered by brain injuries, surgery, experience of the world, or drugs. Dragging in the claim “we are in the end little more than the sum of all the moral compromises we have made or refused to make” suggests a religious theme, one based on moral choice, which to many religionists means dualistic free will. If we don’t choose how we behave, but our brain chooses for us, what does “moral choice” even mean?  Just because most people are dualists, and think that at any time we do have a choice about how we behave (we don’t), doesn’t mean that we can accept conventional wisdom as reality. “Right” and “wrong” acts are to be praised and condemned for the good of society, but we shouldn’t accept the common notion that we could at a given time choose to behave either well or ill.

Gunderman’s argument is so tortured, so unsupported by evidence, that I suspect it’s motivated by religion. That’s just a guess, but anyone who drags in scripture and morality to prove that the mind is disconnected from the brain has to be working on premises that aren’t scientific.


Funding partners for The Conversation, which funded Gunderman’s misguided essay.


h/t: jj

Patricia Churchland on the effects of neurobiology on criminal law

August 25, 2016 • 11:30 am

Scientific American has a new article, “20 big questions about the future of humanity“, in which twenty well-known scientists prognosticate about our collective fate. It’s not clear whether the questions were generated by the scientists themselves or by the magazine, but most of them, and the answers, don’t inspire me much. It’s not that I think the answers are bad; I just think that predictions of this sort—will sex become obsolete? will humans survive the next 500 years? when and where will we find extraterrestrial life?—are shots in the dark, and the answers not that enlightening. After all, the extraterrestrial question is simply a big fat unknown.

But one question and answer, called to my attention by reader John O., intrigued me for obvious reasons. The respondent is the well-known philosopher Patricia Churchland. Here’s the question and her answer, and the bold bit in the answer is my own emphasis.

Will brain science change criminal law?

“In all likelihood, the brain is a causal machine, in the sense that it goes from state to state as a function of antecedent conditions. The implications of this for criminal law are absolutely nil. For one thing, all mammals and birds have circuitry for self-control, which is modified through reinforcement learning (being rewarded for making good choices), especially in a social context. Criminal law is also about public safety and welfare. Even if we could identify circuitry unique to serial child rapists, for example, they could not just be allowed to go free, because they would be apt to repeat. Were we to conclude, regarding, say, Boston priest John Geoghan, who molested some 130 children, ‘It’s not his fault he has that brain, so let him go home,’ the result would undoubtedly be vigilante justice. And when rough justice takes the place of a criminal justice system rooted in years of making fair-minded law, things get very ugly very quickly.”
     —Patricia Churchland, professor of philosophy and neuroscience at the University of California, San Diego

This seems to me both wrongheaded and very superficial, especially when you consider that punishment is part of criminal law. But at least she’s a determinist and a naturalist.  We can argue (not this time!) about what this means for conceptions of free will, but I think it’s almost a given that a philosophy involving determinism (either hard determinism or compatibilism) will have implications for criminal law different from those coming from a philosophy of dualism.  

That’s certainly the case in practice, for the concept of whether someone could have done otherwise, versus whether he was “compelled” by uncontrollable circumstances in a criminal situation, has played a big role in our judicial system. If you’re considered mentally incompetent, for example, or have a brain tumor that makes you aggressive, or don’t “know right from wrong”, your punishment can vary drastically. If you’re considered mentally ill, you may be hospitalized; if you do know “right from wrong” (even if your circumstances allow you to know it but not act on that knowledge) you will be put in a pretty bad prison situation; and if there are extenuating circumstances that may have influenced your behavior (like an abused woman killing her abuser), your sentence may be light—or you may even be set free.

Under determinism, nobody has a choice of how to act; in other words, there are always “extenuating circumstances” in the form of environmental and genetic factors that caused you to transgress. The way the justice system deals with these factors will, of course, differ from person to person; but it’s vitally important to realize that no criminal had a free choice about what he did. (I’m using “he” here since most criminals are male.) And we can’t deny that lots of punishments are based not on deterrence, rehabilitation, or public safety, but on pure retribution: a vile sentiment that presupposes that someone could have done otherwise.

Even Sean Carroll, a compatibilist, realizes the implications of neuroscience on our justice system. As I quoted him the other day from his new book The Big Picture:

To the extent that neuroscience becomes better and better at predicting what we will do without reference to our personal volition, it will be less and less appropriate to treat people as freely acting agents. Predestination will become part of our real world.

Now I’m not sure I agree with Sean that predicting behavior has anything to do with treating people as “freely acting agents,” for we already know that they’re not freely acting agents. Prediction has to do with your strategy for “punishing” the offender (it affects recidivism and public safety); perhaps that’s what Sean means, but it’s not clear.

Further, Churchland goes badly wrong when she thinks that determinism is solely about understanding why someone does something, and then exculpating them when we do. That’s ludicrous. We need to prevent an offender from reoffending if they’re freed, which means rehabilitation; we need to protect the public even if we do understand why someone commits a crime (what if their neurons make them psychopathic?); and we need to deter others by example from committing crimes. (Deterrence is certainly compatible with determinism: seeing someone get punished affects your brain, often making you less likely to transgress.) I have no idea how Churchland draws a connection between understanding the correlates of behavior and letting people go free, and then—vigilante justice! We already know that “criminal law is about public safety and welfare,” and no determinist thinks otherwise. Determinists are not a group of people hell-bent on freeing criminals!

At any rate, the more we learn about brain function, the more we’ll be able to understand those factors that compel people to behave in a certain way when faced with the appearance of choice. And when we know that, we’ll be better able to treat them. But as we learn more about the brain, my hope is that we will become less and less willing to punish people on the assumption that they made the “wrong choice”, will avoid retribution, and will begin to design a system of punishment that not only protects society and deters others but, above all, fixes the problems, both social and neurological, that lead people to break the law.


Has the evolution of consciousness been explained?

July 27, 2016 • 11:00 am

Michael Graziano is a neuroscientist, a professor of psychology at Princeton University, and, on the side, writes novels for both children and adults. His speciality is the neurology and evolutionary basis of consciousness, about which he’s written several pieces at The Atlantic.

His June 6 piece, “A new theory explains how consciousness evolved“, attempts to trace how consciousness (which I take to be the phenomenon of self-awareness and agency) could arise through evolution. This is a very good question, although resolving it will ultimately require understanding the “hard problem” of consciousness—the very fact that we are self-aware and see ourselves as autonomous beings. We’re a long way from understanding that, though Graziano is working on the neuroscience as well as the evolution.

In the meantime, he’s proposed what he calls the “Attention Schema Theory,” or AST, which is a step-by-step tracing of how consciousness might have arisen via evolutionary changes in neuronal wiring. To do this, as Darwin did when trying to understand the stepwise evolution of the eye, you need to posit an adaptive advantage to each step that leads from primitive neuronal stimuli (like the “knee reflex”) to full-fledged consciousness of the human sort.

That, of course, is difficult. And we’re not even sure if the neuronal configurations that produced consciousness were really adaptive for that reason—that is, whether the phenomenon of consciousness was something that gave its early possessors a reproductive advantage over less conscious individuals.  It’s possible that consciousness is simply an epiphenomenon—something that emerges when one’s brain has evolved to a certain level of complexity. If that were the case, we wouldn’t really need to explain the adaptive significance of consciousness itself, but only of the neural network that produced it as a byproduct.

Now I haven’t read Graziano’s scholarly publications about the AST; all I know is how he describes it in the Atlantic piece. But, as I’ve already said, if you’re describing some complex science in a popular article, at least the outline of that science should be comprehensible and make sense. And that’s what I find missing in the Atlantic article. Graziano lucidly describes the steps by which a lineage could become more complex in its sensory system, with each step possibly enhancing reproduction. But when he gets to the issue of consciousness itself—the phenomenon of self-awareness—he jumps the shark, or, rather, dodges the problem.

Here are the steps he sees in the AST, and when each step might have occurred in evolution.

1.) Simple acquisition of information through neurons or other sensory organs. This could have happened very early; after all, bacteria are able to detect gradients of light and chemicals, and they were around 3.5 billion years ago.

2.) “Selective signal enhancement,” the neuronal ability to pay attention to some environmental information at the expense of other information. If your neuronal pathways can compete, with the “winning signals” boosting your survival and reproduction, this kind of enhancement will be favored by selection. This will confer on animals the ability to adjudicate conflicting or competing signals, paying attention to the most important ones. Since arthropods but not simpler invertebrates can do this, Graziano suggests that this ability arose between 600 and 700 million years ago.

3.) A “centralized controller for attention” that could direct one’s “overt attention” among inputs from several different sensory systems (for example, you might want to go after the smell of food rather than toward the darkness, as in that moment it’s better to get food than to hide). This, says Graziano, is controlled by the part of the brain called the tectum, which evolved about 520 million years ago.

The tectum, Graziano adds, works by forming an “internal model” of all the different sensory inputs. As he says,

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.

I’m still not sure what this “internal model” is: the very term flirts with anthropomorphism. If it’s simply a neuronal system that prioritizes signals and feeds environmental information to the brain in an adaptive way, can we call that a “model” of anything? The use of that word, “model,” already implies that some kind of rudimentary consciousness is evolving, though of course such a “model” is perfectly capable of being programmed into a computer that lacks any consciousness.
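
For concreteness, the engineering sense of an “internal model” that the quote invokes, a forward model that predicts the sensory consequences of a motor command and compares the prediction with the actual input, can be sketched without any appeal to consciousness. The function names and the one-dimensional “retina” below are my own illustrative assumptions, not anything from Graziano:

```python
# Minimal sketch of a forward "internal model": predict the retinal
# consequence of an eye movement, then compare prediction with input.
# All names here are illustrative assumptions, not Graziano's own model.

def predict_retinal_shift(eye_move_deg: float) -> float:
    # If the eyes move right by x degrees, the image should
    # shift left across the retina by the same x degrees.
    return -eye_move_deg

def prediction_error(eye_move_deg: float, observed_shift_deg: float) -> float:
    """Mismatch signal a controller could use to correct movement."""
    return observed_shift_deg - predict_retinal_shift(eye_move_deg)

print(prediction_error(5.0, -5.0))  # movement went as planned: 0.0
print(prediction_error(5.0, -3.0))  # something else moved too: 2.0
```

Nothing in this loop is aware of anything; it is just error correction, which is part of why the word “model” carries no commitment to consciousness.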

4.) A mechanism for paying “covert” as well as “overt” attention. Covert attention is what we attend to mentally without orienting our senses toward it; an example is focusing on a specific conversation nearby while ignoring extraneous sounds. Of course the very concept of “paying selective attention” sort of implies that we have some kind of consciousness, for who is doing the “paying”?

The part of the brain that controls covert attention, says Graziano, is the cortex. That evolved with the reptiles, about 300 million years ago.

And here’s where the problem with the article lies, for Graziano subtly, almost undetectably, says that with this innovation we’ve finally achieved consciousness. His argument is a bit tortuous, though. First he gives a thought experiment implying that cortex = consciousness, then undercuts that thought experiment by admitting that it doesn’t really explain consciousness. He then reverses direction again, bringing consciousness back to center stage. It’s all very confusing, at least to me.

Here’s the part where consciousness comes into his piece. Graziano starts with crocodiles, which have a selectively attentive cortex, and describes a Gedankenexperiment that explicitly suggests consciousness:

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

But then Graziano takes it back, for he realizes that selective attention itself could be a property of neuronal networks, and doesn’t imply anything about the self-awareness and sense of “I” and “agency” that we call consciousness. (Note that the statement “I’ve got something intangible inside me” is an explicitly conscious thought.) But in denying the intangibility of consciousness, he simultaneously affirms its presence. Here’s where the rabbit comes out of the hat:

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.

But an “attention schema” isn’t consciousness, not in the way that we think of it. Nevertheless, Graziano blithely assumes that he’s given an adaptive scenario for the evolution of consciousness, an evolution that’s only enhanced because you also have to model the consciousness of others—what Dan Dennett calls “the intentional stance.” Graziano:

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

So here he’s finessed the difficulty of self-awareness by simply asserting that once you have mechanisms for providing both covert and overt attention, you have consciousness. I don’t agree (though of course I’ve read only this article). Why couldn’t a computer do exactly the same things, but without consciousness? In fact, computers already do those things, as in self-driving cars.

Graziano goes on to say that figuring out what other members of your species will do, based on the notion that they have consciousness, is itself a sign of consciousness. And again I don’t agree. A computer can adopt an “intentional stance,” using a program and behavioral cues to direct its own behavior, without consciousness. The “hard problem”—that of self-awareness—hasn’t been solved here; it’s been circumvented, with consciousness assumed without good reason.

Graziano finishes by talking about semantic language, something that’s unique to humans and surely does require consciousness (I think! Maybe I’m wrong!). But that’s irrelevant, for the evolution of consciousness has already been assumed.

I admire Graziano for realizing that if consciousness, which is closely connected with our sense of agency and libertarian “free will”, evolved, there may be an adaptive explanation for it. He doesn’t consider that consciousness may be an epiphenomenon of neural complexity, which is possible.

I myself think consciousness and agency are indeed evolved traits, traits whose neuronal and evolutionary bases may elude our understanding for centuries. I take a purely evolutionary view rather than a neuroscientific view, for I’m not a neuroscientist. And using just evolution, one can think of several reasons why consciousness and agency might have been favored by selection. I won’t reiterate these here, as I discuss them at the end of my “free will” lectures, which you can find on the Internet. And I always say that the problem of agency is unsolved. It still is, as is the problem of consciousness.

Graziano is making progress with the neuroscience, but the AST is still a long way from being a good theory of how consciousness evolved.

Does evolution lead us to perceive reality, or is it all an illusion?

July 21, 2016 • 11:15 am

Donald D. Hoffman is a highly respected Professor of Cognitive Science at the University of California at Irvine. He’s developed a “formal theory of conscious agents” that he describes in a new Atlantic article—or rather in an interview with Amanda Gefter called “The case against reality”. And for the life of me I can’t figure out what the man is trying to say. I haven’t read his more formal academic work, but if his theory is being presented in a public place like the Atlantic, it seems that what he’s saying should be clear. Yet what I read is either unclear or, when it’s clear, seems wrong. You should read the short interview yourself, but here are the points I take from it. It’s all a mess, and seems a bit like a gemisch of quantum woo, evolutionary misunderstandings, and postmodernism. If there’s a substantive and important point in the piece, I’ve been too dense to see it.

I’ve indented bits of the interview below. I address three claims.

1). There is no external reality. Quantum mechanics has proved that. Hoffman seems to think that because quantum mechanics has disproved local realism for some particles (that is, has disproved the claim that a photon or electron, for instance, has a definite nature and is in a definite place, regardless of whether we measure it), it has disproved local realism for everything, including macro objects. What we see isn’t reality, or even an approximation of reality: it is all illusion molded by natural selection.

Experiment after experiment has shown—defying common sense—that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”

. . . Gefter: If snakes aren’t snakes and trains aren’t trains, what are they?

Hoffman: Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.

We’ll get to the “fitness consequences”—the selective pressures—in a minute, but I want to document Hoffman’s view of reality, which Gefter seems to accept:

Hoffman: . . . I have a space X of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.

Gefter: But if there’s a W, are you saying there is an external world?

Hoffman: Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.

Gefter: The world is just other conscious agents?

Hoffman: I call it conscious realism: Objective reality is just conscious agents, just points of view. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.
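
For what it’s worth, the six-element structure Hoffman describes (experiences, actions, a decision algorithm, a world, and perception and action maps) is easy to transcribe. Here is a speculative sketch; the concrete string types and the example dynamics are my own assumptions, not Hoffman’s formalism:

```python
# A speculative transcription of the six-element structure Hoffman describes:
# experiences X, actions G, a decision algorithm D, a world W, a perception
# map P (world -> experiences), and an action map A (actions -> world).
# The concrete string types below are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class ConsciousAgent:
    X: Set[str]              # space of possible experiences
    G: Set[str]              # space of possible actions
    W: Set[str]              # states of the "world"
    D: Callable[[str], str]  # decision: experience -> action
    P: Callable[[str], str]  # perception: world state -> experience
    A: Callable[[str], str]  # action: chosen action -> new world state

def step(agent: ConsciousAgent, world_state: str) -> str:
    """One perceive-decide-act cycle; returns the new world state."""
    return agent.A(agent.D(agent.P(world_state)))
```

Note that nothing here requires consciousness: any thermostat-like control loop has the same perceive-decide-act shape, which is one reason the claim that “this is the structure of consciousness” seems to assume what it needs to prove.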

And one more bit, showing that brains aren’t real, either:

Hoffman: The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true of apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.

. . . Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic. It’s that there’s no brain! Quantum mechanics says that classical objects—including brains—don’t exist.

I don’t know where to begin with this. First of all, just because a subatomic particle doesn’t have an intrinsic property until we measure it, and that property is dependent on how we measure it, doesn’t mean that macro objects don’t have properties or, as Hoffman implies, don’t exist. You can, after all, measure the momentum of a car, or of a stationary chair, with great accuracy. It seems to me that Hoffman is using a form of Chopra-ist woo here: claiming not only that certain claims about quantum mechanics extend all the way up to macro objects (yes, they do, but classical mechanics is adequate for macro phenomena, and that includes existence claims), but also that those objects don’t exist outside of consciousness. In fact, Hoffman claims, like Chopra, that the only real thing that exists is consciousness:

As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.

I find this problematic in several ways. If the brain is an illusion and doesn’t really exist, where does consciousness come from? After all, if we fiddle with the Object Formerly Known as the Brain, and ablate certain parts of it, or give it chemicals, then consciousness goes away. Further, all observers agree on that. Isn’t it strange, if reality is only an illusion constructed by our consciousness, that giving ketamine to a brain removes its consciousness? Why do we all still perceive the same objects then, and agree that the chemical has the same effect? Or, if the Moon is a figment of our consciousness (Chopra has maintained exactly that!), why, when some scientists observe a Rover landing on the Moon, do other scientists perceive exactly the same thing? After all, that concurrence couldn’t reflect anything molded by natural selection—our fitness doesn’t depend on Moon landings. Surely that must say something about an external reality.

But on to evolution:

2). We don’t perceive reality accurately because evolution provides us not with an accurate take on reality, but with a series of illusions that enhance our fitness [reproductive output]. Again, maybe I’m missing something, but if external reality is solely a construct of our consciousness (which itself comes from a nonexistent brain), then why are we even subject to natural selection? That already seems contradictory, but perhaps I’m not understanding Hoffman. What I do understand is his flawed argument for why evolution gives us a take on the world that doesn’t even come close to reality:

Gefter: People often use Darwinian evolution as an argument that our perceptions accurately reflect reality. They say, “Obviously we must be latching onto reality in some way because otherwise we would have been wiped out a long time ago. If I think I’m seeing a palm tree but it’s really a tiger, I’m in trouble.”

Hoffman: Right. The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible. But I think it is utterly false. It misunderstands the fundamental fact about evolution, which is that it’s about fitness functions—mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction. The mathematical physicist Chetan Prakash proved a theorem that I devised that says: According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness. Never.

Now I’ll admit right off the bat that natural selection occasionally confers traits that, in some circumstances, distort reality. We may, for example, have been selected to think that we’re brighter or better than we are, because having that illusion gives us a confidence and power that might enhance our fitness. As Steve Pinker has written:

. . . beliefs have a social as well as an inferential function: they reflect commitments of loyalty and solidarity to one’s coalition. People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true. Religious and ideological beliefs are obvious examples.

. . . publicly expressed beliefs advertise the intellectual virtuosity of the belief-holder, creating an incentive to craft clever and extravagant beliefs rather than just true ones. This explains much of what goes on in academia.

. . . the best liar is the one who believes his own lies. This favors a measure of self-deception about beliefs that concern the self…

And our everyday experience with objects can sometimes mislead us when evolved traits create optical illusions, like the famous “checker shadow illusion.” Further, we know that our sensory system is imperfect and limited by our biological constitution, so that we miss things that other creatures can see, like the ultraviolet patterns perceived by birds and butterflies. Finally, natural selection will foster our awareness of those animals, those environmental factors, and those traits that have the greatest potential effect on our fitness. But that’s not the same thing as saying that our consciousness actually molds the appearance of those organisms and traits.

But I maintain that, in general, natural selection will favor a fairly accurate take on reality (assuming there is a reality), because the more accurately we perceive nature, the higher fitness we will have. Hoffman gives one example where he says we’re selected to have illusions, but I don’t find it particularly convincing:

Gefter: You’ve done computer simulations to show this. Can you give an example?

Hoffman: Suppose in reality there’s a resource, like water, and you can quantify how much of it there is in an objective order—very little water, medium amount of water, a lot of water. Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness—in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality. Generically, in the real world, that will never be the case. Something much more natural is a bell curve—say, too little water you die of thirst, but too much water you drown, and only somewhere in between is good for survival. Now the fitness function doesn’t match the structure in the real world. And that’s enough to send truth to extinction.

I find that baffling. Why wouldn’t the fitness function be one like this: “we want enough water to drink, and to sate our band of hominins, but we don’t want to go jumping in huge ponds of water if we can’t swim.” I think Hoffman has gotten it all backwards: our consciousness doesn’t shape external reality to increase our fitness; rather, our fitness depends on accurately perceiving external reality. There are some exceptions. I’m convinced, for example, that natural selection molds our tastes, which are qualia, to conform to what’s good for us. That’s why we like fats and sweets so much. As I’ve always said, a rotting carcass probably tastes like heaven to a vulture. But in most cases we want to see things as they are, for our fitness depends on that. If we could mold reality through consciousness to match our fitness, we would be able to see all dangerous insects as highly visible: snakes and spiders, for example, would be perceived as bright red or orange. We should be able to evolve our color-vision system to enhance our fitness. But that’s a shade on the teleological side, and we just can’t do that.
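
Hoffman’s water example is at least easy to simulate. Below is a minimal toy sketch under my own assumptions (a bell-shaped payoff and two random water levels per trial), not Hoffman’s actual simulations: one agent perceives the true quantity and always picks “more water,” while the other is tuned only to payoff. By construction the payoff-tuned agent can never do worse on any trial, which is all the quoted “theorem” amounts to in this toy setting:

```python
import math
import random

def payoff(w: float, mu: float = 0.5, sigma: float = 0.15) -> float:
    # Bell-shaped fitness: too little water and you're thirsty, too much
    # and you drown (an assumption matching Hoffman's own example).
    return math.exp(-((w - mu) ** 2) / (2 * sigma ** 2))

rng = random.Random(42)
n = 100_000
truth_total = tuned_total = 0.0
for _ in range(n):
    a, b = rng.random(), rng.random()   # two true water levels
    truth_pick = max(a, b)              # "truth" agent: more water is better
    tuned_pick = max(a, b, key=payoff)  # agent tuned to payoff alone
    truth_total += payoff(truth_pick)
    tuned_total += payoff(tuned_pick)

print(f"truth-seeing agent, mean payoff: {truth_total / n:.3f}")
print(f"payoff-tuned agent, mean payoff: {tuned_total / n:.3f}")
```

Notice, though, that this only shows that ranking options by payoff beats ranking them by raw quantity when payoff isn’t monotonic. An organism whose fitness function is “enough water to drink, but don’t drown” is still tracking the true amount of water; the simulation says nothing about perception being a wholesale illusion.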

Yes, many dangerous animals are cryptic. That proves that we can’t change our perception of reality willy-nilly—that there is an external reality out there (animals often enhance their own fitness by being cryptic, so they win the perception battle!).  We can’t often mold the way we see reality to match our fitness functions.

I won’t go on, as this is already too long, but I wanted to add one more claim by Hoffman.

3). Neuroscience hasn’t progressed because neuroscientists haven’t taken into account the quantum nature of brain function and neural activity. 

Gefter: It doesn’t seem like many people in neuroscience or philosophy of mind are thinking about fundamental physics. Do you think that’s been a stumbling block for those trying to understand consciousness?

Hoffman: I think it has been. Not only are they ignoring the progress in fundamental physics, they are often explicit about it. They’ll say openly that quantum physics is not relevant to the aspects of brain function that are causally involved in consciousness. They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers—spiking rates, connection strengths at synapses, perhaps dynamical properties as well. These are all very classical notions under Newtonian physics, where time is absolute and objects exist absolutely. And then [neuroscientists] are mystified as to why they don’t make progress. They don’t avail themselves of the incredible insights and breakthroughs that physics has made. Those insights are out there for us to use, and yet my field says, “We’ll stick with Newton, thank you. We’ll stay 300 years behind in our physics.”

There is by no means universal agreement that quantum-mechanical phenomena, as opposed to classical mechanics, are important at the level of brains and perception. There are good arguments, in fact, that we can use classical mechanics alone—assuming an external reality—when doing neuroscience. At any rate, I would take issue with the claim that our failure to grasp quantum mechanics has impeded the study of consciousness, or has slowed progress in neuroscience.

If you can figure out a really important point in Hoffman’s article, do point it out below.

h/t: Peter