Charles Murray finds God, loses rationality, gets criticized by Steven Pinker and Michael Shermer

October 27, 2025 • 9:30 am

About two weeks ago I called attention to a new book by Charles Murray, a political scientist at the American Enterprise Institute, famous (or infamous) for his book coauthored with Richard Herrnstein, The Bell Curve (1994).  Murray apparently had long neglected the god-shaped hole in his being, but eventually found God (implying the Christian God), and wrote a book about his conversion to belief, Taking Religion Seriously (click on the book cover below to go to its publisher):

Murray followed with an excerpt in the Free Press called “I thought I didn’t need God. I was wrong.”  As I mentioned in my piece about the FP article, Murray relied heavily on God-of-the-gaps arguments, finally filling his “God-sized hole” (yes, he uses those words) by encountering difficult questions whose answers, he averred, pointed toward the existence of divinity. These questions are familiar: they include “Why is there something instead of nothing?” and what accounts for “the mathematical simplicity of many scientific phenomena—most famously E = mc²”?

Murray finally settled on a Quaker-ish god:

Quaker teachings are also helpful in de-anthropomorphizing God. They emphasize that God is not a being with a location. He is everywhere—not just watching from everywhere but permeating the universe and our world.

But if God is everywhere, the god-shaped hole must be pretty damn big!  Of course of all the gods in all the world’s religions, Murray settled on the one for which there can be no evidence. (As Victor Stenger pointed out, most gods can be investigated empirically.)

Well, so be it. Murray is free to adopt his superstition, so long as he doesn’t bother anybody else with it. Unfortunately, he has: not only with a book, but also with the Free Press excerpt above and now an op-ed in the Wall Street Journal.  Here he adduces another hard question—consciousness—as evidence for a human “soul”, ergo God.

Click below to read it if you subscribe to the WSJ, or find Murray’s misguided piece archived for free here.

Of course Murray is not the first person to use the phenomenon of consciousness as evidence for a “soul”—something he actually never defines. But for evidence beyond consciousness he gloms onto the supposed phenomena of near-death experiences and “terminal lucidity”, defined below.

A few excerpts from Murray’s piece, which he starts by saying he used to be a materialist. And then. . . .

I’ve been back-pedaling. Writing “Human Accomplishment” (2003) forced me to recognize the crucial role transcendent belief had played in Western art, literature and music—and, to my surprise, science. Watching my wife’s spiritual evolution from agnosticism to Christianity, I saw that she was acquiring insights I lacked. I read C.S. Lewis, who raised questions I couldn’t answer. I scrutinized New Testament scholarship and was more impressed by the evidence supporting it than that discrediting it.

I’m curious what that evidence is, since there are no contemporary accounts—and there should be—of Jesus’s miracles, crucifixion, and resurrection. (This, by the way, makes me think that Murray is a secret Christian.) And then he pulls out his “evidence” for God.

Example: A central tenet of materialism is that consciousness exists exclusively in the brain. I first encountered claims to the contrary in the extensive literature on near-death experiences that grew out of Raymond Moody’s “Life After Life” (1975). The evidence now consists of dozens of books, hundreds of technical articles and thousands of cases. I read about Ian Stevenson’s cross-national studies of childhood memories of previous lives. He assembled a database of more than 3,000 cases, and more has been accumulating in the University of Virginia’s Division of Perceptual Studies.

The evidence for both near-death experiences and childhood memories of previous lives is persuasive in terms of the credibility of the sources and verified facts, but much of it is strongly suggestive instead of dispositive. It doesn’t reach the standard of proof Carl Sagan popularized: “Extraordinary claims require extraordinary evidence.” This led me to seek a subset of cases that exclude all conceivable explanations except that consciousness can exist independent of the brain.

But Murray is most impressed by “terminal lucidity”:

Certain near-death experiences approach that level, but the most robust, hardest-to-ignore evidence comes from a phenomenon called terminal lucidity: a sudden, temporary return to self-awareness, memory and lucid communication by a person whose brain is no longer functional usually because of advanced dementia but occasionally because of meningitis, brain tumors, strokes or chronic psychiatric disorders.

Terminal lucidity can last from a few minutes to a few hours. In the most dramatic cases, people who have been unable to communicate or even recognize their spouses or children for years suddenly become alert and exhibit their former personalities, complete with reminiscences and incisive questions. It is almost always followed by complete mental relapse and death within a day or two.

The phenomenon didn’t have a name until 2009, but case studies reach back to detailed clinical descriptions from the 19th century. Hospices, palliative-care centers, and long-term care wards for dementia patients continued to observe the condition during the 20th century but usually treated it as a curious episode that didn’t warrant a write-up. With the advent of social media, reports began to accumulate. We now have a growing technical literature and a large, systematic sample compiled by Austrian psychologist Alexander Batthyány.

Two features of the best-documented cases combine to meet Sagan’s standard: The subjects suffered from medically verified disorders that made their brains incapable of organized mental activity; and multiple observers, including medical personnel, recorded the lucidity.

A strict materialist explanation must posit a so-far-unknown capability of the brain. But the brain has been mapped for years, and a great deal is known about the functions of its regions. Discovering this new feature would be akin to finding a way that blood can circulate when the heart stops pumping.

Given the complexity of the brain, is it surprising that we don’t fully understand what it’s capable of? Murray assumes that we do, and so has abandoned a materialistic explanation of consciousness. He ends by adducing the divine once again:

We are identifying anomalies in the materialist position that must eventually lead to a paradigm shift. Science will have to acknowledge that even though conventional neuroscience explains much about consciousness under ordinary circumstances, something else can come into play under the extreme conditions of imminent death.

The implications are momentous. Astrophysicist Robert Jastrow observed that for a scientist trying to explain creation, the verification of the big-bang theory “ends like a bad dream”: “As he pulls himself over the final rock, he is greeted by a band of theologians who have been sitting there for centuries.” Neuroscientists who have been trying to explain consciousness may have to face their own bad dream: coming to terms with evidence for the human soul.

“MUST eventually lead to a paradigm shift”?  Murray is pretty damn sure that our ignorance of the brain and its capabilities will lead us to a pantheistic God (or a Christian one; it’s not clear)! And what on earth does Murray mean by “a soul”? Is it this undefined “soul” that somehow permeates the brain, making us conscious and sometimes producing terminal lucidity? He doesn’t say, and I don’t feel like reading his book to find out. After all, if he’s advancing an argument for God, the Free Press and Wall Street Journal articles should suffice to summarize Murray’s arguments.

A few of us, including Steve Pinker and Michael Shermer, were discussing Murray’s conversion and his “evidence”.  Steve emailed a short rebuttal of Murray’s thesis, which he allowed me to publish here. It’s a good attack on the “soul of the gaps” argument:

Pinker (I put in one link):

Let’s assume for the moment that the reports of terminal lucidity are factually accurate. At best Murray would be making a “soul of the gaps” argument: There’s something we don’t understand, therefore the soul did it. But when it comes to the brain and its states of awareness, there’s lots we don’t understand. (Why do you wake up in the middle of the night for no reason? Why can’t you fall asleep even when you’re exhausted?)

The brain is an intricate, probabilistic, nonlinear dynamic system with redundancies, positive and negative feedback loops, and multiple states of transient stability. If circuit A inhibits circuit B, and if A deteriorates faster than B, then B can rebound. If A and B each excites itself while inhibiting the other, they can oscillate unpredictably. Now multiply these and other networks by several billion. Should we be surprised if uneven deterioration in the brain results in some quiescent circuit popping back into activity?

Contra Murray, these dynamics are nowhere near being understood by neuroscientists, since they may be the most complex phenomena in the universe. Yet we can be sure that with 86 billion neurons and a trillion synapses, the brain has enough physical complexity to challenge us with puzzles and surprises, none of them requiring a ghost in the machine. A graduate student in computational neuroscience with a free afternoon could easily program an artificial neural network which, when unevenly disabled, exhibited spontaneous recovery or unpredictable phase transitions.

All this assumes there is a phenomenon to explain in the first place. Claims of “terminal lucidity” consist of subjective recollections by loved ones and caregivers. But we know that people are extraordinarily credulous about the cognitive abilities of entities they interact with, readily overinterpreting simple responses as signs of nonexistent cogitation. The first primitive chatbot, Eliza, simulated a therapist in the 1960s using a few dozen canned responses (e.g., “I had an argument with my mother” “Tell me more about your mother”), yet people poured their hearts out to it. With so-called Facilitated Communication, therapists and patients were convinced they were liberating the trapped thoughts of profoundly autistic children with the use of a keyboard, oblivious to the fact that they were manipulating the children’s hands. When there’s desperation to commune with a loved one, any glimmer of responsiveness can be interpreted as lucidity, exaggerated with each recall and retelling. What Murray did not report was any objective indicator of coherence or lucidity, like an IQ test, or a standard bedside neurological battery, or a quiz of autobiographical memory with verifiable details.

A great irony in the attempt to use rigorous scientific reasoning to support some theory of an immaterial soul is that the theory itself (inevitably left unspecified) is utterly incoherent.  If a dybbuk can re-enter a ravaged brain as a gift to loved ones longing for a last goodbye, why are just a few people blessed with this miracle, rather than everyone? Why did the soul leave in the first place, sentencing the loved ones to years of agony? Why can’t the soul just stay put, making everyone immortal? What about when the deterioration is gradual, as when my disoriented grandmother thought she was lost and searching for her parents in the country she had left sixty years before, bursting into tears every time we told her her parents were dead? Was she missing a soul? Was the God who blessed others with a last lucid goodbye punishing her (and us) for some grievous sin?

The theory that the mind consists of activity in the brain, that the brain has a complexity we don’t yet understand (though we understand why we don’t understand it), and that the brain, like any complex entity, is vulnerable to damage and deterioration, has none of these problems.
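Pinker’s rebound scenario is easy to make concrete. Below is a minimal toy simulation (my own illustrative sketch, not anything from Pinker or the neuroscience literature) of two mutually inhibitory circuits whose connections deteriorate at different rates; the suppressed circuit pops back into activity when its faster-failing inhibitor gives out, then collapses in turn:

```python
# Toy model of Pinker's scenario (an illustrative sketch only): circuit A
# inhibits circuit B; A's connections deteriorate faster than B's, so B
# "rebounds" late in the run before the whole system fails.
import numpy as np

def f(x):
    """Saturating activation that squashes drive into the range 0..1."""
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))

steps, dt = 4000, 0.05
a, b = 0.9, 0.1                # circuit A starts dominant, suppressing B
health_a, health_b = 1.0, 1.0  # connection integrity; 1.0 = fully intact
rng = np.random.default_rng(0)
trace = []

for _ in range(steps):
    health_a *= 0.998          # A's circuitry deteriorates faster...
    health_b *= 0.9995         # ...than B's
    # each unit gets tonic drive scaled by its integrity, inhibition from
    # the other unit, and a little biological noise
    a += dt * (-a + f(health_a - 1.5 * b) + rng.normal(0, 0.02))
    b += dt * (-b + f(health_b - 1.5 * a) + rng.normal(0, 0.02))
    a, b = np.clip(a, 0, 1), np.clip(b, 0, 1)
    trace.append(b)

print("B suppressed early:", round(float(np.mean(trace[100:200])), 2))
print("B rebounds late:   ", round(float(np.mean(trace[1400:1600])), 2))
print("B collapses at end:", round(float(np.mean(trace[3800:3900])), 2))
```

Uneven decay plus mutual inhibition yields a transient “recovery” and a subsequent collapse, with no ghost in the machine required.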

Michael Shermer is also skeptical, as he evinced in his podcast with Murray about the book (below). In the podcast Shermer also cites this post on terminal lucidity by Ariel Zeleznikow-Johnston, which is doubtful about the phenomenon but says it needs to be studied neurologically, along with other phenomena associated with death. In the interim, Zeleznikow-Johnston mentions observer bias (a “will to believe”) and ignorance as materialistic explanations of terminal lucidity.

Murray and Shermer’s discussion of terminal lucidity, in which Shermer offers a naturalistic explanation, begins at 1 hour 24 minutes into the podcast below.

And I’ll leave it at that, but will add a quote from a letter by the theologian Dietrich Bonhoeffer, who was executed by the Nazis:

. . . . Weizsäcker’s book on the world view of physics is still keeping me busy. It has again brought home to me quite clearly how wrong it is to use God as a stop-gap for the incompleteness of our knowledge. If in fact the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat. We are to find God in what we know, not in what we don’t know. . . .

The journal Nature touts “two-eyed seeing” (the supposed advantage of combining modern scientific knowledge with indigenous “ways of knowing”)

February 6, 2025 • 10:20 am

The 1953 paper in Nature by Watson and Crick positing a structure for DNA is about one page long, while the Wilkins et al. and Franklin and Gosling papers in the same issue are about two pages each. Altogether, these five pages brought Nobel Prizes to three of their authors (it might have been four had Franklin lived).

Sadly, such concision has fallen by the way now that ideology has invaded the journal. This new paper in Nature (below), a perspective that touts the scientific advantage to neurobiology of combining indigenous knowledge with modern science—the so-called “two-eyed seeing” metaphor contrived by two First Nations elders in Canada 21 years ago—is 10.25 pages long, more than twice as long as the entire set of three DNA papers.  And yet it provides nothing even close to the earlier scientific advances.  That’s because, as you might have guessed, indigenous North Americans do not have a science of neurobiology, or ways of looking at the field that might be helpfully combined with what we already know.  What the authors tout at the outset isn’t substantiated in the rest of the paper.

Instead, the real point of the paper is that neuroscientists should treat indigenous peoples properly and ethically when involving them in neurobiological studies. In fact, the paper calls “Western” neuroscientists “settler colonialists,” which immediately tells you where this paper is coming from.  Now of course you must surely behave ethically if you are doing neuroscience, towards both animals and human subjects or participants, but this paper adds nothing to that already widespread view.  And it gives not a single example of how neuroscience itself has been or could be improved by incorporating indigenous perspectives.

The paper is a failure and Nature should be ashamed of wasting over ten pages—pages that could be devoted to good science—to say something that could occupy one paragraph.

Click below to read the paper, which is free with the legal Unpaywall app, or find the pdf here.

My heart is sinking as I realize that I have to discuss this “paper” after reading it twice, but let’s group its contentions under some headings (mine, though Nature‘s text is indented):

What is “two-eyed seeing”? 

This Perspective focuses on the integration of traditional Indigenous views with biomedical approaches to research and care for brain and mental health, and both the breadth of knowledge and intellectual humility that can result when the two are combined. We build upon the foundational framework of Two-Eyed Seeing [1] to explore approaches to sharing sacred knowledge and recognize that many dual forms exist to serve a similar beneficial purpose. We offer an approach towards understanding how neuroscience has been influenced by colonization in the past and efforts undertaken to mitigate epistemic, social and environmental injustices in the future.

The principle of Two-Eyed Seeing or Etuaptmumk was conceived by Mi’kmaq Elders, Albert and Murdena Marshall, from Unama’ki (Cape Breton), Nova Scotia, Canada, in 2004 [1] (Fig. 1). It is considered a gift of multiple perspectives, treasured by many Indigenous Peoples, which is enabled by learning to see from one eye with the strengths of Indigenous knowledge and ways of knowing, and from the other eye with the strengths of non-Indigenous knowledge and ways of knowing. It speaks not only to the importance of recognizing Indigenous knowledge as a distinct knowledge system alongside science, but also to the weaving of the Indigenous and Western world views. This integration has attained Canada-wide acceptance and is now widely considered an appropriate approach for researchers working with Indigenous communities.

It is, as you see, a push to incorporate indigenous “ways of knowing” into modern science—in this case neuroscience, though there’s precious little neuroscience in the paper. The paper could have been written about nearly any area of science in which there are human subjects. And, in fact, we do have lots of papers about how biology, chemistry, and even physics can be improved by indigenous knowledge (“two-eyed seeing” is simply the Canadian version of that trope).

And as is so often the case in this kind of paper, there are simple, almost juvenile figures that don’t add anything to the text. The one below is from the paper. Note that modern science is called “Western”, a misnomer that is almost always used, and is meant to imply that the knowledge of the “West” is woefully incomplete.

Isn’t that edifying?

What is two-eyed seeing supposed to accomplish?  Some quotes:

Here we argue that the integration of Indigenous perspectives and knowledge is necessary to further deepen the understanding of the brain and to ensure sustainable development of research [4] and clinical practices for brain health [5,6] (Table 1 and Fig. 2). We recognize that, in some parts of the world, the term Indigenous is understood differently. We are guided by the United Nations Permanent Forum on Indigenous Issues that identifies Indigenous people as

[…] holders of unique languages, knowledge systems and beliefs and possess invaluable knowledge of practices for the sustainable management of natural resources. They have a special relation to and use of their traditional land. Their ancestral land has a fundamental importance for their collective physical and cultural survival as peoples. Indigenous peoples hold their own diverse concepts of development, based on their traditional values, visions, needs and priorities.

. . . There are many compelling reasons for neuroscientists who study the human brain and mind to engage with other ways of knowing and pursue active allyship, and few convincing reasons to not. Fundamentally, a willingness to engage meaningfully with a range of modes of thought, world views, methods of inquiry and means of communicating knowledge is a matter of intellectual and epistemic humility [11]. Epistemic humility is defined as “the ability to critically reflect on our ontological commitments, beliefs and belief systems, our biases, and our assumptions, and being willing to change or modify them” [12]. It shares features with interdisciplinary thinking within Western academic traditions, but it stands to be even more enlightening by providing entirely new approaches to understanding. Epistemic humility is an acknowledgement that all interactions with the world, including the practice of neuroscience, are influenced by mental frameworks, experiences and both unconscious and overt biases.

“Humility” and “allyship” are always red-flag words, and here they are supposed to apply entirely to the settler-colonialist scientists, not to indigenous people.

Why is “one-eyed” modern science harmful?  Quotes:

Brain science has largely drawn on ontological and epistemological cultural ways of being and knowing, which are dominantly held in Western countries, such as those in North America and Europe. In cross-cultural neuroscience involving Indigenous people and communities, both epistemic and cultural humility call for an understanding of the history of colonialism, discrimination, injustice and harm caused under a false umbrella of science; critical examination of the origins of current and emerging scientific assessments; and consideration of the way culture shapes engagement between Western and Indigenous research, as well as care systems for brain and mental health.

. . . Why, then, is such engagement with Indigenous ways of knowing not more widespread in human neuroscience research and care? There would seem to be a litany of reasons: ongoing oppression and marginalization of Indigenous peoples in many societies and scientific communities, individual and systemic epistemic arrogance in which only the Western way of knowing is perceived to be of value, lack of knowledge of other knowledge systems, lack of relationships with Indigenous partners that has been fuelled in part by the exclusion and marginalization of Indigenous scholars in academia, challenges to identifying ways of decolonizing or Indigenizing a particular area of study and fear of consequences for making mistakes or causing offence [9,15], among others.

. . . Given existing power imbalances, Western knowledge largely dominates the world in which Indigenous peoples reside and, as a result, there is often no choice as to whether to engage with it. In contrast, non-Indigenous peoples have the privilege to choose whether to engage with Indigenous knowledge systems. Although significant learning about Indigenous knowledge systems for settler colonialists remains, full reciprocity is not necessarily a requirement.

Here we see the singling out of power imbalances, the emphasis on colonialism, and the supposed denigration of valuable “indigenous knowledge systems” (which aren’t defined)—all  of which are part of Critical Social Justice ideology. But note the first sentence above: the implication that “two-eyed seeing” is supposed to actually improve brain science itself.

On neuroethics. In fact, the authors give no examples where it does that. Instead, the concentration of the paper is on “neuroethics”.  I talked to my colleague Peggy Mason, a neuroscientist here, about neuroethics, and she told me that it comes in two forms. The first one, which Peggy finds more interesting, is looking at ethical questions through the lens of neuroscience. One example is determinism, and in Robert Sapolsky’s new book Determined you can see how he uses neuroscience to arrive at his deterministic conclusions and their ethical implications.

The other form of neuroethics is the one used in this paper: how to deal ethically with the animals and people used in neuroscience studies. These are, in effect, “research ethics”, and have been a subject of discussion in recent decades.  As the paper shows above, the real “revolution” in neuroscience touted in the title is simply the realization by those pesky settler-colonialist neuroscientists that they must exercise sensitivity and empathy towards indigenous people (the implication is that they are uncomprehending and patronizing).

The next section shows the scientific vacuity of melding two types of knowledge: the real “two-eyed seeing” objective.

How has two-eyed seeing improved our understanding of neuroscience? No convincing examples are given in the paper, but here are a few game tries:

Historically, Indigenous peoples have been largely excluded from brain and mental health science, or included but never benefited from the scientific advancements. There are also ample examples, in the brain and mental health sciences and elsewhere, in which the cultural beliefs of Indigenous peoples were patently disrespected. A distinct example is the Havasupai Tribe case, where scientists at Arizona State University in the USA used blood samples they had collected from the Havasupai people to conduct unconsented research on schizophrenia, inbreeding and human population migration [20]. The Havasupai people, who have strong beliefs about blood and its relation to their sense of identity, spiritual connection and cultural cohesion, were advised that the blood samples were being collected for purposes of conducting diabetes research. The community filed two lawsuits against the university upon learning about the misuse of their blood samples for research questions they do not support.

In another stark example, results from an international genomics study on the genetic structure of ‘Indigenous peoples’ [sic] recruited in Namibia [21] were compared to results of a study of the ‘Bantu-speaking people of southern Africa’ [22,23]. The Namibian people were the Indigenous San (including the !Xun, Khwe and ‡Khomani) and Khoekhoe people who include the Nama and Griqua, first to be colonized in southern Africa [21]. Among numerous missteps in the research, published supplementary materials contained information entirely unrelated to genomics and other information about the San that was unconsented, private, pejorative and discriminatory.

These examples of violations of research ethics in neuroscience and genomics highlight the need for Two-Eyed Seeing to ensure individual and professional scientific integrity.

Neither of these is an example of “two-eyed seeing” improving our understanding of neuroscience. One is about the proper and ethical way to collect blood from indigenous people; the other is about genetic differences between African populations.

Can we do better? How about an example from studies of mental health?

Other successful studies among the amaXhosa people in South Africa in 2020 exemplify the embodiment of cultural humility and trust-building. Gulsuner et al. [29] and Campbell et al. [30] demonstrated the importance of inviting people with lived experience of a mental health condition, brain and mental health professionals, members of the criminal justice system, local hospital staff as well as traditional and faith-based healers to provide education about severe mental illness and local psychosocial support structures to promote recovery. Through co-design, implementation and evaluation, the researchers assessed the effects of the co-created mental health community engagement in enhancing understanding of schizophrenia and neuropsychiatric genomics research as it pertains to this disorder [30]. They collaboratively presented mental health information and research in a culturally sensitive way, both respecting the local conceptualization of mental health and guarding against the possible harms of stigma [31]. They incorporated cultural practices, such as song, dance and prayer, with the guidance of key community leaders and amaXhosa people that included families affected by schizophrenia, to foster a process of multidirectional enlightenment and, in effect, Two-Eyed Seeing.

Again we see the emphasis on cultural sensitivity, which of course I agree with, but whether and how this method helped us understand how to cure schizophrenia and improve “neuropsychiatric genomics research” is not explained. There may be something there, but the authors fail to tell us what.

Finally, the authors relate the sad story of Lia Lee, a severely epileptic Hmong child in California whose treatment was difficult because the doctors couldn’t communicate with her parents (see here and here); she spent the last 26 of her 30 years in a vegetative state after a catastrophic seizure. Treatment was further impeded because the Hmong parents, who loved Lia deeply, also believed that epilepsy was a sign that she was spiritually gifted, and so were conflicted and erratic in giving her the prescribed medication.  This is an example where indigenous beliefs are harmful to treatment, just as some cultures mistreat the mentally ill because they believe them to be possessed by supernatural powers. Two-eyed seeing is not always good for patients!  From the paper:

Epilepsy serves as a poignant example of how a dual perspective can enrich the spirituality of health and wellbeing, and where collisions with biomedicine can lead to tragic consequences. One example can be taken from the book The Spirit Catches You and You Fall Down, in which author Anne Fadiman [51] documents the story of Lia Lee, a Hmong child affected with Lennox–Gastaut syndrome. Lia’s parents attributed the symptoms of her seizures to the flight of her soul in response to a frightening noise—quab dab peg (the spirit catches you and you fall down; translated as epilepsy in Hmong–English dictionaries) and, although concerned, were reluctant to intervene because they viewed its symptoms as a form of spiritual giftedness. Lia’s doctors were faced with limited therapeutic choices, challenges of communication, and a general lack of cultural competence. Exacerbated by disconnects and failures of both traditional and Western healthcare, responsive options and years of effort were eclipsed in a perfect storm of mistrust and misunderstanding.

Since the 1990s when the book was written, closing gaps in health equity, reducing the marginalization of vulnerable and historically neglected populations such as Indigenous peoples and promoting individual and collective autonomy have become a focus in both neuroscience research and clinical care.

Fadiman’s book is read widely in medical schools, used to promote cultural sensitivity towards patients.  That’s fine (though it couldn’t have helped Lia), but again it doesn’t help us understand neuroscience itself.

What are some of the indigenous practices said to contribute to neuroscience?  Several are mentioned, but they have nothing to do with neuroscience. Here’s one:

. . . there remains significant potential integrating Indigenous theories around the brain and mind. For example, while the Kulin nations conceptualize distinct philosophies of yulendj (knowledge/intelligence), toombadool (learning/teaching) and Ngarnga (understanding/comprehension), views of the mind and brain tend to not be static and individualistic, but holistic, dynamic and interwoven symbiotically within the broader environment. The durndurn (brain) is not just a singular organ, but a part of the body that contains some aspects of a murrup (spirit), within the pedagogy of a broader songline.

This concept of a songline is present across many Indigenous cultures [35]. Although songlines can present as dreaming stories, art, song and dance, their most common use is as a mnemonic. Such is the success of using songlines in memory that it has allowed oral history to accurately survive tens of thousands of years—with accuracy often setting precedent for scientific verification. The breadth of their use would allow the common person to memorize thousands of plants, animals, insects, navigation, astronomy, laws, geological features and genealogy. Whether conceived as songlines, Native American pilgrimage trails, Inca ceques or Polynesian ceremonial roads, all use similar Indigenous methods of memorization [36]. This aligns with modern neuroscience findings that emphasize the capacity of the brain for complex memory processes and the role of mnemonic techniques in enhancing memory retention. Moser, Moser and O’Keefe were awarded the 2014 Nobel Prize in Physiology or Medicine for research that grounded the relationship between memory and spatial awareness when establishing that entorhinal grid cells form a positioning system as a cognitive representation of the inhabited space. Elevated hippocampal activity when utilizing spatial learning encourages strong memorization through associative attachment, and these techniques are readily used by competitive memory champions. Two-Eyed Seeing songlines for the mind and brain build capacity in facilitating a respectful implementation of traditional memorization techniques in broader contemporary settings [37].

Songs and word of mouth allow indigenous people to pass knowledge along. That’s fine, except that knowledge passed on this way may get distorted. Writing—the “settler-colonialist” way of preserving knowledge—is much better and more reliable. It also allows for mathematical and statistical analysis. Again, there is nothing in two-eyed seeing that improves neuroscience, at least nothing I can see.

There’s a lot more in this long, tedious, and tendentious paper, but I won’t bore you. I do think it would make a great pedagogical tool for neuroscience students, who can evaluate the paper’s claims at the same time as discerning its ideological slant (as well as its intellectual vacuity).  We’ve come to a pretty pass when one of the world’s two best scientific journals publishes pabulum like this in the interest of sacralizing indigenous people. Yes, indigenous people can contribute knowledge (“justified true belief”) to the canons of science, but, as we’ve seen repeatedly, that knowledge is usually scanty, overblown, and largely irrelevant to modern science. But Social Justice has stuck its nose into the tent of science, and papers like this are the result. . .

A true story that is mine

July 25, 2024 • 9:15 am

I sent this email to Matthew last night:

Tuesday night I read something about J. D. Bernal in Hitchens’s “God is Not Great,” which I was rereading, and I remembered that everybody called Bernal by a nickname that testified to his wisdom.  I turned out the light and unsuccessfully tried to remember it for a while, then fell asleep. “I’ll think of it tonight,” I told myself.

Sure enough, I woke up at about 3 a.m. and the first thing that popped into my mind was “SAGE”. That was, of course, his nickname.  Clearly my cranial neurons had been turning it over while I slept.  And of course this happens to all of us: we can’t think of something and much later it suddenly comes to us.  Clearly the brain was working on it in the interim.

The brain is truly a wondrous organ!

J. D. Bernal was a polymath who pioneered the study of molecular shape using X-ray crystallography. I should add that to me this is evidence for determinism. Seeing that name activated a program in my brain to dig out his nickname (which had been stored there for several decades since I read his biography), and the program kept running while I was sleeping.

I’m sure readers have similar or even weirder stories. (Matthew says that this happens to him all the time when he can’t think of a word for a crossword puzzle: it comes to him after he goes away and takes a break for a while.)

James Gleick favorably reviews a book arguing that humans have libertarian free will

January 14, 2024 • 10:00 am

The idea that we have libertarian free will, in the real sense of “being able to make any one of several decisions at a given time”, has made a comeback in the pages of The New York Review of Books, a magazine that never quite recovered from the death of editor Robert B. Silvers in 2017. It was once the magazine to read for thoughtful analyses of books, but it’s gone downhill.  I had a subscription on and off, but quit a while back.

But I digress. In the latest issue, the respected author and historian of science James Gleick reviews a recent book on free will, Free Agents: How Evolution Gave Us Free Will by Kevin Mitchell.  I haven’t read the book, so all I can do is reprise what Gleick says about it, which is that Mitchell’s case for libertarian free will is convincing, and that determinism—or “naturalism” as I prefer to call it, since I take into account the inherent unpredictability of quantum mechanics—is not all there is to our actions and behaviors. Mitchell, says Gleick, maintains that natural selection has instilled humans with the ability to weigh alternatives and make decisions—not merely apparent decisions but real ones: decisions that involve weighing alternatives, thinking about the future, and then making any one of several possible choices at the moment of deciding. In other words, determinism doesn’t rule all of our behaviors and decisions. Apparently, this is libertarian free will: facing a restaurant menu, with everything else in the universe the same (a classic scenario), you could have ordered something other than what you did.

The problem is that Gleick never defines “free will” in this way; he only implies that Mitchell accepts libertarian free will, and then tries to show how evolution gives it to us.

But I’m getting ahead of myself: click on the screenshot below to read:


Gleick argues that life without libertarian free will is pointless. I maintain that this is incorrect—that the point of our life is the gratification we get from our actions, and we don’t need libertarian free will for that. All we need is a sense of satisfaction. You don’t even really need that if you define “point” post facto as “doing what you felt you had to do.”  But, say compatibilists like Dennett—and compatibilists are all physical determinists—we need to have some conception of free will, even if what we do is determined, for society would fall apart without it. And Gleick agrees:

Legal institutions, theories of government, and economic systems are built on the assumption that humans make choices and strive to influence the choices of others. Without some kind of free will, politics has no point. Nor does sports. Or anything, really.

. . . If the denial of free will has been an error, it has not been a harmless one. Its message is grim and etiolating. It drains purpose and dignity from our sense of ourselves and, for that matter, of our fellow living creatures. It releases us from responsibility and treats us as passive objects, like billiard balls or falling leaves.

One senses from these statements that the choices we make are not merely apparent choices, conditioned by the laws of physics, but real ones: choices that we didn’t have to make. In other words, we have libertarian, I-could-have-done-otherwise free will.

That construal of free will is buttressed by Gleick’s characterization of Mitchell’s argument as showing that we have purpose, and that purpose (again, not explicitly defined) is the proof that we have libertarian free will:

Agency distinguishes even bacteria from the otherwise lifeless universe. Living things are “imbued with purpose and able to act on their own terms,” Mitchell says. He makes a powerful case that the history of life, in all its complex grandeur, cannot be appreciated until we understand the evolution of agency—and then, in creatures of sufficient complexity, the evolution of conscious free will.

And this purpose is apparently an emergent property of natural selection, not only unpredictable from physics but somehow incompatible with physical laws—which, says Gleick, are only descriptions of the universe and not really “laws” that the substance of our bodies and brains must obey:

This is why so many modern physicists continue to embrace philosophical determinism. But their theories are deterministic because they’ve written them that way. We say that the laws govern the universe, but that is a metaphor; it is better to say that the laws describe what is known. In a way the mistake begins with the word “laws.” The laws aren’t instructions for nature to follow. Saying that the world is “controlled” by physics—that everything is “dictated” by mathematics—is putting the cart before the horse. Nature comes first. The laws are a model, a simplified description of a complex reality. No matter how successful, they necessarily remain incomplete and provisional.

The incompleteness apparently creates the gap where you can find libertarian free will.

And the paragraphs below, describing the results of natural selection, seem to constitute the heart of the book’s thesis:

Biological entities develop across time, and as they do, they store and exchange information. “That extension through time generates a new kind of causation that is not seen in most physical processes,” Mitchell says, “one based on a record of history in which information about past events continues to play a causal role in the present.” Within even a single-celled organism, proteins in the cell wall respond chemically to changing conditions outside and thus act as sensors. Inside, proteins are activated and deactivated by biochemical reactions, and the organism effectively reconfigures its own metabolic pathways in order to survive. Those pathways can act as logic gates in a computer: if the conditions are X, then do A.

“They’re not thinking about it, of course,” Mitchell says, “but that is the effect, and it’s built right into the design of the molecule.” As organisms grow more complex, so do these logical pathways. They create feedback mechanisms, positive and negative. They make molecular clocks, responding to and then mimicking the solar cycle. Increasingly, they embody knowledge of the world in which they live.

The tiniest microorganisms also developed means of propulsion by changing their shape or deploying cilia and flagella, tiny vibrating hairs. The ability to move, combined with the ability to sense surroundings, created new possibilities—seeking food, escaping danger—continually amplified by natural selection. We begin to see organisms extracting information from their environment, acting on it in the present, and reproducing it for the future. “Information thus has causal power in the system,” Mitchell says, “and gives the agent causal power in the world.”

We can begin to talk about purpose. First of all, organisms struggle to maintain themselves. They strive to persist and then to reproduce. Natural selection ensures it. “The universe doesn’t have purpose, but life does,” Mitchell says.
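The “logic gate” part of that passage, at least, is concrete, and it cuts toward mechanism rather than away from it. Here is a toy sketch (mine, not Mitchell’s, with made-up thresholds) of a single-celled organism’s “decision-making” as the pure if-then machinery the quote describes:

```python
# A toy "metabolic logic gate" (my illustration, not Mitchell's model):
# threshold sensors plus if-then rules, with nothing non-physical anywhere.
def sensor(concentration: float, threshold: float) -> bool:
    """A membrane protein 'activates' when its ligand exceeds a threshold."""
    return concentration > threshold

def respond(glucose: float, toxin: float) -> str:
    low_food = not sensor(glucose, threshold=0.3)  # NOT gate
    danger = sensor(toxin, threshold=0.1)          # simple detector
    if low_food and danger:                        # AND gate
        return "sporulate"                         # hunker down
    if danger:
        return "swim away"                         # flee the gradient
    if low_food:
        return "switch metabolic pathway"
    return "keep growing"

print(respond(glucose=0.9, toxin=0.0))  # keep growing
print(respond(glucose=0.1, toxin=0.5))  # sporulate
```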

My response to this is basically “so what?” Natural selection is simply the differential reproduction of gene forms, which, when encased in an organism, can leave more copies when they give that organism the ability to survive and reproduce.  Organisms thus evolve to act as if they have purpose. But that “purpose” is simply anthropomorphizing the results of the mindless process of natural selection.  So, when we decide to go hunting for food, or get pleasure from being with a mate, we can say that those embody our “purpose”. But there’s nothing in all this that implies that, at a given moment, we can make any number of decisions independent of physics.

But, Gleick implies, there is a way we can do this: by leveraging the “random fluctuations” in our brains:

It’s still just chemistry and electricity, but the state of the brain at one instant does not lead inexorably to the next. Mitchell emphasizes the inherent noisiness of the system: more or less random fluctuations that occur in an assemblage of “wet, jiggly, incomprehensibly tiny components that jitter about constantly.” He believes that the noise is not just inevitable; it’s useful. It has adaptive value for organisms that live, after all, in an environment subject to change and surprise. “The challenges facing organisms vary from moment to moment,” he notes, “and the nervous system has to cope with that volatility: that is precisely what it is specialized to do.” But merely adding randomness to a deterministic machine still doesn’t produce anything we would call free will.

That’s correct, though what Mitchell or Gleick means by “random fluctuations in the brain” is left undefined. Robert Sapolsky argues, in his recent book Determined: A Science of Life Without Free Will, that there are no “random” fluctuations in the brain: neurons interact with each other according to the principles of physics.  To have true libertarian free will, those neurons would have to fire in different ways under exactly the same conditions in the brain. Sapolsky spends a lot of time convincingly showing that this cannot be the case. Ergo, no free-will-enabling brain fluctuations.

But, as Gleick says above, randomness alone doesn’t give us agency. Still, under Mitchell’s model it’s essential for free will. And this is the big problem, for how does one’s “will” harness that randomness to come up with decisions that are independent of physical processes? Gleick:

Indeed, some degree of randomness is essential to Mitchell’s neural model for agency and decision-making. He lays out a two-stage model: the gathering of options—possible actions for the organism to take—followed by a process of selection. For us, organisms capable of conscious free will, the options arise as patterns of activity in the cerebral cortex, always subject to fluctuations and noise. We may experience this as “ideas just ‘occurring to you.’” Then the brain evaluates these options, with “up-voting” and “down-voting,” by means of “interlocking circuit loops among the cortex, basal ganglia, thalamus, and midbrain.” In that way, selection employs goals and beliefs built from experience, stored in memory, and still more or less malleable.
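For what it’s worth, the two-stage scheme Gleick describes is trivial to write down as an ordinary algorithm. Here is a bare-bones rendering (my toy sketch, not Mitchell’s actual model): noise proposes candidate actions, and stored values, the “goals and beliefs built from experience,” select among them:

```python
# Toy two-stage "decision": random generation of options, then selection
# by learned values. Both stages are ordinary physical computation.
import random

def two_stage_decision(options, learned_values, noise=0.3, k=3):
    # Stage 1: noisy generation -- a random subset of options "occurs to you"
    candidates = random.sample(options, k)
    # Stage 2: selection -- up-vote/down-vote each candidate by its stored
    # value, perturbed by a bit of neural noise
    scored = [(learned_values[c] + random.gauss(0, noise), c) for c in candidates]
    return max(scored)[1]

menu = ["pasta", "steak", "salad", "soup", "curry"]
values = {"pasta": 0.7, "steak": 0.9, "salad": 0.4, "soup": 0.3, "curry": 0.8}
print(two_stage_decision(menu, values))
```

Randomness plus deterministic selection; there is no third ingredient into which a “will” could insert itself.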

Ergo we have to have the brain’s “randomness”, which is neither defined nor, at least according to Sapolsky, real. Then one somehow harnesses that randomness to come up with one’s decisions:

Mitchell proposes what he calls a “more naturalized concept of the self.” We are not just our consciousness; we’re the organism, taken as a whole. We do things for reasons based on our histories, and “those reasons inhere at the level of the whole organism.” Much of the time, perhaps most of the time, our conscious self is not in control. Still, when the occasion requires, we can gather our wits, as the expression goes. We have so many expressions like that—get a grip, pull yourself together, focus your thoughts—metaphors for the indistinct things we see when we look inward. We don’t ask who is gathering whose wits.

Well, we can always confabulate “reasons” for what we do, but, in my view, the whole process of pondering is simply the adaptive machinery of your brain, installed by natural selection, taking in environmental information and spitting out a solution that’s usually “adaptive”.  And because different people’s brains are wired differently (there is, after all, genetic and developmental variation), people tend to have somewhat different neuronal programs, so they behave in somewhat different ways, often predictable. This is what we call our “personalities”: the programs that are identified with different bodies. “Pondering” is not something we do freely; it’s what’s instilled in our brains by natural selection to produce adaptive behavior. We ponder just as a chess-playing computer ponders: working through programs until one produces the best available solution (in the case of a computer, to make a move that best ensures you’ll win; in the case of a human, to make a move that gives the most “adaptive” result).

In none of this, however, do I detect anything other than giving the name “free will” to neuronal processes that we get from natural selection, processes that spit out decisions and behaviors that could not have been otherwise in a given situation. (That situation, of course, includes the environment, which influences our neurons.) In none of this do I see a way that a numinous “will” or “agency” can affect the physical workings of our neurons. And in none of this can I see a way to do something different from what you did.

In the end, and of course I haven’t read Mitchell’s book, Gleick doesn’t make a convincing case for libertarian free will. Yes, he can make a case for “compatibilist” free will, depending on how you define that (“actions that comport with our personalities,” “decisions not made under compulsion,” etc.). But as I’ve emphasized, all compatibilists are, at bottom, determinists (again, I’d prefer “naturalists”). Remember, determinism or naturalism doesn’t mean that behaviors need be completely predictable—quantum indeterminacy may act, though we’re not sure it acts on a behavioral level—but quantum indeterminacy does not give us “agency”.  “Compatibilist” free will still maintains that, at any given moment, we cannot affect the behaviors that flow from physics, and we cannot do other than what we did. It’s just that compatibilists think of free will as something other than libertarian free will, and there are as many versions of compatibilism as there are compatibilist philosophers.

I can’t find in this review any basis for libertarian free will—not in natural selection, not in the “random” fluctuations of the brain, not in the fact that different people have different personalities and may act differently in the same general situation. You can talk all you want about randomness and purpose and “winnowing of brain fluctuations,” but until someone shows that there’s something about our “will” that can affect physical processes, I won’t buy libertarian free will. Physicist Sean Carroll doesn’t buy it, either. He’s a compatibilist, but argues this:

There are actually three points I try to hit here. The first is that the laws of physics underlying everyday life are completely understood. There is an enormous amount that we don’t know about how the world works, but we actually do know the basic rules underlying atoms and their interactions — enough to rule out telekinesis, life after death, and so on. The second point is that those laws are dysteleological — they describe a universe without intrinsic meaning or purpose, just one that moves from moment to moment.

The third point — the important one, and the most subtle — is that the absence of meaning “out there in the universe” does not mean that people can’t live meaningful lives.

(See also here.)

We are physical beings made of matter. To me that blows every notion of libertarian free will out of the water. I’ll be curious to see how Mitchell obviates this conclusion.

 

h/t: Barry

A new movie about free will, and it’s worth watching

February 12, 2023 • 10:50 am

It must have been at least two years ago when a group of young and eager filmmakers came to my lab in Chicago to spend several hours filming my lucubrations about free will for a movie they were making. I didn’t hear much about the project after that, and assumed that it had died, but no: I just heard that the movie, “Free Will? A Documentary” was out. It’s two hours long, very absorbing for those of us interested in this question, but you’ll have to pay to see it. (As an interviewee, I got a free viewing.)

You can watch the short trailer on YouTube by clicking below; the notes say this:

Free Will? A Documentary is an in-depth investigation featuring world renowned philosophers and scientists into the most profound philosophical debate of all time: Do we have free will?

Featuring physicist Sean Carroll, philosopher Daniel Dennett, writer Coleman Hughes, neuroscientist Heather Berlin, and many more.

The website for the film is here; it was directed by Mike Walsh, produced by Jeremy Levy and Mitch Joseph, and the cinematography is by Matteo Ballatta. They did an extremely professional job, complete with animations, movies, photos of the relevant scientific papers, and so on. You can rent it from either Vimeo or Amazon for only $2.99 (“rentals include 30 days to start watching this video and 48 hours to finish once started”), or buy it to watch permanently for ten bucks. I enjoyed the hell out of it, and if you want to watch it via rental, three bucks is a pittance, especially because it’s as long as and as well produced as any documentary you can see in theaters. And it has a lot of food for thought. I put a few notes below.

The trailer:

The movie is largely a series of talking heads: nearly everyone who’s ever weighed in on free will is here (a notable exception is Robert Sapolsky). You can see physicist Sean Carroll, Massimo Pigliucci, Trick Slattery, Gregg Caruso, Derk Pereboom, Coleman Hughes (new to me on this topic, but very good), and neuroscientist Heather Berlin (also new to me, and also very good). And of course there’s Dan Dennett, who gets more airtime than anyone else, perhaps because he’s the best-known philosopher to deal with free will (he’s written two big books about it), but also because he speaks with vigor, eloquence, and his trademark confidence. I appear in a few scenes, but the concentration is on philosophers.

On the whole, the film accepts naturalism, giving little time to libertarian “you could have done otherwise” free will.  There are two libertarians shown, though: psychologist Edwin Locke (an atheist) and Rick Messing (an observant Jew and, I think, a rabbi). I don’t find them convincing, for, as Carroll points out, the laws of physics have no room for an immaterial “agency” that interacts with matter (our brains and bodies). I would have liked to see a full-on religious libertarian, some fundamentalist who insists that we all have free will because God gave it to us. (Remember, most people are libertarians.)

But everyone else interviewed is a naturalist, all believing that at any one moment you have only one course of action. Whether that can be made compatible with some conception of free will, as “compatibilists” like Dennett hold, is a subject of some discussion in the film. But there are also hard determinists like Caruso and me who spurn compatibilism. In fact, at the end of the film several people, including Dennett, suggest that the free will “controversy” between naturalists on one hand (i.e., “hard determinists” who accept quantum indeterminacy as well) and compatibilists on the other is a purely semantic issue, and perhaps we should jettison the idea of free will altogether. With naturalism settled as true and libertarianism held only by a few philosophers and a lot of religious people, getting rid of the term would make the debate purely philosophical. That’s fine with me, for once you accept naturalism, you can begin dealing with the important social consequences, including how to judge other people in both life and the legal system.

There’s a good discussion of the science, including the Libet and more recent Libet-like experiments (I find them fascinating, and a good argument for naturalism, but libertarians try to find ways around them). The filmmakers do neglect a wealth of information and neurological phenomena that also support naturalism (e.g., confabulation explaining actions caused by brain operations on conscious subjects, the fact that we can remove and restore consciousness, or trick people into thinking they are exercising agency when they aren’t, and vice versa). That’s one of only three quibbles I have with the film. Another is the failure to connect libertarian free will to Abrahamic religions, of which it’s an essential part—a connection that accounts for why more than half of people surveyed in four countries accept libertarian free will. Finally, the philosophers talk a lot about “desert”, which means that, in a retrospective view of your actions, you deserve praise or blame, but the film never defines the term (if they did, I missed it).

But I think they’ve done as good a summary of the issues involved as is possible in two hours, and have neatly woven together in “chapters” the conflicting ideas of people from all camps, letting the academics do all the talking. (There’s a wee bit of necessary narration.) I would recommend that those of you who like to talk about free will on this site ante up the measly three bucks and rent the movie. (The site for renting or buying it from Amazon or Vimeo is here.)

There are eleven “chapters” of the film, which I’ll list to whet your appetite:

  1. What is free will?
  2. The problem of free will
  3. Libertarian free will
  4. Compatibilism
  5. Free will skepticism (includes “hard determinism”)
  6. The great debate: responsibility
  7. Neuroscience
  8. Physics
  9. The “morality club” (i.e., do we need free will to be morally responsible?)
  10. Free will and the law (I think this section should have been longer, but I do get some say in the movie about this issue)
  11. Should we stop using the term “free will”?

Now if you go to the movies for escapism or to see happy endings, this isn’t the film for you. It’s aimed at people who want to see a serious but eloquent intellectual discussion that involves philosophy, physics, ethics, and neuroscience. And the filmmakers did a terrific job, amply fulfilling their goals. Remember, you can’t even get a latte at Starbucks for three dollars, but for that price you can have a heaping plate of brain food!

A meta-analysis of many studies shows no long-term consequences of giving up belief in free will

June 13, 2022 • 9:45 am

One of the reasons that compatibilism is so popular, besides buttressing the comforting idea that we can make a variety of conscious choices at any time (well, that’s the way we feel), is that there’s a widespread belief that if you accept determinism (“naturalism”) as opposed to free will, it will be bad for society. (I prefer to use “naturalism” to mean “one’s actions purely reflect the laws of physics” rather than the more common “determinism”, because some of the laws of physics are indeterministic.) If you think you can’t make more than one choice at any one time, so the argument goes, you become mired in nihilism and irresponsibility, bound to act on your merest impulse, immoral or not.  In other words, the argument for keeping free will claims that naturalists who ascribe our actions solely to physical laws become irresponsible cheaters who cannot be trusted, and free will is thus a vital form of social glue that keeps society cohesive.

Here, for example, are two statements by the doyen of compatibilism, my pal Dan Dennett (sorry, Dan!):

There is—and has always been—an arms race between persuaders and their targets or intended victims, and folklore is full of tales of innocents being taken in by the blandishments of sharp talkers. This folklore is part of the defense we pass on to our children, so they will become adept at guarding against it. We don’t want our children to become puppets! If neuroscientists are saying that it is no use—we are already puppets, controlled by the environment—they are making a big, and potentially harmful mistake. . . . we [Dennett and Erasmus] both share [the view that] the doctrine that free will is an illusion is likely to have profoundly unfortunate consequences if not rebutted forcefully.

—Dan Dennett, “Erasmus: Sometimes a Spin Doctor is Right” (Erasmus Prize Essay).

and

If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do.  Preserving “law and order” without a concept of real responsibility is a daunting task.

—Dan Dennett, “Reflections on Free Will” (naturalism.org)

But you can be a “hard determinist” and still believe in responsibility!

These views are often based on an early study by Vohs and Schooler (2008), which “primed” students by having them read an anti-free-will passage written by Francis Crick, while a control group read a neutral passage. Not only were the anti-free-will readers less likely to accept free will right after the reading, but they also tended to cheat more in a psychological test given immediately thereafter. To me, this is a thin basis on which to make a blanket statement about the long-term effect of denying free will on society.

Since that 2008 study, however, there have been many similar experiments testing whether such “priming” can not only affect belief in free will but also promote a variety of antisocial behaviors. Some studies have attempted to replicate the results of others and failed to do so; these, ironically, include the landmark study of Vohs and Schooler.

I should note that, as the authors of the paper below show, there are many people (including me, though I’m not cited) who feel that there are healthy effects of naturalism, including more empathy for others and a reduced feeling of “retributive justice” (i.e., the idea that people should be punished because they made the wrong choice).

The present study by Genschow et al. (click on screenshot below; pdf is here, reference at bottom) is an attempt to combine all existing studies of this type using meta-analysis. They had two big questions:

  • Research Question 1: Can belief in free will be experimentally manipulated?
  • Research Question 2: Does this have any downstream consequences?

“Downstream” means “after the manipulation”, and not “permanent”!

The answers, as you can see if you read the long paper, are yes, belief in free will can be experimentally manipulated, though the effects aren’t large, and no, the consequences of such manipulation, if any, don’t last long. The authors thus conclude this:

Taken together, there is a debate about whether anti–free will viewpoints should be discussed in the public media. Our findings suggest that the influence on society may be weaker than previously assumed. In this respect, we would like to argue that discussions about the implications of believing in free will should distinguish between scientific facts and philosophical speculations (Schooler, 2010) as well as acknowledge methodological limitations of the cited research (Racine et al., 2017).

In other words, you can promote compatibilistic free will for a variety of reasons (e.g., that it comports with our personal understanding of what “free will” means), but not because belief in naturalism will somehow erode society.

First, some clarification. The authors analyzed 84 studies. Of these, 72 were subjected to meta-analysis to see if “priming” affected belief in free will. (These studies included 124 experiments, of which 31 were published and 93 unpublished.) Further, 44 of the studies that showed successful manipulation of belief in free will were tested to see if there were effects that lasted (these comprised 67 experiments, 43 published and 24 unpublished).

What do the authors mean by “free will”? Apparently the classic contracausal or libertarian “you-could-have-chosen-otherwise” free will:

. . . belief in free will reflects a much broader belief about choice and freedom (e.g., “Do I have a choice? Can I freely choose to do otherwise?”).

They construe the opposite of free will to be “determinism”, though, as I said above, purely physical indeterminism, like quantum effects, could affect what one does at any given moment while still not reflecting conscious choice, and so wouldn’t be part of classical “free will”. (You can’t “choose” to affect the movement of an electron.) I will use “naturalism” instead of the authors’ “determinism”. Though they don’t talk about pure physical indeterminism, it doesn’t affect the results of their studies.

They used two methods to measure the effect of the readings on belief in free will; both gave the same results.

They also analyzed two other aspects of the experiments. The first involved four ways of conducting the “priming”: reading two statements alone, doing that as well as giving a verbal reprise, seeing a video about free will (or a neutral one), or reading a variety of statements that were either “control” or “anti-free will”. None of the experiments involved reading pro-free-will statements, probably because most people already accept libertarian free will and there’s not much room to push that belief higher. It turns out that the most effective way to erode belief in free will is a combination of the two readings plus a verbal summary by the experimenter.

Second, the authors analyzed experiments in which the subjects were asked to summarize or rewrite the messages given to them right after they were primed. It turned out that this form of conscious repetition also increased the erosion of belief in free will due to the experimental manipulation.

The results. 

a. “Can belief in free will be experimentally manipulated?” The meta-analysis showed that over all the experiments, priming did significantly erode acceptance of free will, though not by a huge amount. So yes, beliefs can be affected.  When acceptance of “naturalism” (what the authors call “determinism”) was also tested, it increased, though not as much as acceptance of free will declined.

b. “Does this have any downstream consequences?” But how long do these effects last? When erosion of belief in free will occurs in these studies, is it permanent, or does it last only over the experimental period? The “experimental period” appears to last between a day and a week, so it’s by no means permanent. And “downstream” effects include both experiments where antisocial tendencies were tested right after the priming and ones where the priming was separated from the measurement of antisocial behavior by another, unrelated test. I didn’t look at every experiment, but most appear to do the antisocial tests right after priming, so the effects can only be said to be temporary—a few hours to a week.

The social behaviors tested are shown in Table 1 of the paper, and include measurements of cheating, helping, aggression, conformity, gratitude, punishment, prejudice, moral actions and judgments, cooperation, victim blaming, and other tests. Again, this was a meta-analysis, so all these behaviors were taken into consideration in a single analysis.
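
To give a feel for what “taken into consideration in a single analysis” means, here’s a minimal sketch of how a meta-analysis pools effect sizes from disparate studies: each study’s effect is weighted by its precision, with a between-study variance term added when studies disagree more than sampling error allows. The numbers below are invented for illustration, and the method (a standard DerSimonian-Laird random-effects pool) is generic, not necessarily the authors’ exact pipeline.

```python
import numpy as np

# Hypothetical per-study effect sizes (standardized mean differences) and
# their sampling variances -- invented numbers, NOT the Genschow et al. data.
d = np.array([0.30, 0.05, 0.22, -0.10, 0.15, 0.08])
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02, 0.03])

# Fixed-effect (inverse-variance) pooled estimate
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pool: weights flatten out as tau^2 grows
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {d_re:.3f}, 95% CI +/- {1.96 * se_re:.3f}")
```

A pooled estimate like this, with its confidence interval, is what lies behind statements like “priming did significantly erode acceptance of free will, though not by a huge amount.”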

Finally, their main method of seeing whether there was an effect on social attitudes over all the studies involved “p-curve analysis”, which I’ve never used but the authors describe like this:

In the first step, we ran a p-curve analysis across all dependent variables. While the aim of estimating a population effect size makes a meta-analysis unsuited to evaluate diverse sets of dependent variables, this is not the case for p-curve. Rather than estimating a population effect size, p-curve investigates whether a set of statistically significant findings contains evidential value by testing whether the distribution of p-values is consistent with the existence of a true effect (Simonsohn et al., 2014). Importantly, if confirmed, this does not mean that all included studies show a true effect. Instead, it merely implies that at least one study does (Simonsohn et al., 2014). As such, p-curve can be applied to diverse findings as long as they form a meaningful whole (Simonsohn et al., 2015).
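
The logic is easy to see in a toy simulation: if no true effect exists, then among results that cleared p < 0.05 the p-values are spread uniformly between 0 and 0.05; a true effect piles them up near zero. This is my own illustration of the general p-curve idea, not the authors’ analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def significant_pvalues(true_effect, n_per_group=40, n_studies=20000):
    """Simulate two-group studies; return p-values of the significant ones."""
    a = rng.normal(true_effect, 1.0, (n_studies, n_per_group))
    b = rng.normal(0.0, 1.0, (n_studies, n_per_group))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    return p[p < 0.05]

for effect in (0.0, 0.5):
    p = significant_pvalues(effect)
    # Under the null, significant p-values are uniform on (0, 0.05), so about
    # half should fall below 0.025; a true effect skews them strongly lower.
    print(f"true effect = {effect}: {np.mean(p < 0.025):.0%} of significant p-values < .025")
```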

And they analyzed a subset of the results involving “anti- or prosocial behaviors”:

In a second step, we ran meta-analyses on internally coherent sets of dependent variables. Upon reviewing the literature, one clear set arose—namely, antisocial versus prosocial behavior (for an overview, see Table 1). Hence, we pooled together the studies in this set and subjected them to a meta-analysis testing whether manipulating belief in free will influences social behavior. However, pro- and antisocial behavior is still a relatively broad and unspecific dependent variable. Therefore, in a third and final step, we also ran meta-analyses on three specific dependent variables that have been used in at least five experiments: conformity, punishment, and cheating.

The upshot: there was no statistically significant effect in either analysis. The p-value distribution suggests that not a single study had a “true effect.” Now if you use the psychologists’ way of measuring significance (p < 0.1), there is an overall significant effect on behavior, but using the biologists’ criterion (p < 0.05), the overall result becomes nonsignificant. And using either criterion, when you eliminate the single experiment that had the largest “downstream” effect, the whole effect on behavior becomes nonsignificant (p = 0.128). The effect on antisocial behavior appeared to be significant, but was seen only in published and not unpublished studies, as might be expected. Further, when the studies that showed no effect of the manipulation on belief in free will were eliminated, the “downstream” effect disappeared, so it may have been some kind of artifact.
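
That “eliminate the single experiment with the largest effect” move is a standard leave-one-out sensitivity check: re-pool the studies with each one omitted in turn and see whether significance survives. A minimal sketch with invented numbers (not the paper’s data), where one outlier study carries the whole result:

```python
import numpy as np
from scipy import stats

# Invented effect sizes and variances; the first study is an outlier.
d = np.array([0.55, 0.10, 0.08, 0.12, 0.05, 0.09])
v = np.array([0.03, 0.04, 0.05, 0.04, 0.05, 0.04])

def pooled_p(d, v):
    """Fixed-effect inverse-variance pool; return the two-sided p-value."""
    w = 1.0 / v
    est = np.sum(w * d) / np.sum(w)
    z = est / np.sqrt(1.0 / np.sum(w))
    return 2 * stats.norm.sf(abs(z))

print(f"all studies: p = {pooled_p(d, v):.3f}")          # significant
for i in range(len(d)):
    p = pooled_p(np.delete(d, i), np.delete(v, i))
    print(f"dropping study {i + 1}: p = {p:.3f}")        # only dropping #1 kills it
```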


I should add that there was no attempt to correct for multiple tests of significance, which increases the chance that something will appear significant when it’s really not. Experimenters vary in how they do this correction, but some correction is always needed, and none was done in this study. That means that even the close-to-significant results, of which there were few, were probably not statistically significant. 
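
For readers unfamiliar with such corrections: with m tests, Bonferroni simply divides the significance threshold by m, and the Holm step-down procedure does the same thing slightly more powerfully. A minimal sketch with made-up p-values (nothing here is from the paper):

```python
# Holm-Bonferroni step-down correction -- a minimal sketch.
def holm(pvalues, alpha=0.05):
    """Return a parallel list of booleans: which hypotheses are rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] > alpha / (m - rank):   # thresholds: alpha/m, alpha/(m-1), ...
            break                             # once one fails, all larger p's fail
        reject[i] = True
    return reject

ps = [0.004, 0.03, 0.04, 0.20, 0.60]
print(holm(ps))   # [True, False, False, False, False]
```

Note how a p-value of 0.03, nominally significant, fails once the five tests are taken into account.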

The authors conclude this:

In sum, the analysis showed that the effect of anti–free will manipulations on antisocial behavior was no longer significant after controlling for publication and small sample biases. This was true even when we only included studies that found a significant effect of the manipulation on belief in free will and indicates that there is insufficient evidence for the idea that manipulating belief in free will influences antisocial behavior.

Now there are caveats about all these results (e.g., the downstream effect could have been significant but missed, or there might be an unknown third variable that affected the results, and so on), and the authors describe these in detail. The profusion of caveats makes the authors look almost apologetic for finding no effect, given the widespread view that denying free will will ruin society.

But given that the effect of priming on eroding free will was weak, that there was no meaningful “downstream” effect of trying to make people reject free will, that there was no attempt to correct for multiple tests of significance (a statistical no-no), AND, especially, the “downstream” effects were measured within a week of the initial priming (usually on the very same day), there’s simply no reason to play Chicken Little and say that we must believe in free will because otherwise society will fall to pieces. How can one possibly make statements about the long-term effects on society of rejecting free will and embracing naturalism without a proper test of that hypothesis? I repeat what the authors say above:

Taken together, there is a debate about whether anti–free will viewpoints should be discussed in the public media. Our findings suggest that the influence on society may be weaker than previously assumed. In this respect, we would like to argue that discussions about the implications of believing in free will should distinguish between scientific facts and philosophical speculations (Schooler, 2010) as well as acknowledge methodological limitations of the cited research (Racine et al., 2017).

And even if pure naturalism is true, and most people’s libertarian belief wrong, should we really hide that truth from people for the good of society? It reminds me of the Little People’s Argument for Religion: “we of course aren’t religious, but society needs religion to function properly.” It also reminds me of the Little People’s Argument for Creationism, encapsulated in what may be an apocryphal anecdote about how the wife of the Bishop of Worcester reacted when told that Mr. Darwin had suggested that people were descended from apes. She supposedly said:

“My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray it will not become generally known.”

And that is the same argument many make about the laws of physics, which to many of us rule out libertarian free will: let us pray the implications do not become generally known. Further, if you think that nobody attacks naturalism, or supports some form of free will, by decrying naturalism’s supposedly bad social consequences, you’re wrong. I quoted Dan above, and I could give more quotes. To me, it’s almost never of value to hide the truth about reality as a way to preserve social harmony.

Yes, you can embrace compatibilistic free will even if you think that denying libertarian free will has no consequences for society, but if that’s the way you think, I ask you this: “Why did the authors of this paper go to all the trouble of doing the analysis?”

___________

Genschow, O., E. Cracco, J. Schneider, et al. 2022. Manipulating belief in free will and its downstream consequences: A meta-analysis. Personality and Social Psychology Review. doi:10.1177/10888683221087527

Massimo Pigliucci: “Free will is incoherent”

February 9, 2022 • 10:30 am

I’ve had my differences with Massimo Pigliucci, but when he says something I agree with, I give him praise (see my kudos here for his admirable critique of panpsychism). So I’m always puzzled when he has to work in a slur against me when we do have our differences.

In this case we don’t seem to have any differences on the topic of free will, but he still insists on characterizing Sam Harris and me as “philosophically naive anti-free will enthusiasts”. Massimo’s insults usually take this form: asserting his superior credentials in either biology or philosophy. I’m not going to respond by calling him names; it’s his argument I want to deal with.

That aside—and the “naive” bit did upset me a tad—Pigliucci argues in the article below that the concept of free will is “incoherent”. By “incoherent”, he apparently means that free will in the pure libertarian sense cannot exist because it violates the laws of physics. But of course that’s the argument I’ve been making all along, so in fact we agree. Perhaps the word “incoherent” has a philosophical meaning I don’t fathom (I am, after all, philosophically naive), but if people come to realize that the libertarian (“I-could-have-chosen-otherwise”) concept of free will adhered to by most people and a large proportion of religious believers cannot be true, I will be happy.

Do note that for a long time I’ve lumped physical determinism together with pure indeterminism (as in quantum mechanics) as “naturalism”. It’s naturalism that puts paid to the libertarian concept of free will, not just determinism.  “Contracausal” free will (another name for “libertarian free will”) would violate the laws of physics, and so can be dismissed. As Sean Carroll showed, there is no way that immaterial “will” can influence physical objects, and we already understand the physics of everyday life. Libertarian free will is not part of everyday life.

Anyway, click below to read Pigliucci’s short essay in “Philosophy as a way of life”:

Massimo’s argument seems no different from one I’ve been making for years (it’s not of course my argument; I’m parroting the naturalists who preceded me). A quote:

“Free” will, understood as a will that is independent of causality, does not exist. And it does not exist, contra popular misperception, not because we live in a deterministic universe. Indeed, my understanding is that physicists still haven’t definitively settled whether we do or not. Free will doesn’t exist because it is an incoherent concept, at least in a universe governed by natural law and where there is no room for miracles.

Consider two possibilities: either we live in a deterministic cosmos where cause and effect are universal, or randomness (of the quantum type) is fundamental and the appearance of macroscopic causality results from some sort of (not at all well understood) emergent phenomena.

If we live in a deterministic universe then every action that we initiate is the result of a combination of external (i.e., environmental) and internal (i.e., neurobiological) causes. No “free” will available.

If we live in a fundamentally random universe then at some level our actions are indeterminate, but still not “free,” because that indetermination itself is still the result of the laws of physics. At most, such actions are random.

Either way, no free will.

Note that, as I’ve also maintained (though some readers here disagree), the popular view of free will is wrong because it violates the laws of physics, including both the deterministic ones and the truly indeterminate but statistical quantum-mechanical ones. (Newtonian mechanics is a limiting case of quantum mechanics, but determinism suffices for much of everyday life, like sending rockets to the Moon.)

So where is the incoherence here? Massimo’s argument appears to be this (my take):

a. The universe is governed by the laws of physics. The brain is part of the universe, and behavior (including “choice”) comes from the brain.

b. If the laws are deterministic, we can’t have free will.

c. If the laws are indeterministic, we can’t have free will either, because indeterministic actions are merely random, and even libertarians don’t regard random or capricious behavior as freely willed.

d. Since deterministic and indeterministic laws are all we have, there is no free will, which is conceived of as independent of the laws of physics.
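
For what it’s worth, the skeleton of that argument is a simple disjunction elimination, and it is formally valid; here’s a minimal rendering in the Lean proof assistant (my own formalization of points a through d, with the premises simply assumed):

```lean
-- D: the laws are deterministic; I: they are indeterministic;
-- F: we have libertarian free will. Premises as stated in (a)-(d).
variable (D I F : Prop)

example (laws : D ∨ I)     -- (a)/(d): the laws are one or the other
        (h1 : D → ¬F)      -- (b): determinism rules out libertarian free will
        (h2 : I → ¬F)      -- (c): indeterminism (mere randomness) does too
        : ¬F :=
  laws.elim h1 h2          -- either way, no free will
```

Whatever one thinks of the premises, the inference itself is airtight; the real disputes are over the premises and over what “free will” should mean.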

If that’s “incoherent”, I don’t see why. It’s not a purely philosophical deduction, because determinism and indeterminism are empirical phenomena. And I’m happy that Massimo agrees that there’s no free will in the way most people use the term. At least he doesn’t assert, as compatibilists do, that the popular notion of free will is really a sophisticated Dennett-ian one. Surveys show that it is not: it’s the libertarian notion that both Pigliucci and I say is nonsensical. (Or, in his case, “incoherent.”)

In the rest of the article, Pigliucci discusses the meaning of the Libet experiment as well as interesting newer experiments in which brains are monitored when more complex decisions are made. It turns out that for a simple random decision, like pressing a button or deciding whether to add or subtract, as in Libet’s study, the brain gives a signal before the actor consciously decides what to do. And that signal predicts with substantial accuracy what the actor will do. The predictability has increased as brain monitoring has improved.

Massimo says this:

Libet also asked participants to watch the second hand of a clock and report its position at the exact moment they felt the conscious will to move their wrist. The idea was to explore the connection between the RP [the “readiness potential” detected in the brain before the actor’s decision comes to his/her consciousness] and conscious decision making.

The results were clear, and have been confirmed multiple times since, using different and improved experimental protocols. Unconscious brain activity, measured by the RP, preceded the conscious decision to move the wrist by at least half a second, with more recent studies putting that figure up to two full seconds.

This was interpreted as to mean that the participants had in fact decided to move their wrist quite some time before they became conscious of their decision. The implication being that consciousness had nothing to do with the decision itself, but was rather an after-the-fact interpretation by the subjects.

Philosophically naive anti-free will enthusiasts like Sam Harris and Jerry Coyne, among others, eventually started using the Libet experiments as scientific proof that free will is an illusion. But since free will is incoherent, as I’ve argued before, we need no experiment to establish that it doesn’t exist. What Libet’s findings seemed to indicate, rather, is the surprising fact that volition doesn’t require consciousness.

I don’t in fact remember using the Libet experiments as “scientific proof that free will is an illusion.” You can rule out libertarian free will, as I do when I talk about the subject, from the laws of physics alone, using exactly the same argument that Pigliucci does. What I do say is that insofar as the popular conception of free will requires a conscious decision, it doesn’t seem to work, as consciousness is temporally decoupled from choice, which can be predicted with substantial but not perfect accuracy from brain scans before the conscious choice is made. Again, we have pilpul: a distinction without a difference.

The stuff in Massimo’s piece that interested me was his discussion of a paper that I haven’t read for a while:

Enter a pivotal paper published by U. Maoz, G. Yaffe, C. Koch, and L. Mudrik in 2019 in the journal eLife Neuroscience and entitled “Neural precursors of decisions that matter — an ERP study of deliberate and arbitrary choice.”

The authors set up a series of conditions that allowed them to distinguish between what was happening in the brains of people asked to engage in arbitrary decision making (similar to the original Libet experiment) or in deliberate choices (the latter characterized by different degree of difficulty).

The results were highly informative. They did detect the RP, but only in association with arbitrary, not deliberate decision making. In other words, Libet’s results do not extend to situations when people engage in conscious decisions, and therefore it has nothing to do with the debate on volition.

Maoz and collaborators also built a theoretical model that was able to nicely match the experimental results. On the basis of their model, they suggest that — contra the common view regarding the RP — where arbitrary decisions are concerned “the threshold crossing leading to response onset is largely determined by spontaneous subthreshold fluctuations of the neural activity.” That is, the RP goes up and down randomly until it crosses a threshold that leads to action, in the case of the original experiment, the flicking of the wrist.

Maoz et al.’s model also suggests that two different neural mechanisms may be responsible for arbitrary vs deliberate decision making.
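
The threshold-crossing picture quoted above is easy to simulate. Here’s a toy, drift-free leaky accumulator in the spirit of that description (a sketch of the general idea, with arbitrary parameters, not Maoz et al.’s actual model): activity fluctuates randomly, and “response onset” occurs whenever the noise happens to carry it across a fixed threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_to_threshold(threshold=1.0, leak=0.05, noise=0.15):
    """Leaky accumulator driven by noise alone; return steps until crossing."""
    x, t = 0.0, 0
    while x < threshold:
        x += -leak * x + noise * rng.standard_normal()   # drift-free fluctuation
        t += 1
    return t

# With no signal at all, crossings still occur -- at random, widely spread times.
times = [time_to_threshold() for _ in range(500)]
print(f"median wait: {np.median(times):.0f} steps; "
      f"10th-90th percentile: {np.percentile(times, 10):.0f}-{np.percentile(times, 90):.0f}")
```

The point of such models is that the timing of the “decision” is set by noise crossing a bar, not by a prior act of will ramping up to the threshold.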

That’s interesting, though Maoz et al., while able to model what happened, couldn’t suss out what the “decision” would be using their models or their measurement methods (EEG). That doesn’t, of course, mean that researchers with more knowledge of the brain couldn’t eventually find a way to predict what decision a person will make before it’s made.

But even if the decision is made at the very last second, it doesn’t matter. The whole process of deliberation and “decision” in complex tasks is analogous to the working of a giant computer made of meat. There are inputs, they work through the neurons, and we spit out an “output”: a decision. That decision is still not “free” in the sense that it could have been “made” (via volition) in a different way. It still reflects the deterministic or fundamentally indeterministic laws of physics, and is not made independently of them. If the decision could have been otherwise, it could only be because an electron jumped a different way, not because the actor willed a different outcome. (We still don’t know whether quantum-mechanical indeterminacy really shows its effects in how people behave, and I doubt that it does.)
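
The “computer made of meat” point can be put concretely: given identical inputs and an identical internal state, a physical system yields the same output, so “could have done otherwise” requires different physics, not a different will. A trivially literal toy of my own (obviously not a model of a brain):

```python
import hashlib

def decide(inputs: str, internal_state: str) -> str:
    """A deterministic 'decision': same inputs + same state -> same output."""
    digest = hashlib.sha256((inputs + internal_state).encode()).digest()
    return "press left" if digest[0] % 2 == 0 else "press right"

# Re-running with identical inputs and state always yields the same choice;
# a different outcome requires a different state, not a different 'will'.
print(decide("cue: red light", "state: tired, caffeinated"))
print(decide("cue: red light", "state: tired, caffeinated"))  # identical
print(decide("cue: red light", "state: rested"))              # may differ
```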

But in the end, as Massimo says, “Either way, no free will.”

Neuroscience and our understanding of how we act as we do is a hard but fascinating subject, and experiments like the one above are essential in understanding behavior. But we’re a very, very long way from working out the physical basis for “choice.”

Massimo ends his article this way:

Research like the one conducted by Maoz and colleagues opens fascinating insights into a real scientific question: how do human beings make conscious decisions? The other question, regarding free will, is a non-issue because free will cannot possibly exist in a universe with laws of nature and no miracles. It follows that there is nothing at all that neuroscience can say about it.

I don’t agree. Free will is not a non-issue, and we know that because many people accept it. For them it is an issue! They accept it because they don’t understand physics, because they embrace duality, or because they believe in God and miracles. You can’t dismiss all those people, for they are the ones who make and enforce laws and punishments based on their misunderstanding that we have libertarian free will. They are the ones who put people to death because, they think, those criminals could have chosen not to pull the trigger.

I agree that there’s little that neuroscience can say about free will, but it can say this: Because neurons are material objects that obey the laws of physics, we cannot have free will. That is not “nothing”!

The rest is commentary—and a lot of hard work.

Oh, and Massimo, if you’re reading this, could you just be civil and lay off the insults? I may be philosophically naive, but I can still understand what you’re saying and can still learn from your arguments.