The New Yorker’s hit job on Elizabeth Loftus

April 25, 2021 • 9:30 am

I doubt that psychologist and memory expert Elizabeth Loftus knew that, when Rachel Aviv of the New Yorker interviewed her for a recent profile, Aviv had a hit job in mind. I say this because Aviv makes statements in the piece (click on screenshot below; I think access is free) implying that she, Aviv, believes in the dubious and largely discredited concept of repressed and recovered memory; and Loftus has spent much of her life doing research that caused the discrediting.

When I first saw the piece’s title, I thought, “Wow! The New Yorker is doing some real science pieces now.”  Indeed, the title seems to be about Loftus’s work, which I knew a bit about. I had met Loftus (her friends call her “Beth”) at the 2016 American Humanist Association Meeting in Chicago, where I spoke and she received the Isaac Asimov Science Award for scientific work that advanced humanist values. After the award, Loftus gave a talk on the fallacies of memory, a talk I found quite impressive. (As you’ll see from the TED talk below, she’s a very good speaker.) At the conference dinner, I sat beside Loftus and we had a delightful conversation, which was also a bibulous one because, as I recall, we’d each had more than our share of wine.  But after I wrote the preceding sentence, I looked up my emails from Loftus after the dinner and found one that said “they should have given us wine”, implying that my memory of being tipsy with her was false! What I wrote was an example of the kind of false memory she works on!

Click below to read the New Yorker piece:

Here’s Loftus speaking about her work. You’ll learn a lot more from this 17½-minute talk than you will from Aviv’s piece.

So I looked forward to reading Aviv’s piece, hoping to learn more about Loftus’s work on memory.  As Aviv notes, Loftus is “the most influential female psychologist of the twentieth century, according to a list compiled by the Review of General Psychology.”  She’s written 24 books and more than 600 papers. I haven’t read any of those works, but know Loftus from her talks and from what I’ve read about her, and so anticipated learning a lot more about memory from the New Yorker.

Oy, was I mistaken! For despite the piece's title, it has almost nothing about Loftus's accomplishments, which are many. Instead, Aviv concentrates on Loftus's testifying at the trial of Harvey Weinstein, at the appeal of Jerry Sandusky, and in legal proceedings of other miscreants—while noting that Loftus has only ever refused a single invitation to testify in anyone's defense, for she testifies about the known science, not the defendant's actions. That work alone got her demonized, just as Ronald Sullivan, a Professor of Law at Harvard, was kicked out of his position as a Harvard "faculty dean" at Winthrop House because he worked for Weinstein's defense. Because of this, Loftus was also deplatformed at New York University and snubbed by her colleagues at UC Irvine, where she's a professor.

As someone who worked on the DNA evidence at O. J. Simpson’s trial, and testified about DNA evidence for public defenders in trials for rape and murder, I object to this kind of demonization. (I didn’t take money for any case after the first one I worked on in Chicago.) The job of the defense is to make the prosecution prove its case beyond a reasonable doubt, and if the prosecution is making statements that are scientifically questionable, including, as we see above, using eyewitness evidence, which can be deeply fallible, the defense’s job is to call those statements into question. Everyone deserves a fair trial, including those accused of the most odious crimes, as well as those who are wealthy.

I digress, I suppose, but I see Aviv’s repeated mentions of Loftus’s testimony for Weinstein as an attempt to smear her. There are too many mentions to think otherwise.

But it’s worse, for while Loftus’s work is barely mentioned, you’ll see that Aviv concentrates on Loftus’s personal trials: in particular, her relationship with her late mother.

Virtually the entire article is devoted to Loftus's childhood and adolescence, and a large part of that to a single Skype call Loftus had with her two brothers, largely about their mother, Rebecca. A depressive, Rebecca died, most likely by suicide, when Loftus was a teenager. Loftus is still wounded by this loss. Worse, Elizabeth has very few concrete memories of her mother, and cannot decide whether her mother drowned accidentally or deliberately. Elizabeth had a heart-to-heart talk with her mother the night before she was found drowned, and that makes her feel even worse. Aviv mentions several times that the Skype call, which Aviv was party to, made Loftus cry; Loftus's crying also appears in the article's last sentence. I don't think that's accidental.

After I finished this peculiar article, I wondered why Aviv concentrated so much on Loftus's thoughts about her mother and not on her work. When Loftus is asked whether her work on memory somehow grew out of her attempts to remember her mother, she denies it, for all three of her degrees were in mathematical psychology and had nothing to do with memory. Loftus hit upon the memory work only after she started a job at the University of Washington and came upon police records of car crashes, which piqued her interest in memory.

The rest is part of the history of psychology, but Aviv isn't interested in that. She's obsessed with Loftus's scant remembrance of her mom, and with Loftus's doubts about whether her mother really did kill herself. It goes on and on and on, and the article becomes not only boring, but pointless.

When I was puzzled about this, I asked my friend Fred Crews—former professor and chair of English at UC Berkeley, well-known critic of Freud and his ideas, student and critic of recovered-memory therapy, and a friend of Loftus—if he'd read the piece. He said he had, and had not only found it dreadful, but also had an explanation for its slant. I quote him with permission:

To be brief,  Aviv subscribes to Freud’s original bad idea: People repress traumatic memories, and psychotherapists can coax them into recalling them. With that conviction, Aviv regards Loftus less as a memory scientist than as someone who lets abusers off the hook. In that case, the only interesting question is biographical: how did Loftus acquire this undesirable peculiarity? The result, in Aviv’s prose, is what I would call a “friendly libel.” We are meant to empathize with Loftus’s personal trial, but insofar as we do so, we impugn her testimony as a neutral expert witness.

That assessment seems fair to me, and explains Aviv's neglect of Loftus's work in favor of her "personal trial". In fact, Aviv even impugns some of Loftus's work, noting that one of her famous studies, on car crash memories, had a sample size of only 24. I can't comment on that figure, but sample size alone does not invalidate a study; what matters is whether the sample provides enough statistical power to detect the effect in question. Does Aviv know enough science to raise such a criticism?
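
To make that point concrete, here's a minimal sketch (my own illustration, not anything from Loftus's papers or Aviv's piece) computing the approximate power of a two-group comparison with 24 subjects total, assuming 12 per group and a two-sided test at the conventional 0.05 level. A sample that small is indeed blind to small effects, but it detects large ones fairly reliably, which is why n alone tells you little:

# A rough power calculation for a two-sample, two-sided z-test,
# using the standard normal approximation. Assumed design: 12 per group.
from math import sqrt
from scipy.stats import norm

def two_sample_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a standardized mean difference (Cohen's d)."""
    se = sqrt(2.0 / n_per_group)          # SE of the standardized difference
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    return 1 - norm.cdf(z_crit - effect_size / se)

for d in (0.2, 0.5, 0.8, 1.2):            # Cohen's d from small to very large
    print(f"d = {d:.1f}: power ≈ {two_sample_power(d, 12):.2f}")
# Approximate output: d = 0.2 gives power ≈ 0.07; d = 0.5, ≈ 0.23;
# d = 0.8, ≈ 0.50; d = 1.2, ≈ 0.84.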

Loftus has sacrificed a lot for her work, and although she is highly influential, and to my mind has largely laid to rest the idea that traumatic memories can be repressed and then recovered through therapy, she is still demonized by the kind of people who think that anybody who testifies in court for an odious person is to be shunned. Loftus moved to Irvine because her position at the University of Washington became untenable when she was criticized for asking questions about a woman who said she’d been abused by her mother.

Enough. I want to close by reproducing, again with permission, a letter Crews wrote to the New Yorker criticizing Aviv's execrable hit job on Loftus. The magazine didn't publish it, for the New Yorker has a reputation for allowing only very mild criticism of its pieces and deep-sixing any highly critical letters.

The letter notes that Aviv appears to buy into the idea of repressed but recoverable memories of sexual abuse broached (and later rejected) by Freud. Aviv seems to think that Freud made a mistake when he reversed course and decided that the “repressed” events never happened, but were confected by the patients and manifested as hysteria.

According to Rachel Aviv, Sigmund Freud “realized that his patients had suppressed memories of being sexually abused as children.” In subsequently disavowing that realization, Aviv adds, Freud “walked away from a revelation” of the prevalence of child sexual abuse. Later, Aviv writes that in the 1980s and 90s Ellen Bass—the coauthor of The Courage to Heal—and other theorists were “careful not to repeat Freud’s mistake.” And then Aviv refers again to “Freud’s female patients, whose memories of abuse were believed and then . . . discredited.”

This version of events became popular in 1984 with Jeffrey Masson’s book The Assault on Truth, which argued that classical psychoanalysis was founded on a cowardly retreat by Freud from the truth of his “seduction” patients’ molestations. But Freud scholars have known since the 1970s that this account is wrong.

In the brief period of his "seduction theory," Freud maintained that hysteria is invariably caused by the repression of traumatic abuse memories from early childhood. Although he later claimed that his hysterics had spontaneously told him (in error) about having been molested, the reverse was true. He told them so, because his theory demanded it. Nearly all of his patients at the time disputed Freud's claim, even scoffing at its absurdity. Freud finally abandoned the "seduction" etiology because his colleagues, too, regarded it as "a scientific fairy tale" (Krafft-Ebing). They were entirely right. But in the hands of Bass and other modern proponents of "recovered memory," a theory that collapsed in its own time was rehabilitated for very risky ends.

If you want to see what a charlatan Freud was, I’d highly recommend Fred’s book Freud: The Making of an Illusion.

Full disclosure: here’s a picture I asked someone to take of Loftus and me after the AHA dinner (see my post here). I may not be a completely unbiased observer, but read the NYer piece for yourself and see if you don’t find it weird.

The New Yorker continues to largely ignore or denigrate science, mired as it is in a woke perspective and a view that the humanities are valid “ways of knowing”. Aviv’s piece is a particularly good example of how the magazine misses the boat when it comes to science, obliquely trying to denigrate an influential scientist by concentrating on her life and her own traumas rather than on her peer-reviewed work.

Should mental-health professionals diagnose Trump as mentally ill?

October 3, 2020 • 12:30 pm

It’s one thing for us to call Trump a narcissist or a sociopath, but it’s another thing entirely when a group of mental-health professionals argue that Trump should not be allowed to debate—or should be impeached—because he’s sick in the head.

Psychiatrists generally refrain from diagnosing people whom they haven't examined, adhering to what's called the "Goldwater Rule". The American Psychiatric Association put that rule into place in 1973, after more than a thousand psychiatrists had questioned Barry Goldwater's fitness for office in 1964 on the basis of long-distance diagnoses. Other politicians, including President Clinton, have also been diagnosed as mentally ill by the pros.

After the diagnosing of Trump started in 2016, the APA issued a statement in January 2018 reaffirming the Goldwater Rule:

Today, the American Psychiatric Association (APA) reiterates its continued and unwavering commitment to the ethical principle known as “The Goldwater Rule.” We at the APA call for an end to psychiatrists providing professional opinions in the media about public figures whom they have not examined, whether it be on cable news appearances, books, or in social media. Armchair psychiatry or the use of psychiatry as a political tool is the misuse of psychiatry and is unacceptable and unethical.

The ethical principle, in place since 1973, guides physician members of the APA to refrain from publicly issuing professional medical opinions about individuals that they have not personally evaluated in a professional setting or context. Doing otherwise undermines the credibility and integrity of the profession and the physician-patient relationship. Although APA’s ethical guidelines can only be enforced against APA members, we urge all psychiatrists, regardless of membership, to abide by this guidance in respect of our patients and our profession.

A proper psychiatric evaluation requires more than a review of television appearances, tweets, and public comments. Psychiatrists are medical doctors; evaluating mental illness is no less thorough than diagnosing diabetes or heart disease. The standards in our profession require review of medical and psychiatric history and records and a complete examination of mental status. Often collateral information from family members or individuals who know the person well is included, with permission from the patient.

“The Goldwater Rule embodies these concepts and makes it unethical for a psychiatrist to render a professional opinion to the media about a public figure unless the psychiatrist has examined the person and has proper authorization to provide the statement,” said APA CEO and Medical Director Saul Levin, M.D., M.P.A. “APA stands behind this rule.”

I generally agree, for professionals should behave professionally. Doctors don’t diagnose patients without an exam, and psychiatrists are doctors. As an article in the Canadian Medical Association Journal (CMAJ) noted,  

. . . One reason for The Goldwater Rule is the likelihood of error in a diagnosis made at a distance. A proper diagnosis requires much more than “a review of television appearances, tweets, and public comments,” the American Psychiatric Association noted in its statement. “The standards in our profession require review of medical and psychiatric history and records and a complete examination of mental status. Often collateral information from family members or individuals who know the person well is included, with permission from the patient.”

You can say we already know enough to agree that Trump is mentally ill, but remember, if you want to assert that in court, the perp has to be examined by mental-health professionals. Courts won't accept diagnoses without direct examinations.

Now some mental-health professionals say that there's a "duty to warn" that overrides the Goldwater Rule: a duty to warn about the effect of Trump not just on the well-being of America, but on the well-being of Americans themselves, making them unstable, liable to suicide, and so on. And so a group of 27 mental-health professionals, including psychiatrists, issued a statement last October warning about Trump. An excerpt from that:

Efforts to bring Duty To Warn into the spotlight have been ongoing since Trump first stepped into the political ring. We are joined by mental health professionals from various fields including, but not limited to, psychiatry, psychology, medicine, public health, public policy, and social work; in every field, professionals have been voicing their concern about the president's instability.

We Are Mandated Reporters
Mental-health professionals are mandated reporters with a duty to warn our patients and the community around us if we feel there is a potential danger.  In this case, we collectively feel there is a duty to warn the public of the threat Donald Trump poses both to our nation and the planet.

It is our duty to notice when an individual is a danger to themselves and/or others.

What about the Goldwater Rule?

“The Goldwater Rule is not absolute. We have a ‘Duty to Warn,’ about a leader who is dangerous to the health and security of our patients.” Mental-health professionals are “sufficiently alarmed that they feel the need to speak up about the mental-health status of the president.”

CMAJ counters:

Last October, when a group of 27 mental health professionals, including psychiatrists, published a book arguing that the current US president’s mental state was a danger to the nation, they said they were honouring another medical principle: the duty to warn. The idea behind “duty to warn” is that if you are in a position to know about a danger and have time to alert others, you should do so. Psychiatrists, for instance, are allowed to break doctor–patient confidentiality if they suspect a patient is about to harm a third party.

But part of that duty rests on having done a proper evaluation, according to Dr. David Goldbloom, a psychiatry professor and senior medical adviser for the Centre for Addiction and Mental Health. “You are intervening to abrogate fundamental civil freedoms,” he said. “You can’t do that from having read an article or watched television.”

Of course, we know that Trump is a danger to the country simply because of his statements and actions, and that seems to me independent of whether he has an official DSM diagnosis from professionals.

But Yale psychiatrist Bandy Xenobia Lee, in an interview with Salon (of course), says that it’s her duty to warn people about Trump’s instability.

Lee has a history of trying to publicize her views that Trump is mentally ill; see the section on this in Wikipedia, which also describes her lobbying Congress. That section says this:

in 2017 [Lee] was editor of The Dangerous Case of Donald Trump, a book of essays alleging that Trump suffers from psychological problems that make him dangerous.

. . . In an interview she also said, “whenever the Goldwater rule is mentioned, we should also refer to the Declaration of Geneva, established by the World Medical Association 25 years earlier, which mandates physicians to speak up if there are humanitarian reasons to do so. This Declaration was created in response to the experience of Nazism.”

And it’s possible that some of this has to do with, yes, inequalities in American society:

Lee then stated in an interview with Salon in May 2017 that Trump suffers from mental health issues that amount to a “state of emergency” and that “our survival as a species may be at stake.” She also discussed her political views, linking what she sees as increasing inequality in the United States to a deterioration in collective mental health.

She continues her efforts in the Salon interview (click to read):

First, she argues that Trump shouldn’t be allowed to debate:

Trump spent most of the debate heckling and interrupting, mixed with some blatant lying. How would you assess his debate performance?

The huge error was in allowing the debate to happen in the first place. “How was his debate performance?” is the wrong question to start. A debate presupposes mental health. We cannot pretend to have one when management of psychological impairment is what is warranted. The majority of the country may be horrified at what he is doing, but we continue to help the disorder in every way possible by treating his behavior as normal. It applies first to the politicians, then to the media and then to pundits who do not come out and honestly say: “This is beyond anything I have seen and beyond what I can understand — can we consult with experts?” And experts, for a psychological matter, would be mental health experts. Perhaps even specialists of personality disorders or sociopathy would be necessary, given the severity.

I’m not sure people treated his behavior as normal; the media was full of people saying that he seemed unhinged. Having someone like Lee weigh in that he’s mentally ill and shouldn’t have been allowed to debate adds little to that; in fact, I thought the debate was salutary in one sense: Americans got to see how unhinged Trump is. If they want to elect him after that, well, they’ll get what they deserve.

One gets the feeling, throughout this interview and in Lee’s other writings, that part of the reason for her crusade goes beyond her view that an unleashed Trump will harm America; it may well also involve her blatant dislike of his politics. In that respect she goes over the top in emphasizing the psychological toll of Trump on America, a toll that presumably should have mandated his impeachment:

The reinterpretation of the “Goldwater rule,” as happened at the onset of this presidency, has been exceedingly harmful, in my view, for silence in the face of grave dangers facilitates conditions for atrocities. Last month, we created a blow-by-blow account of how we exactly foretold the president’s mismanagement of the coronavirus pandemic, based on his psychological makeup. We could not effectively convey this in advance, because the public was led to believe that the “Goldwater rule,” which is a guild rule applying only to 6% of practicing mental health professionals, was universal, or worse yet, some kind of law. But in truth, to change a guideline whose purpose is to protect public health to protect a public figure at the expense of public health violates all core tenets of medical ethics.

Lee has been breaching the Goldwater Rule for a long time (I don't know how she gets away with this if she's a member of the APA), yet nothing has happened to Trump despite her books and her many interviews, all making the same point.

And she may well be right that Trump meets the ever-shifting psychiatric criteria for mental illness. I'm no professional, but Trump's behavior seems way, way out of line—out in the tails of the human behavioral distribution. Still, I'm not comfortable with professionals giving a professional opinion by observing Trump the same way we do: scrutinizing his tweets, his press conferences, his debate performance. The man is out of control. But don't psychiatrists need to talk to a patient before they tell the world he's nuts? The effect of Trump on people is obvious, and you don't need to be a mental-health professional to see that his Presidency is risky to America. Having Dr. Lee tell us that, in her professional opinion, he's nuts adds nothing to our fear of the man.

In fact, if people tried to remove Trump from office, or prevent him from debating, based on Lee’s opinion that Trump is mentally ill, it wouldn’t work. People would just laugh at the attempt, and impeachment on the grounds of mental incapacitation wouldn’t do, either, at least not with a Republican Senate.

I can see where Lee is coming from: she’s a forensic psychiatrist and presumably sees nuances in Trump’s behavior that we don’t see. But we don’t need nuances—we know all we need to know, and if a liberal psychiatrist says Trump is certifiably a bull-goose loony, that will have no effect in swaying his supporters. We already have the means to stop Trump, and we can exercise it in the next four weeks by casting our ballots against him.

Bandy Lee and her book.

h/t: Randy

What is it like to be Trump?

September 9, 2020 • 9:00 am

Most of you probably know about Thomas Nagel’s famous article, “What is it like to be a bat?” (article here), which denies a materialistic understanding of consciousness based on our inability to understand what a bat’s consciousness is really like. While philosophers have argued over Nagel’s thesis, there’s little doubt that, at least for the present, we have no way of getting inside a bat’s head to answer his title question.  (I often wonder, while tending the ducks at Botany Pond, what it’s like to be a duck.)

While a bat’s mind is inaccessible for the nonce, that’s also true of any other creature, including other humans. We don’t know what it’s like to be Christopher Walken, for instance. But we can be pretty sure, based on the fact that the neuronal wiring and acculturation of humans in our society is fairly similar, and because we also can get self-reports from people, that the consciousness of our fellow hominins is pretty similar to ours. There are of course exceptions: people in vegetative or comatose states, people with severe mental illnesses, and so on.

Speaking of the latter, when I woke up in the middle of the night last night (I don't sleep well during the pandemic), it suddenly struck me that I have no idea what it's like to be Donald Trump—in a way that's similar to Nagel's question. More than most other humans, Trump's inner life is largely inaccessible to me. That is, his behavior and mentation seem so alien compared to those of other people that I have no idea what's going on in that depilated noggin. Surely, though, although he appears narcissistic, erratic, and foolish to most of us, he thinks he's just fine—tremendous, as he says. He's a "stable genius." He surely thinks that it's other people who are the problem.

The disparity between how Trump describes himself and how he comes across is greater than that of most people, though all of us have a self-image somewhat at odds with how we seem to others. It’s just that in Trump this disparity seems huge. And I wonder if others have entertained this same question.

As a determinist, I can't fault Trump for making the wrong choices about what he does and what he says, or about who he's become. That's all a product of his genes and his environment, and he never really had a choice in the "libertarian free will" sense. But of course we can—and should—call him out for his behavior, because, though influencing the man himself is a lost cause, we might influence others to vote against him.

What is it like to be a Trump? I doubt that it’s pleasant given his obsessive monitoring of how people regard him and his frequent bursts of anger and invective. But I’m sure that if you asked him, he’d respond that he’s “perfect”, that “there’s nobody on Earth happier than I.”

So go the 2 a.m. thoughts during a pandemic.

 


Freud: Charlatan of the mind

March 24, 2020 • 10:00 am

About fifteen years ago, I decided to read Freud. After all, he was touted as one of the three greatest thinkers of our time, along with Einstein and Marx (all Jewish men), and while I found Marx boring, I could at least try to read Freud. And I did: I read a lot of Freud, including his major books on dream analysis, the psychopathology of everyday life, The Future of an Illusion, his book on jokes, his General Introduction to Psychoanalysis, and many of his famous case studies, like “Little Hans” and the “Wolf Man.”

I was appalled. As a scientist, I recognized that his works were tendentious in the extreme. He wasn't following the data, but massaging the data to conform to his preconceptions. In other words, he was riddled with confirmation bias. In fact, I couldn't find a single idea in his works that was new (the "unconscious" had been suggested by others), but I found a lot of ideas that were complete crap (e.g., the Oedipus complex). In the end, I couldn't figure out why he was regarded as such a great thinker. While psychoanalysis was touted by Freud as a "science," there was no science in it: it was in fact the opposite of science—pseudoscience based on faith (a religion, really) and, ultimately, on Freud's ambition to be famous.

Then I discovered that a professor named Fred Crews, once chairman of English at UC Berkeley, had devoted a lot of his writing to criticizing Freud in an objective but hard-hitting way. He had several articles on Freud in The New York Review of Books (e.g., here and here), as well as two excellent books on Freud, which I show below (click on screenshot to go to the Amazon site):

And this more recent book (2017):

The second book, involving years of diligent scholarship, is delightful though distressing, for you’ll discover the true mendacity of this ambitious, preening, and narcissistic man. Crews, once a literary critic adhering to the school of New Criticism, writes extremely well (this is a biography of the early Freud), and simply takes Freud to pieces.

Although some critics dissed the book, I couldn't find a single critique that took issue with Crews's painstakingly accrued facts about Freud's life. These critics seemed to be of the pro-Freud school—that group of people who, even if they decry psychoanalysis, can't bear to hear that the Emperor had no clothes. At any rate, I would urge you to consider the second book for your quarantine reading. It's a page-turner.

Now LiveScience has an article about Freud with a provocative title (click on screenshot):

The spoiler is given in the title, but there are a few pungent quotes from Crews:

“Statistically, it’s conceivable that a man can be as dishonest and slippery as Freud and still come up with something true,” Crews said. “I’ve tried my best to examine his theories and to ask the question: What was the empirical evidence behind them? But when you ask these questions, then you eventually just lose hope.”

As damning an assessment as that is, it wasn’t always like this for the founding father of psychoanalysis, who wrote that mental health problems could be cured by bringing unconscious thoughts back into the conscious realm. In his own time, Freud enjoyed celebrity status as a leading intellectual of the 20th century.

Chief among Freud’s overflow of opinions was the “Oedipus complex,” the hypothesis that every young boy wants to have sex with his mother and so wants to murder his father, whom he sees as a rival. But there’s a catch. The boy also has the foresight to realize that his father is simultaneously his protector. Presented with this challenging scenario, the child is forced to repress his homicidal cravings.

“It’s just about the craziest idea that anyone ever had,” Crews said. When people asked about young girls, Freud hastily came up with another idea, the Electra complex. “It’s just a cut-and-paste job. Suddenly, the little girl wants to have sex with her father,” Crews said. “It’s completely ludicrous.”

I wrote a post about the second book when it came out, and referred to a podcast with Crews. That’s still available, and you can listen to it or download it by clicking on the screenshot, where you’ll get 51 minutes of food for thought:

 

h/t: Bill

Tom Chivers has a theory about the latest Dawkins kerfuffle

February 19, 2020 • 1:00 pm

Tom Chivers is a journalist and science writer who, like me,  was taken aback by the negative reactions to Richard Dawkins’s recent tweet about eugenics. (Remember? Richard said eugenics would “work” in the sense of changing population means in humans, but immediately added that he was against it.) Now, at UnHerd, Chivers has proposed a “theory” to explain the dichotomous reaction. (It did seem pretty dichotomous, with lots of people understanding what Richard was trying to say but a big number demonizing Dawkins for “favoring eugenics.” There were a few, like me, who understood what Richard was saying but thought he should have said it in a longer piece rather than vomiting it out on Twitter. Or not said it at all.)

First, an earlier tweet from Chivers in which he expressed the rudiments of his idea:

 

Click on the screenshot below to read Chivers's theory, which is his:

So Chivers’s idea, which is his, is that there are two types of people: the “high-decouplers”, which, in a statement like Dawkins’s, can easily separate the “is”s from the “ought”s. They can see that he’s making a statement about the malleability of human traits to artificial selection and, at the same time, realize that this doesn’t mean Dawkins favors such intervention.

Then there are the "low-decouplers", who couple Dawkins's "is" statement with his "ought" statement. (I'd prefer to call the groups "couplers" and "uncouplers".) These people embed Richard's "eugenics would work" statement in a political and cultural milieu, and are unable to separate the claim from the context. Ergo Richard, by saying "eugenics works", is somehow justifying Nazism. That isn't an exaggeration, as you can see if you've followed the pushback.

As an example of a low-decoupler, I posted a tweet from a scientist who called Richard a "clown" who was "supporting eugenics" and deserved to be denounced. When I asked in my post if that scientist had actually read what Richard wrote, I was denounced by the person (a woman) as a "sexist asshat". (The exact wording was "So in addition to 'Fuck eugenics' and 'Fuck dawks,' I'd like to add, Fuck Jerry Coyne you sexist asshat".) That, I realized after reading Chivers's piece, was double "low-decoupling": not only was the person unable to decouple Dawkins's "is" from his "oughts", but she was also unable to decouple my mild criticism of her tweet from the presumption that I was a "sexist" (and an asshat, too). What, beyond her own sex, would imply that I was a "sexist"?

So here’s Chivers’s take (a quote, not the full piece):

I have a rule that I try to stick to, but which I break occasionally. That rule is “never say anything remotely contentious on Twitter”. No good ever comes of it. Arguments that need plenty of space and thought get compressed into 280 characters and defended in front of a baying audience; it is the worst possible medium for serious conversations.

. . . The analyst John Nerst, who writes a fascinating blog called “Everything Studies”, is very interested in how and why we disagree. And one thing he says is that for a certain kind of nerdy, “rational” thinker, there is a magic ritual you can perform. You say “By X, I don’t mean Y.”

Having performed that ritual, you ward off the evil spirits. You isolate the thing you’re talking about from all the concepts attached to it. So you can say things like “if we accept that IQ is heritable, then”, and so on, following the implications of the hypothetical without endorsing them. Nerst uses the term “decoupling”, and says that some people are “high-decouplers”, who are comfortable separating and isolating ideas like that.

Other people are low-decouplers, who see ideas as inextricable from their contexts. For them, the ritual lacks magic power. You say “By X, I don’t mean Y,” but when you say X, they will still hear Y. The context in which Nerst was discussing it was a big row that broke out a year or two ago between Ezra Klein and Sam Harris after Harris interviewed Charles Murray about race and IQ.

. . .That’s what I think was going on with the Dawkins tweet. Dawkins thought he’d performed the magic ritual – “It’s one thing to deplore eugenics on ideological, political, moral grounds. It’s quite another to conclude that it wouldn’t work in practice” =  “By X, I don’t mean Y.” He is a nerdy, high-decoupling person, a scientist, used to taking concepts apart.

But many people reading it are not high-decouplers; they hear “eugenics” and “work” and immediately all of the history, from Francis Galton to Josef Mengele, is brought into the discussion: you can’t separate the one from the other.

. . . But I think the decoupling thing makes me understand a bit more why Dawkins’s tweet got people so angry. Sometimes the ritual fails, and the spirits break through the warding circle.

Chivers also explains that Dawkins’s tweet, which seemed to appear out of nowhere, was actually aimed at Andrew Sabisky, a nasty piece of work and a former advisor to Boris Johnson (he appears to have just resigned over racist remarks).

At any rate, Chivers gives some other examples of quotes from people who were demonized because the proper decoupling wasn't done. Some of those quotes are harder to parse, and Chivers seems to have some sympathy with the victims. As he says, "I think the decoupling thing makes me understand a bit more why Dawkins's tweet got people so angry."

Well, the “coupling/decoupling” dichotomy is useful, I think, but hasn’t helped me understand more deeply why Dawkins’s tweet got people so angry. They were angry because they deliberately misinterpreted what he said—either that or they couldn’t read or were just thick. What puzzles me is why so many people were and are so eager to demonize Dawkins. Jealousy is one reason, I suppose, but I don’t think that quite covers it. After all, there are psychological reasons for a seeming inability to decouple that the theory doesn’t cover.

Chivers argues that it's easier for scientists to decouple because they're "used to taking things apart," but I don't buy that, either. It is those who are enraged by Dawkins (and they include many scientists, who have demonized him for his tweet), or who are determined to bring him down, who can't decouple in this case. How bright do you have to be to understand that Richard was talking about the efficacy of artificial selection and not about whether it should be used in humans? Is that so hard—especially when Richard immediately explained what he meant in other tweets?

I will probably use the designations of “couplers” and “decouplers” in the future, as it’s good shorthand for people who link (or don’t link) things that shouldn’t be linked. But I don’t think that giving these groups names helps us understand them or their motivations any better.

Thoughts and prayers: what are they worth?

September 18, 2019 • 9:15 am

Everyone knows about the "thoughts and prayers" sent out after tragedies, a quotidian feature of the daily news. And all of us nonbelievers disparage not only the use of prayers (shown in a Templeton-funded study to have no effect on healing after surgery), but also the uselessness of thoughts—unless conveyed directly to the afflicted person instead of dissipated in the ether.

But an anthropologist and an economist wanted to know more: what is the value of thoughts and prayers (t&p)? That is, how much would somebody in trouble actually pay to receive a thought, a prayer, or both? And would it matter if that afflicted person was religious or just a nonbeliever? Or whether the person offering t&p was religious? So the study below was published in the Proceedings of the National Academy of Sciences (click on screenshot below; pdf here; reference at bottom).

I suppose that, to an economist, the psychic value of getting thoughts or prayers (t&p) from strangers can be measured in dollars, and I’ll leave that for others to discuss. At any rate, the results are more or less what you think: Christians value t&p, nonbelievers don’t.

What Thunström and Noy did was to recruit 436 residents of North Carolina, the state hit hardest last year by Hurricane Florence. Those who were not affected by the hurricane (about 70% of the sample) had experienced another “hardship”. They were then given a standard sum of money (not specified) for participating in a Qualtrics survey, and an additional $5 to be used in the t&p experiment. Among the 436 participants, some were self-identified as Christian, while another group, either denying or unsure of God’s existence, were deemed “atheist/agnostic”. (The numbers in each group weren’t specified.)

The experiment also included people offering thoughts and prayers: people who were recruited to actually give them to those who were afflicted. These people included Christians, the nonreligious, and one priest who was “recruited from the first author’s local community.” Each offerer received a note detailing the travails of an afflicted person, and instructing them to offer either a thought or a prayer (it’s not clear whether the names of the afflicted were included in the note, but of course God would know).

To value the thoughts and prayers, each afflicted participant was offered two alternatives, between which a computer decided: receiving an intercessory gesture or not receiving one. Payments could be positive (you'd actually give money to receive the gesture) or negative (you'd pay to not receive it). The amount, says the paper, varied between $0 and $5 (the sum given for participating in the study), and subjects stated this "willingness to pay" (WTP) before the computer made the choice.

The experiment isn’t described very well, and there’s no supplementary information, but I’ve taken some other details from second-hand reports of the studies, with the reporters apparently having talked to the authors. At any rate, here are the results, indicated in how much money people would give up for t&p, including both Christians (dark bars) and atheists/agnostics (light bars). Since atheists/agnostics wouldn’t be praying, the only alternative people were offered to receive that group were “thoughts”.

(from paper) The value of thoughts and prayers from different senders (95% confidence intervals displayed; n = 436).

Christians would always give up an amount of money significantly greater than zero for both thoughts and prayers, except when the thinker was a nonreligious stranger, to whom they'd pay $1.52 not to receive thoughts (dark bar below zero). Since the authors are social scientists, they use a significance level of 0.1; "hard scientists" use at most 0.05. That $1.52 is significantly different from zero under the laxer criterion but not under the stricter one that a scientist would use.
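
To see how much hangs on that choice of threshold, here's a minimal sketch with made-up numbers (the estimate matches the paper's $1.52, but the standard error is my own assumption, since the raw data aren't given): the very same willingness-to-pay estimate can clear the 0.1 threshold while failing the 0.05 one.

# Illustrative only: how one estimate passes alpha = 0.1 but fails alpha = 0.05.
from scipy import stats

mean_wtp = -1.52     # dollars paid to AVOID thoughts from a nonreligious stranger
se = 0.85            # hypothetical standard error (not reported in the paper)
n = 436              # participants, as reported

t_stat = mean_wtp / se
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # two-sided p-value
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")     # t = -1.79, p = 0.074
print("significant at 0.10:", p_value < 0.10)     # True  (the authors' criterion)
print("significant at 0.05:", p_value < 0.05)     # False (the stricter convention)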

Christians would of course offer the most money ($7.17) for prayers from a priest, less money ($4.36) for prayers from a Christian stranger, and still less ($3.27) for thoughts from a Christian stranger, though this doesn’t appear to be significantly different from the price for prayers from the Christian stranger (the statistical comparison isn’t given).

In contrast, atheists/agnostics don't give a rat's patootie about t&p. In fact, they'd pay money to have priests or Christians not offer them thoughts and prayers, as you can see from the three light bars to the left, which are all below zero. What surprised me is that the nonbelievers would pay more to avoid prayers from a Christian stranger than from a priest ($3.54 versus $1.66, respectively), while they'd pay an intermediate amount ($2.02) to avoid getting thoughts from a religious stranger (these are all significantly different from zero). Finally, as you'd expect, nonbelievers don't give a fig for thoughts from other nonbelievers, as we're not superstitious: they'd pay a mere 33¢ to get thoughts from nonbelieving strangers.

There’s another part of the experiment in which participants were asked to give their level of agreement or disagreement to the statement, “I may sometimes be more helped by others’ prayer for me than their material help.” This “expected benefits index” (EBI) explains a great deal of the variation in the amount of money people were willing to pay for prayers and thoughts (or not pay for prayers and thoughts).

What does this all mean? To me, nothing more than the obvious: religious people value thoughts and prayers more than do nonreligious people. Moreover, religious people do not value thoughts from nonbelievers, and nonbelievers give negative value to thoughts or prayers from Christians, and no value to thoughts from fellow nonbelievers. That’s not surprising.

What is a bit surprising is that Christians would sacrifice money to get thoughts and prayers, and would pay just about as much for thoughts from other Christians as for prayers from other Christians. (Prayers from priests, however, were the most valuable, showing that the Christians really do believe that priests have more power to help them than do everyday Christians.) I was also surprised that nonbelievers would pay money to avoid thoughts and prayers from Christians. Since we think these are ineffectual, why pay to avoid them?

In general, I saw the study as weak, and afflicted by a failure to fully describe the methods as well as the use of an inflated level of statistical significance (0.1).  All that it really confirms is that Christians think that thoughts and prayers really work; i.e., that they believe in the supernatural. But we knew that already. I am in fact surprised that this study was published in PNAS, which is regarded as a pretty good scientific journal.

_______________________

Thunström, L. and S. Noy. 2019. The value of thoughts and prayers. Proceedings of the National Academy of Sciences USA.

Trigger warnings: new study says they don’t work for traumatized people

July 16, 2019 • 11:15 am

Reader Gregory called my attention to a tweet from a Ph.D. student at Harvard about a new paper that was just put on the internet. It shows that, for those who survived trauma, there's no evidence that trigger warnings reduce anxiety; they may even be harmful.

While there’s no indication that the paper has yet been accepted for publication, preprints are increasingly appearing before they show up in professional journals, and this is one of them. You can download it by clicking on the link below:

 

While trigger warnings have been promoted as creating "inclusivity" and reducing trauma, critics decry them for impeding education and instilling a sense of victimization in people, especially survivors of trauma. But the pro-trigger-warning arguments have been based on what people feel is right, not on data. In fact, as the authors conclude from a useful summary of previous work, there is no evidence that putting trigger warnings before exposure to potentially damaging material reduces anxiety at all, and some evidence that the warnings may exacerbate anxiety. (The authors' table of relevant literature is very useful.)

These earlier studies, however, were limited to people who hadn't experienced trauma—neglecting the very people at whom trigger warnings are aimed. After all, the goal of such warnings is to avoid re-traumatizing people by exposing them, without warning, to subjects connected with the events that damaged them psychologically, events that in some cases gave them PTSD.

I won’t go into detail about the rather complicated methodology of the paper, which uses numerous analyses, controls, and measurements. Suffice it to say that the subjects (451 of them) were surveyed for things like degree of trauma (all had been traumatized), psychological diagnoses, nature of their trauma, various demographic variables, and so on. They were then exposed to three types of readings, with or without (control) trigger warnings: “neutral” readings (like a character description from Moby-Dick), “mildly distressing” (like a description of a battle but without gore), or “markedly distressing” (characterized as things like “graphic scenes of violence, injury or death; e.g., the murder scene from Dostoevsky’s Crime and Punishment”). The trigger warnings that were issued half the time were these:

TRIGGER WARNING: The passage you are about to read contains disturbing content and may trigger an anxiety response, especially in those who have a history of trauma. 

The authors’ null hypothesis was that the effect of trigger warnings—effects on things like anxiety or perceived centrality of trauma in your personality—was zero.

In general, the results either supported the null hypotheses or were "ambiguous", but almost always in the direction of trigger warnings increasing rather than reducing anxiety. This study did not replicate the results of an earlier paper showing that trigger warnings increased individuals' sense of their own vulnerability and their sense that others were vulnerable. In other words, it didn't support an earlier study showing that trigger warnings were harmful.

In general, then, the authors showed that there’s not much of an effect of trigger warnings one way or the other, and the “way” they work is usually counterproductive but insignificantly so. Nor did the type of trauma an individual experienced have any effect on this conclusion.
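
A note on method: "supporting a null hypothesis" isn't the same as merely failing to reject it; it's usually established with an equivalence test. Here's a minimal sketch (with simulated data, not the paper's, and an equivalence bound I chose for illustration) of the TOST procedure, which runs two one-sided tests asking whether the group difference lies within bounds small enough to count as negligible:

# Illustrative TOST equivalence test on simulated anxiety scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
warned  = rng.normal(5.0, 2.0, 200)   # simulated anxiety scores, warning shown
control = rng.normal(5.0, 2.0, 200)   # simulated anxiety scores, no warning

def tost_ind(x, y, bound):
    """Two one-sided tests: is the mean difference inside (-bound, +bound)?"""
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df = len(x) + len(y) - 2
    p_lower = stats.t.sf((diff + bound) / se, df)    # H0: diff <= -bound
    p_upper = stats.t.cdf((diff - bound) / se, df)   # H0: diff >= +bound
    return max(p_lower, p_upper)                     # small p => groups equivalent

# With a bound of half an anxiety point, a small p-value supports the claim
# that any effect of the warning is negligibly small.
print(f"p(equivalence) = {tost_ind(warned, control, bound=0.5):.4f}")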

There were, however, two effects—and counterproductive ones. The first is this:

We found substantial evidence that giving trigger warnings to trauma survivors caused  them to view trauma as more central to their life narrative. This effect is a reason for worry. Some trigger warnings explicitly suggest that trauma survivors are uniquely vulnerable (e.g., ” …especially in those with a history of trauma”). Even when trigger warnings only mention content, the implicit message that trauma survivors are vulnerable remains (why else provide a warning?). These messages may reinforce the notion that trauma is invariably a watershed event that causes permanent psychological change.

And there’s also a bit of evidence that trigger warnings increase anxiety for trauma suffers who also have PTSD.

These results are clearly counterproductive to the aims of having trigger warnings. Because of these results, and mainly because there was no evidence that trigger warnings had any of the intended effects, the authors conclude that “If there is no good reason to deploy [trigger warnings] in the first place, we need not require strong evidence of harm before abandoning them.”  If they’re not helpful, and can even do some harm, as well as imbue people with a stronger victimhood narrative, then why use them? According to this paper, we shouldn’t.

Will this paper have any effect? Will colleges abandon trigger warnings? I wouldn’t count on it. Since when has evidence ever changed the mind of the woke? That said, even if the traumatized aren’t helped by such warnings, I would probably still let students know if I were showing something gory in class, like someone being beheaded. Fortunately, because I taught evolutionary biology I never had to show stuff like that, nor issue any trigger warnings.

 

Bill Nye screws up when tackling a question about free will

June 11, 2019 • 9:15 am

The wheels fell off the Science Guy juggernaut a long time ago, but Bill Nye still tries to heave the ungainly cart forward, desperately trying to remain relevant. As you know, I haven’t been a fan of his “comeback,” for his attempts to sell science have been ham-handed and embarrassing. (See some of my criticism here.)

Here, in a video made in 2016,  he goes way out of his depth to answer a reader’s question on The Big Stink. I came across this while watching videos about free will by genuinely smart people, and then cringed while I watched this one.

In this video an inquisitive man named Thomas asks Nye whether he, Thomas, has free will, which the inquisitor interprets as libertarian free will: an independent ego, not mere neural causation, in control of one's thoughts and actions. He's clearly asking about libertarian free will because he sets his notion of free will in contrast to physical determinism. Thomas also asks whether the "uncertainty principle" may allow us to have some free will.

Listen to Nye’s answer. Here’s how he screws up—virtually every sentence is irrelevant or dumb:

  • Nye says we have “free will up to a point”, but doesn’t say what that point is.
  • He says that the evolutionary drives to mate and eat have something to do with free will, but that's completely irrelevant.
  • He says that there is heritability of behavior: “Members of the same family tend to do the same things.” Well, that may be some evidence against free will if there’s genetic determination to “decisions”, but Nye doesn’t make that connection; he seems befuddled.
  • Nye says, “I know I have made decisions based on things that happened around me that I wouldn’t have made without being informed by history or what I’d noticed. I know I have. Now if that turns out not to be true, I’d be very surprised.” How is this relevant? Under either libertarian free will (which I and nearly all scientists reject), compatibilism (a concept of “free will” that accepts determinism) or pure “hard determinism”, your actions will be influenced by your environment, including your interactions with others. Environmental influence of actions is irrelevant to the question of determinism.
  •  Nye starts riffing on the uncertainty principle, but never mentions the important caveat that even if our behaviors are in part purely indeterministic because of quantum indeterminacy, that doesn’t give us any agency or free will.  What does the random position of an electron have to do with “will”?
  • Nye continues by saying, correctly, that “our brains are chemical reactions, and chemical reactions, at some level depend on quantum mechanics.” But then he adds, “At some level, there is randomness in what we think, because we’re made of chemicals that have randomness.” But there is no evidence that our thoughts and behaviors are influenced at all by quantum phenomena.
  • Nye then says that “Human behavior is generally predictable”. So what? That would be true under either pure determinism or libertarianism.
  • He winds up by saying that we may very soon understand the nature of consciousness via the construction of complicated computers. “As long as they’re plugged in—the computers—carry on.” How embarrassing! Nye never tries to connect consciousness with free will. His random emissions of thought remind me of George W. Bush, or worse.

One gets the impression here that Nye doesn't have a clue how to answer Thomas's question, which I could answer without all this irrelevant piffle. (Of course, readers who are either libertarians or compatibilists would disagree with me.) But Nye doesn't even have a viewpoint here: he never says whether or not he thinks we have free will. I think inquisitor Thomas is far more aware of the issue than is Nye.

Nye also says he’s a scientist, which is debatable since he was an engineer and hasn’t done any science since at least 1986. I’m not a big credential critic, but Nye keeps saying he’s a scientist as a way of gaining credibility. I don’t even say I’m a scientist any more because I no longer do science. Nye is a science popularizer, and no longer a good one.

This guy really needs to hang it up. As far as I can see, the Science Guy is no longer promoting science, but only himself.

Is decreasing empathy causing increased disruption on campus?

April 25, 2019 • 9:00 am

This short paper on the NPR website (click on screenshot) describes research suggesting that the empathy of young Americans has decreased over the past fifty years. I’m not familiar with this research, but will provisionally assume that the results described are correct.

Here are a few quotes (it’s a short article):

. . . more than a decade ago, a certain suspicion of empathy started to creep in, particularly among young people. One of the first people to notice was Sara Konrath, an associate professor and researcher at Indiana University. Since the late 1960s, researchers have surveyed young people on their levels of empathy, testing their agreement with statements such as: “It’s not really my problem if others are in trouble and need help” or “Before criticizing somebody I try to imagine how I would feel if I were in their place.”

Konrath collected decades of studies and noticed a very obvious pattern. Starting around 2000, the line starts to slide. More students say it’s not their problem to help people in trouble, not their job to see the world from someone else’s perspective. By 2009, on all the standard measures, Konrath found, young people on average measure 40 percent less empathetic than my own generation — 40 percent!

It’s strange to think of empathy – a natural human impulse — as fluctuating in this way, moving up and down like consumer confidence. But that’s what happened. Young people just started questioning what my elementary school teachers had taught me.

Their feeling was: Why should they put themselves in the shoes of someone who was not them, much less someone they thought was harmful? In fact, cutting someone off from empathy was the positive value, a way to make a stand.

Author Rosin describes some neurological studies of empathy, and notes that what seems to trigger it most strongly is a human conflict in which you favor one side over the other.  This, of course, is intensified by tribalism, in which you don’t have to think very hard about which side to empathize with. Fritz Breithaupt, a professor at Indiana University who studies empathy, says if you embrace the empathy born of tribalism, “basically you give up on civil society at that point. You give up on democracy. Because if you feed into this division more and more and you let it happen, it will become so strong that it becomes dangerous.”

Breihaupt’s solution to this tribalistic empathy seems bizarre, however,

In his book [The Dark Sides of Empathy, to be released June 15], Breithaupt proposes an ingenious solution: give up on the idea that when we are “empathizing” we are being altruistic, or helping the less fortunate, or in any way doing good. What we can do when we do empathy, proposes Fritz, is help ourselves. We can learn to see the world through the eyes of a migrant child and a militia leader and a Russian pen pal purely so we can expand our own imaginations, and make our own minds richer. It’s selfish empathy. Not saintly, but better than being alone.

Maybe it’s better than being alone, but it surely doesn’t inspire the kind of helpful action that is thought to be a benefit of empathy. Yes, empathy can be divisive and increase tribalism, but it can also increase charity. I would favor a less tribalistic empathy but also a striving to see the point of view of your opponents in other “tribes.”

NPR also has a 52-minute show on this topic that you can hear by clicking on the screenshot below (the page also has a transcript). I haven’t yet listened as I just discovered it and must be off.

When I read Rosin’s piece, I started thinking that if the decline in empathy among students is as real and as substantial as touted above, it may help explain the bizarre entitlement and aggrieved behavior of college students like those I’ve described at Middlebury, Williams, and Evergreen State.  Consider the inevitable demands that these students make of their college administrations: they are rarely about improving society as a whole. Rather, they are about the personal comfort and well-being of the complaining students: demands meant to improve their own local situation. The students want courses that suit their needs and ethnicities, more therapists, weekend trips to the city, segregated housing, free food in the dining halls, and so on. While these demands may cite things like “universal structural racism,” they ask not for change in society as a whole, but for change at their own college.

This seems to me to contrast with the rebelling college students of the Sixties. The demands back then were more universal and less restricted to the local situation or to the students’ own welfare. The demands were for an end to racism in the country as a whole, an end to nuclear weapons, an end to the Vietnam War, and so on. You didn’t hear demands for therapists, free trips, or free food.

I realize that I may sound like a grumpy old man here, but I do perceive this difference, and wonder what readers think—particularly those of a certain age who have experienced college culture over the past fifty years. For it is the combination of increasing tribalism and decreasing empathy that could well produce the kind of demands and entitled behavior of college students that we’ve seen lately. These demands see the college itself, and its white “structural racism”, as the enemy, and foster a kind of tribalism that manifests itself in demands for “affinity” (segregated) housing in which each ethnic group (presumably Blacks, Asian-Americans, and Hispanics) gets to live by itself. (Imagine what would happen if white students were to demand that kind of housing!) There may be empathy in there, but it’s surely tribalistic empathy and an unwillingness to even engage the “enemy” with civil discourse.

At any rate, there are other theories as well, such as those of Jon Haidt and Greg Lukianoff in their recent book The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure.

As I wrote last November:

[Lukianoff and Haidt’s] worry is that students have absorbed what they call the Three Great Untruths, and these are what’s driving the bizarre behavior on campus. Those untruths are these, each exemplified with a motto:

1.)  We young people are fragile (“What doesn’t kill you makes you weaker.”)

2.) We are prone to emotional reasoning and confirmation bias (“Always trust your feelings.”)

3.) We are prone to “dichotomous thinking and tribalism” (“Life is a battle between good people and evil people.”)

The book is, then, an exposition of this thesis, an exploration of why students have become this way when they were different twenty years ago, and, finally, suggested remedies to buttress the emotional strength of students, make them think more rationally, and stop them from living in a Manichean world of Good People versus Bad People.

The causes of this behavior are, say the authors, sixfold: the rapid growth of campus bureaucracy that gives students someone to complain to, and is itself self-perpetuating; the rising rates of depression and suicide in young people; the lack of unsupervised play in kids (parents don’t let kids roam free much these days); a culture of “safetyism” in parents, who have grown overprotective and micromanaging in the face of an environment that’s far safer than it used to be; increased political polarization in America; and the transformation of students’ desire for “justice” into an ideology that demands equal outcomes rather than equal opportunities (this is the form of social justice that Lukianoff and Haidt decry).

They talk about tribalism, of course, but not so much about the decrease in empathy. However, overprotectiveness and political polarization could well help erode empathy.

h/t: Wayne

New study: Belief in free will doesn’t make you act better

November 1, 2018 • 11:30 am

Is belief in free will necessary, as many claim, to keep society harmonious? The idea behind that claim is that if you’re a determinist, you’re going to be immoral, criminal, or nihilistic. But is there data supporting that claim?

A couple of previous studies have found a positive association between “prosocial” (i.e., good) behavior and either belief in free will or “priming” with passages promoting free will (vs. passages promoting determinism). But, as I wrote last year, some of these have problems:

One of the famous papers used to justify compatibilism was published by Kathleen Vohs and Jonathan Schooler in Psychological Science, “The value of believing in free will: Encouraging a belief in determinism increases cheating.” But that paper is problematic. Besides its design flaws (i.e., “cheating” was tested shortly after students read passages either promoting or denigrating free will, with no long-term monitoring of behavior), it’s also failed to be replicated at least twice (see here, here, and here). And there’s at least one paper showing that accepting determinism makes you more empathic and less vindictive, which isn’t that surprising if you don’t think people are able to “decide” whether to do good or bad things.

Further, that same post reports a study showing that belief in free will is associated throughout the world with belief in strong criminal punishment. There are other studies that show an association between prosociality and belief in free will, but nearly all of them test “nice” behavior only over a short period, usually before the subjects leave the psychology lab! Nevertheless, these studies are touted by free willies as well as compatibilists as supporting the claim that belief in some sort of free will is essential for a smoothly running society.

A new paper in Social Psychological and Personality Science (free access here with legal Unpaywall app, pdf here, reference at bottom and access by clicking on title below) aimed to determine whether those who believe in free will not only show momentary increases in prosociality, but have “nicer” personalities. As authors Damien Crone and Neil Levy state (also giving the conclusion):

The overwhelming majority of studies of the FWB–moral [JAC: FWB is “free will belief”] behavior association involve undermining FWBs and observing momentary lapses in moral behavior, with (to our knowledge) only one study testing the association between dispositional FWBs and moral behavior (Baumeister et al., 2009). As the opening quotes suggest, these findings have been collectively interpreted as implying that people with situationally or dispositionally low FWBs exhibit similar deficits in moral behavior. However, there is little data directly addressing the question of whether free will believers are generally nicer people. Here, we report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the FWB–moral behavior association. Unexpectedly, we found no association between FWBs and moral behavior.

So have a look at the paper. I’ll summarize its results only briefly.

The authors did four large studies (sample sizes ranged from 197 to 294) measuring various beliefs about free will along with the degree of the subjects’ “prosocial” and “antisocial” behaviors. The tests involved these assessments (a toy sketch of the basic analysis follows the list):

  • Degree of belief in determinism, free will, fatalism, dualism, and so on. There were several kinds of tests for this.
  • Measures of prosocial behavior. These involved various games in which people were given the chance to be magnanimous and charitable; the games were called “charity dictator games”. Other tests were used as well, including having subjects divide money between themselves and various designated charities.
  • Measures of antisocial behavior. This involved the opportunity to cheat when self-reporting the outcomes of die rolls, since the payoff to the roller depended on the reported result.
  • A “moral identity” test in which individuals were asked to identify themselves with a person having nine “moral” traits (compassion, fairness, etc.).
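
To make this concrete, here is a toy sketch (mine, not the authors’ code) of the kind of correlation at issue: a simulated free-will-belief score set against simulated dictator-game generosity. All variable names and numbers are hypothetical.

```python
# Toy sketch (not the authors' code): correlate a free-will-belief score
# with dictator-game generosity in simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 250  # roughly the per-study sample sizes reported (197-294)

# Hypothetical measures: a 1-7 free-will-belief scale and the number of
# dollars (out of 10) given to charity in a dictator game. The two are
# generated independently here, so the true correlation is zero.
fwb = rng.uniform(1, 7, n)
generosity = rng.integers(0, 11, n).astype(float)

r, p = pearsonr(fwb, generosity)
print(f"r = {r:.3f}, p = {p:.3f}")  # near zero by construction, echoing the paper's null result
```

The real studies used more elaborate belief scales and payoff schemes, of course; this just shows the shape of the analysis.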

The results can be stated concisely:

  • In 3 of the 4 tests, moral identity was positively associated with generosity and negatively associated with cheating; that is, self-identified “good people” did behave better in the lab tests.
  • But Free Will Beliefs showed no significant correlation with either prosocial or antisocial behavior; in fact, the correlations were negative, with free-willies showing less generosity, although the associations were not statistically significant.
  • In a meta-analysis of the four studies (the combination step is sketched below), moral identity was again positively correlated with prosocial performance in the lab tests, while free-will beliefs were negatively (but nonsignificantly) correlated with “generosity”.
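
For those curious about the mechanics of that last step, here is a minimal sketch of a standard fixed-effect meta-analysis of correlations using Fisher’s z transform. The per-study r values below are invented placeholders, not the paper’s numbers; only the general method is standard.

```python
# Minimal fixed-effect meta-analysis of correlations via Fisher's z.
# The (r, n) pairs are hypothetical placeholders, not the paper's results.
import math

studies = [(-0.05, 197), (-0.02, 294), (-0.04, 230), (-0.03, 200)]

num = den = 0.0
for r, n in studies:
    z = math.atanh(r)  # Fisher z transform of r
    w = n - 3          # inverse-variance weight, since var(z) = 1/(n - 3)
    num += w * z
    den += w

z_pooled = num / den
se = math.sqrt(1 / den)
r_pooled = math.tanh(z_pooled)
ci = (math.tanh(z_pooled - 1.96 * se), math.tanh(z_pooled + 1.96 * se))
print(f"pooled r = {r_pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Weighting each study’s z by n − 3 is just the usual inverse-variance rule for Fisher-z values; the paper’s actual meta-analytic procedure may well have differed.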

The upshot is that this study, which had the power to detect correlations of 0.1, provided no support for the view that belief in free will is associated with better behavior, and belief in determinism with bad or antisocial behavior. That in turn means that there is no credibility to the assertion that belief in some form of free will, either dualistic or compatibilistic, is necessary to keep society well oiled. As the authors conclude:
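
As a rough sanity check on that power claim (my own back-of-the-envelope calculation, not taken from the paper), the standard Fisher-z approximation gives the sample size needed to detect r = 0.1, assuming the conventional 80% power and two-sided α = 0.05:

```python
# Sample size needed to detect r = 0.1 (Fisher-z approximation).
# Assumes 80% power and two-sided alpha = 0.05; not taken from the paper.
import math
from scipy.stats import norm

r, alpha, power = 0.1, 0.05, 0.80
z_r = math.atanh(r)
n_needed = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / z_r) ** 2 + 3
print(f"n needed: {math.ceil(n_needed)}")  # ~783; the combined N of 921 exceeds it
```

So the combined sample was more than large enough to pick up even a small effect, had one existed.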

. . . our findings suggest that the association between FWBs and moral behavior may be greatly overstated, with effects being smaller than previously reported or confined to specific contexts, subpopulations, or behaviors. As a result, we believe that there is good reason to doubt that FWBs have any substantial implications for everyday moral behavior. More research is required before actively discouraging free-will skepticism out of fear of moral degeneration.

I reject free will of both forms on scientific as well as philosophical grounds: the former for dualism and the latter for compatibilism. So even if there were an association in these lab tests between belief in free will and good behavior, I’d still throw in my lot with the data. It’s time to stop claiming that belief in free will, like belief in God, is necessary for societies to function well.

h/t: Diana MacPherson

___________

Crone, D. L., & Levy, N. L. (2018). Are Free Will Believers Nicer People? (Four Studies Suggest Not). Social Psychological and Personality Science. https://doi.org/10.1177/1948550618780732