Should Ph.D.s call themselves “doctor” in everyday life?

December 13, 2020 • 1:00 pm

UPDATE: At the libertarian website Reason, legal scholar Eugene Volokh has a different take, based partly on what he sees as the overly lax and non-scholarly nature of Jill Biden’s Ed.D.

_____________________

This week’s kerfuffle involves a writer at the Wall Street Journal, Joseph Epstein, taking Jill Biden to task for calling herself “Dr. Biden”—and allowing Joe Biden’s campaign to call her that—when her doctorate is in education (she has two master’s degrees as well). In other words, she holds an academic doctorate, not a medical one. In the article below (click on the screenshot, or make a judicious inquiry if you can’t access it), Epstein argues that only medical doctors should call themselves “doctor”, and advises Jill Biden to ditch her title.


I have to say that Epstein’s article, which has been universally attacked for being sexist and misogynistic, is indeed patronizing and condescending (Epstein has an honorary doctorate, but not an “earned” one). I’d be loath to call it sexist on those grounds alone, but the tone of the article, and the words he uses, do seem sexist. Here are two excerpts:

Madame First Lady—Mrs. Biden—Jill—kiddo: a bit of advice on what may seem like a small but I think is a not unimportant matter. Any chance you might drop the “Dr.” before your name? “Dr. Jill Biden” sounds and feels fraudulent, not to say a touch comic. Your degree is, I believe, an Ed.D., a doctor of education, earned at the University of Delaware through a dissertation with the unpromising title “Student Retention at the Community College Level: Meeting Students’ Needs.” A wise man once said that no one should call himself “Dr.” unless he has delivered a child. Think about it, Dr. Jill, and forthwith drop the doc.

As for your Ed.D., Madame First Lady, hard-earned though it may have been, please consider stowing it, at least in public, at least for now. Forget the small thrill of being Dr. Jill, and settle for the larger thrill of living for the next four years in the best public housing in the world as First Lady Jill Biden.

The use of the word “kiddo” and the reference to her as “Dr. Jill” do seem sexist, though of course there’s “Dr. Phil” (Ph.D., clinical psychology) and a whole host of other doctors, including M.D. medical experts on the evening news, who are called by their first name (“Thanks, Dr. Tim”). Those are usually terms of affection, though, while “Dr. Jill” is clearly not meant affectionately. And why the denigration of the title of her thesis? Finally—”kiddo”? Fuggedabout it. The undoubted fact that women’s credentials have historically been impugned would also lead one to see Epstein’s piece as falling into that tradition.

I sure as hell wouldn’t have written that article, and, as somebody suggested in the pile-on, would Epstein have written it about a man? Where’s his critique of “Dr. Phil”?

The fracas is described in a piece by Matt Cannon in Newsweek and the piece below in the New York Times. I haven’t been able to find a single article about Epstein’s op-ed that doesn’t damn it to hell for sexism, and, in fact, although he was a longtime honorary emeritus lecturer at Northwestern, that university criticized his piece (official statement: “Northwestern is firmly committed to equity, diversity and inclusion, and strongly disagrees with Mr. Epstein’s misogynistic views”). His picture has also been removed from Northwestern’s website, showing that he’s toast. Were Epstein at the University of Chicago, my school wouldn’t have made any official statement, as it’s not 100% clear that his piece was motivated by misogyny, much as the article suggests it was.

But that leaves the question “should anyone with a Ph.D. call themselves ‘doctor'”? My answer would be “it’s up to them.”

But I have to say that I have never been able to call myself “Doctor Coyne” except as a humorous remark or in very rare situations that I can’t even remember. I will allow other people to call me “Doctor Coyne,” but as soon as I have a relationship with them, the “Doctor” gets dropped for “Jerry.” My undergraduates would usually call me “Professor Coyne”, or sometimes “Doctor Coyne,” and that was okay, for being on a first-name basis with them effaces the mentor/student relationship that is useful when teaching. But to my grad students I was always “Jerry.”

It is true that I worked as hard as, or even harder than, medical students do to earn the right to be called “Doctor”, taking five years of seven-days-a-week labor to get it, but somehow I don’t feel that I should get a lifetime honorific for that. I got a Ph.D. so I could become a professional evolutionist, not to command respect from people, many of whom might mistakenly think I was a medical doctor. The New York Times quotes Miss Manners here:

Judith Martin, better known as the columnist Miss Manners, said her father, who had a Ph.D. in economics, insisted on not being called Dr. and implored his fiancée, Ms. Martin’s mother, to print new wedding invitations after the first version included the title.

“As my father used to say, ‘I’m not the kind of doctor who does anybody any good,’” Ms. Martin said in an interview on Saturday. “He didn’t feel it was dignified. I am well aware that this is a form of reverse snobbery.”

Still, Ms. Martin said, “I don’t tell people what to call themselves and I’m aware that women often have trouble with people who don’t respect their credentials.”

I’m pretty much on board with both her and her father here, though I’d take issue with saying my refusal to call myself “Doctor Coyne” is reverse snobbery. Rather, it’s part of my lifelong desire not to be seen as better than other people just because I got a fancy education. I remember that when I got my first job at the University of Maryland, I was given an empty lab on the second floor of the Zoology Building. In it was a box containing all the application folders for everyone who had applied for the job I got. After a few days of resisting, I peeked into my own folder to see my letters of recommendation. And I’ll always remember Dick Lewontin’s letter, which, though highly positive, added something like this: “If Jerry has any faults, it is that he is too self-denigrating, always underselling himself.” Well, that may be true, but it’s better to undersell yourself than oversell yourself! I’ve always detested the pomposity of accomplished academics. Other academics think that using “Dr.” on the cover lends cachet to their books (even “trade books”). More power to them, but I could never bring myself to do that.

One other interesting point: the AP Stylebook agrees with Epstein about the use of “Dr.” According to the Newsweek piece:

The AP stylebook, a writing guide used by major U.S. publications including Newsweek, also suggests that the term doctor should not be used by those with academic doctoral degrees.

Its latest edition reads: “Use Dr. in first reference as a formal title before the name of an individual who holds a doctor of dental surgery, doctor of medicine, doctor of optometry, doctor of osteopathic medicine, doctor of podiatric medicine, or doctor of veterinary medicine.”

It adds: “Do not use Dr. before the names of individuals who hold other types of doctoral degrees.”

So you could say Epstein was adhering to that rule, but the tone of his piece is snarky and condescending. The opprobrium he’s earned for it is largely deserved.

I suppose I adhere to the AP dictum on this website, too, as it seems weird to call my colleagues “Dr.”, but less weird to call medical doctors “Dr. X”.

(Epstein also denigrates honorary doctorates, for they’re not markers of scholarly achievement—except at the University of Chicago, which may be the only school in the U.S. that confers honorary degrees only on scholars—never to actors, cartoonists, sports figures, and so on. But I don’t know anybody who calls themselves “Dr.” with only an honorary doctorate.)

So if Jill Biden wants to be called “Dr. Biden,” it’s churlish to refuse—after all, she did earn the right to use it. And it’s a matter of simple civility to address people how they want to be addressed.

I have only one caveat here: nobody—be they medical doctors or Ph.Ds—should ever put “Dr.” before their names on their bank checks. That’s where I draw the line. It looks like a move of pompous one-upsmanship—like you’re trying to lord it over salespeople, cashiers, and bank tellers.

Andrew Sullivan: The genetic underpinnings of IQ mean that we shouldn’t value it so much, that we should ditch the meritocracy, and that we should become more of a communist society

September 12, 2020 • 11:30 am

Andrew Sullivan has devoted a lot of the last two editions of The Weekly Dish to the genetics of intelligence, perhaps because he’s taken a lot of flak for supposedly touting The Bell Curve and the genetic underpinnings of IQ.  Now I haven’t read The Bell Curve, nor the many posts Sullivan’s devoted to the genetics of intelligence (see the long list here), but he’s clearly been on the defensive about his record which, as far as I can see, does emphasize the genetic component to intelligence. But there’s nothing all that wrong with that: a big genetic component of IQ is something that all geneticists save Very Woke Ones accept. But as I haven’t read his posts, I can neither defend nor attack him on his specific conclusions.

I can, however, briefly discuss this week’s post, which is an explication and defense of a new book by Freddie DeBoer, The Cult of Smart. (Note: I haven’t read the book, either, as it’s just out.) You can read Sullivan’s piece by clicking on the screenshot below (I think it’s still free for the time being):

The Amazon summary of the book pretty much mirrors what Sullivan says about it:

. . . no one acknowledges a scientifically-proven fact that we all understand intuitively: academic potential varies between individuals, and cannot be dramatically improved. In The Cult of Smart, educator and outspoken leftist Fredrik deBoer exposes this omission as the central flaw of our entire society, which has created and perpetuated an unjust class structure based on intellectual ability.

Since cognitive talent varies from person to person, our education system can never create equal opportunity for all. Instead, it teaches our children that hierarchy and competition are natural, and that human value should be based on intelligence. These ideas are counter to everything that the left believes, but until they acknowledge the existence of individual cognitive differences, progressives remain complicit in keeping the status quo in place.

There are several points to “unpack” here, as the PoMos say. Here is what Sullivan takes from the book, and appears to agree with:

1.) Intelligence is largely genetic.

2.) Because of that, intellectual abilities “cannot be dramatically improved”.

3.) Because high intelligence is rewarded in American society, people who are smarter are better off, yet they don’t deserve to be because, after all, they are simply the winners in a random Mendelian lottery of genes fostering high IQ (I will take IQ as the relevant measure of intelligence, which it seems to be for most people, including Sullivan).

4.) The meritocracy is thus unfair, and we need to fix it.

5.) We can do that by adopting a version of communism, whereby those who benefit from the genetic lottery get taxed at a very high rate, redistributing the wealth that accrues to them from their smarts. According to DeBoer via Sullivan,

For DeBoer, that means ending meritocracy — for “what could be crueler than an actual meritocracy, a meritocracy fulfilled?” It means a revolutionary transformation in which there are no social or cultural rewards for higher intelligence, no higher after-tax income for the brainy, and in which education, with looser standards, is provided for everyone on demand — for the sake of nothing but itself. DeBoer believes the smart will do fine under any system, and don’t need to be incentivized — and their disproportionate gains in our increasingly knowledge-based economy can simply be redistributed to everyone else. In fact, the transformation in the economic rewards of intelligence — they keep increasing at an alarming rate as we leave physical labor behind — is not just not a problem, it is, in fact, what will make human happiness finally possible.

If early 20th Century Russia was insufficiently developed for communism, in other words, America today is ideal. . .

Sullivan adds that the moral worth of smart people is no higher than that of people like supermarket cashiers, trash collectors, or nurses. (I agree, but I’m not sure that smart people are really seen as being more morally worthy. They are seen as being more deserving of financial rewards.)

6.) Sullivan says that his own admitted high intelligence hasn’t been that good for him, and he doesn’t see it as a virtue:

For me, intelligence is a curse as well as a blessing — and it has as much salience to my own sense of moral worth as my blood-type. In many ways, I revere those with less of it, whose different skills — practical, human, imaginative — make the world every day a tangibly better place for others, where mine do not. Being smart doesn’t make you happy; it can inhibit your sociability; it can cut you off from others; it can generate a lifetime of insecurity; it is correlated with mood disorders and anxiety. And yet the system we live in was almost designed for someone like me.

This smacks a bit of humblebragging, but I’ll take it at face value. It’s still quite odd, though, to see a centrist like Sullivan, once a conservative, come out in favor of communism and radical redistribution of wealth. So be it. But do his arguments make sense?

Now Sullivan’s emphasis on the genetic basis of intelligence is clearly part of his attack on the extreme Left, which dismisses hereditarianism because it’s said (falsely) to imply that differences between groups, like blacks and whites, are based on genetic differences. Hereditarianism is also said (falsely) to imply that traits like intellectual achievement cannot be affected by the environment or by environmental intervention (like learning). Here Andrew is right: Blank-Slateism is the philosophy of the extreme Left, and it’s misguided in several ways. Read Pinker’s book The Blank Slate if you want a long and cogent argument about the importance of genetics.

But there are some flaws, or potential flaws, in Sullivan’s argument, which I take to be points 1-5 above.

First, intelligence is largely genetic, but not completely genetic. There is no way for a given person to determine what proportion of their IQ is attributable to genes and how much to environment or to the interaction between the two: that question doesn’t even make sense. But what we can estimate is the proportion of variation of IQ among people in a population that is due to variation in their genes. This figure is known as the heritability of IQ, and can be calculated (if you have the right data) for any trait. Heritability ranges from 0 (all variation we see in the trait is environmental, with no component due to genetics) to 1 (or 100%), with all the observed variation in the trait being due to variation in genes. (Eye color is largely at this end of the scale.)

A reasonable value for the heritability of IQ in a white population is around 0.6, so about 60% of the variation we see in that population is due to variation in genes, and the other 40% to different environments experienced by different people as well as to the differential interaction between their genes and their environments. That means, first of all, that an appreciable proportion of variation in intelligence is due to variations in people’s environments. And that means that while the IQ of a person doesn’t change much over time, if you let people develop in different environments you can change their IQ in different ways—up or down. IQ is not something that is unaffected by the environment.
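To make the variance-partitioning logic concrete, here is a minimal sketch in Python. It is my own toy illustration with invented numbers—it is not from Sullivan or DeBoer, and it ignores gene-environment interaction—but it shows what a heritability of 0.6 means: the fraction of a trait’s total variance, across a population, that is attributable to genetic variance.

# Toy illustration of heritability as a ratio of variances.
# The numbers are invented for illustration; they are not real IQ data.
import random

random.seed(1)

N = 100_000
genetic = [random.gauss(0, 11.6) for _ in range(N)]       # genetic contributions
environment = [random.gauss(0, 9.5) for _ in range(N)]    # environmental contributions
iq = [100 + g + e for g, e in zip(genetic, environment)]  # phenotype = mean + genes + environment

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

h2 = variance(genetic) / variance(iq)  # heritability = genetic variance / total variance
print(round(h2, 2))                    # prints roughly 0.6 with these numbers

Note that this ratio describes variation across a population; it says nothing about how much of any one person’s IQ comes from genes, which is why that question doesn’t make sense.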

Related to that is the idea that a person’s IQ is not fixed at birth by their genes, but can be changed by rearing them in different environments, so it’s not really valid to conclude (at least from the summary above) that “academic potential cannot be dramatically improved”. Indeed, Sullivan’s summary of DeBoer’s thesis is that the difference in IQ between blacks and whites (an average of 15 points, or one standard deviation) is not due to genes, but to different environments faced by blacks and whites:

DeBoer doesn’t explain it as a factor of class — he notes the IQ racial gap persists even when removing socio-economic status from the equation. Nor does he ascribe it to differences in family structure — because parenting is not that important. He cites rather exposure to lead, greater disciplinary punishment for black kids, the higher likelihood of being arrested, the stress of living in a crime-dominated environment, the deep and deadening psychological toll of pervasive racism, and so on: “white supremacy touches on so many aspects of American life that it’s irresponsible to believe we have adequately controlled for it in our investigations of the racial achievement gap.”

Every factor cited here is an environmental factor, not a genetic one. And if those factors can add up to lowering your IQ by 15 points, on what basis does DeBoer conclude (with Sullivan, I think) that you cannot improve IQ or academic performance by environmental intervention? Fifteen points is indeed a “dramatic improvement”, which, according to DeBoer, we’d get by simply letting black kids grow up in the environment of white people. (I note here that I don’t know how much, if any, of that 15-point difference reflects genetic versus environmental differences; what I’m doing is simply noting that even DeBoer allows that you can change IQ a lot by changing environments.)

Further, what you do with your intelligence can be further affected by the environment. If you’re lazy, and don’t want to apply yourself, a big IQ isn’t necessarily going to make you successful in society. So there is room for further improvement of people by proper education and instilling people with motivation. This doesn’t mean that IQ isn’t important as a correlate of “success” (however it’s measured) in American society—just that environmental factors, including education and upbringing, are also quite important.

What about genetic determinism and the meritocracy? It’s likely that many other factors that lead to success in the U.S. have a high heritability as well. Musical ability may be one of these, and therefore those who get rich not because they have high IQs, but because they can make good music that sells, also have an “unfair advantage”. What about good looks? Facial characteristics are highly heritable, and insofar as good looks can give you a leg up as a model or an actor, that too is an unfair genetic win. (I think there are data showing that better-looking people are on average more successful.) In fact, since nobody is “responsible” for either their genes or their environments, as a determinist I think that nobody really “deserves” what they get, since nobody chooses to be successful or a failure. Society simply rewards people who have certain traits, and punishes those who have other traits. With that I don’t have much quarrel, except about the traits that are deemed reward-worthy (viz., the Kardashians).

This means, if you take Sullivan and DeBoer seriously, that we must eliminate the meritocracy not just for intelligence but for everything else: musical ability, good looks, athletic ability, and so on. In other words, everybody who is successful should be taxed to the extent that, after redistribution, everyone in society gets the same amount of money and the same goods. (It’s not clear from Sullivan’s piece to what extent things should be equalized, but if you’re a determinist and buy his argument, everyone should be on the same level playing field.)

After all, if “the smart don’t need to be incentivized”, why does anybody? The answer, of course, is that the smart do need to be incentivized, as does everyone else. The failure of purely communist societies to achieve parity with capitalistic ones already shows that. (I’m not pushing here for pure capitalism: I like a capitalistic/socialistic hybrid, as in Scandinavia.)  And I wonder how much of Sullivan’s $500,000 income he’d be willing to redistribute.

If you think I’m exaggerating Sullivan’s approbation of communism, at least in theory, here’s how he ends his piece, referring to his uneducated grandmother who cleaned houses for a living.

My big brain, I realized, was as much an impediment to living well as it was an advantage. It was a bane and a blessing. It simply never occurred to me that higher intelligence was in any way connected to moral worth or happiness.

In fact, I saw the opposite. I still do. I don’t believe that a communist revolution will bring forward the day when someone like my grandmother could be valued in society and rewarded as deeply as she should have been. But I believe a moral revolution in this materialist, competitive, emptying rat-race of smarts is long overdue. It could come from the left or the right. Or it could come from a spiritual and religious revival. Either way, Freddie DeBoer and this little book are part of the solution to the unfairness and cruelty of it all. If, of course, there is one.

Let’s forget about the “spiritual and religious revival” (I wrote about that before), and realize that what we have here is a call for material equality, even if people aren’t morally valued as the same. And why should we empty the rat-race just of smarts? Why not empty it of everything that brings differential rewards, like writing a well-remunerated blog? In the end, Sullivan’s dislike of extreme leftism and its blank-slate ideology has, ironically, driven him to propose a society very like communism.

Are people becoming more talkative during the pandemic?

July 28, 2020 • 8:15 am

I’ve noticed in the last couple of months that people I talk to, either over the phone or in person, seem to have become much more loquacious, to the point where it seems that 90% or more of the conversational airtime is taken up by one person’s words. (To be sure, I’m often laconic.) Now I haven’t quantified this, though I could do so, at least over the phone with a stopwatch. But subjectively, it seems to me a real temporal change.

The first thing to determine is whether the subjective change is an objective change. To determine that, I would have to have timed participation in conversations over the last year or so, and compared the conversational “pie” before and after lockdown. And I don’t have that data. 

In the absence of hard data, it’s possible that I’ve simply become more peevish and impatient, so that it only seems that people are monopolizing conversations more. And indeed, I think I have become more peevish, though I think many people have changed in this way as well.

But let’s assume it’s real: that the proportion of conversational time in a two-person chat has become more unequal since March.  If that’s the case, why?

The only explanation I can imagine is that people who are more socially isolated have become more eager to talk, and that’s manifested in a higher degree of conversational dominance. Of course if two such chatty people meet, it could be a festival of interruptions and “talking over,” but I tend to become monosyllabic, and this is exacerbated when I am peevish.  My philosophy has always been that in a conversation, you learn nothing by talking but only by listening.

At any rate, am I imagining this or have others noticed it?

A world survey: Do we need God to be moral?

July 24, 2020 • 8:45 am

A new study by the Pew organization (click on the screenshot below or get the full pdf here) surveyed 38,436 people in 34 countries across the globe, asking them how important God or religion is to them and—today’s topic—whether you really need God to be moral. The methods included both face-to-face and phone surveys.

The overall results aren’t that surprising: more religious countries and more religious people within countries think that “belief in God is necessary to be moral and have good values”, while richer countries (which are also less religious) tend to harbor respondents who don’t think faith is necessary for morality. And the proportion of those who see God as important in this respect is waning over time in most of Western Europe, though growing in Russia, Bulgaria, Japan, and Ukraine.

The overall results show a pretty even division across the globe, though religion plays an important role in most people’s lives. But these results aren’t that informative given the observed variation across countries (see below):

Below is a plot showing the variation across the surveyed countries. Look at the first two lines showing a substantial difference between the U.S. and the more secular Canada.

Overall, I would have thought that even religious people wouldn’t assert that you need God to be moral, mainly because there’s so much evidence that nonbelievers are moral. In fact, the most secular countries in the world—those in Scandinavia—could be construed as being more moral than many of the more religious countries, like Islamic countries of the Middle East. Further, the Euthyphro argument, which shows that our sense of morality must be prior to belief in God (unless you believe in Divine Command theory), disposes of the we-need-God-to-be-moral claim. But of course few people have thought the issue through that far.

Muslim and Catholic (or devout Christian) countries show the strongest belief in God as a necessity for morality. Figures of 90% or above are seen in the Philippines, Indonesia, Kenya, and Nigeria.

Three more plots. The first one shows the familiar pattern of richer countries adhering less to religious dicta than poorer ones. In this case there are multiple confounding factors, for “belief in God is important for morality” is surely itself highly correlated with simple “belief in God.” The relationship here is very strong. My own view is that of Marx: countries where you are in bad shape and can’t get help from the government tend to be those where people find hope and solace in religion.

This is also true within countries: there’s a consistent pattern in the surveyed nations of people with higher income being less likely to see God as necessary for morality (and of course the higher-income people are less likely to be religious in general).

As expected, people with more education tend to connect morality with God to a lesser extent. Again, this is probably because of a negative relationship between education and religiosity:

In the comments below, reader Eric said I may have “buried the lede” by neglecting the rather large drop, between 2002 and 2019, in the proportion of Americans who think God is necessary for morality. This is part of the increasing secularization of the U.S.:

 

Finally, there’s a plot showing the variation among countries on the general importance of religion. Western Europe, Australia, South Korea, and Japan lead the pack for secularism, while Catholic, Muslim, and African Christian countries are those seeing religion as more important. That’s no surprise:

In truth, the failure of nearly half the world’s people to see that atheists can be moral, which should dispose of the “God-is-necessary” hypothesis, is depressing. But one could argue that for many religious people, “morality” consists largely of religious dictates: what you eat, who you sleep with and how, how you feel about gays and women, and so on. So, for example, Catholics and Muslims might see the free-loving and egalitarian Scandinavians as immoral.

The Purity Posse pursues Pinker

July 5, 2020 • 12:30 pm

The Woke are after Pinker again, and if he’s called a racist and misogynist, as he is in this latest attempt to demonize him, then nobody is safe. After all, Pinker is a liberal Democrat who’s donated a lot of dosh to the Democratic Party, and relentlessly preaches a message of moral, material, and “well-being” progress that’s been attained through reason and adherence to Enlightenment values. But that sermon alone is enough to render him an Unperson, for the Woke prize narrative and “lived experience” over data, denigrate reason, and absolutely despise the Enlightenment.

The link to the document in question, “Open Letter to the Linguistic Society of America,” was tweeted yesterday by Pinker’s fellow linguist John McWhorter, who clearly dislikes the letter. And, indeed, the letter is worthy of Stalinism in its distortion of the facts in trying to damage the career of an opponent. At least they don’t call for Pinker to be shot in the cellars of the Lubyanka!

After I read the letter and decided to respond to it, I contacted Steve, asking him questions, and he gave me permission to quote some of his answers, which were sent in an email. (Steve, by the way, has never asked me to defend him; I do so in this case because of the mendacity of the letter.)

The letter, on Google Documents, is accumulating signatories—up to 432 the last time I looked. You can access it in McWhorter’s tweet above, or by clicking on the letter’s first paragraph below:

Many of the signatories are grad students and undergrads, members of the Linguistic Society of America (LSA), which may explain why the vast majority of the criticism leveled at Pinker comes from his social media, all of it tweets from Twitter. The letter shows no familiarity with Pinker’s work, and takes statements out of context in ways that, with the merest checking, are seen to be duplicitous. In the end, the authors confect a mess of links that, the signatories say, indict Pinker of racism, misogyny, and blindness to questions of social justice. As the authors say:

Though no doubt related, we set aside questions of Dr. Pinker’s tendency to move in the proximity of what The Guardian called a revival of “scientific racism”, his public support for David Brooks (who has been argued to be a proponent of “gender essentialism”), his expert testimonial in favor of Jeffrey Epstein (which Dr. Pinker now regrets), or his dubious past stances on rape and feminism. Nor are we concerned with Dr. Pinker’s academic contributions as a linguist, psychologist and cognitive scientist. Instead, we aim to show here Dr. Pinker as a public figure has a pattern of drowning out the voices of people suffering from racist and sexist violence, in particular in the immediate aftermath of violent acts and/or protests against the systems that created them.

In truth, Pinker as a public figure is hard to distinguish from Pinker the academic, for in both academia and in public he conveys the same message, one of progress (albeit with setbacks) and material and moral improvement, always using data to support this upward-bending arc of morality. And in both spheres he emphasizes the importance of secularism and reason as the best—indeed, the only—way to attain this progress. After indicting Pinker based on five tweets and a single word in one of his books, the signatories call for him to be stripped of his honors as a distinguished LSA Fellow and as one of the LSA’s media experts.

So what is the evidence that Pinker is a miscreant and a racist? I’ll go through the six accusations and try not to be tedious.

The first is about blacks being shot disproportionately to their numbers in the population, which, as I’ve written about recently, happens to be true. Emphases in the numbered bits are mine:

1.) In 2015, Dr. Pinker tweeted “Police don’t shoot blacks disproportionately”, linking to a New York Times article by Sendhil Mullainathan.


Let the record show that Dr. Pinker draws this conclusion from an article that contains the following quote: “The data is unequivocal. Police killings are a race problem: African-Americans are being killed disproportionately and by a wide margin.” (original emphasis) We believe this shows that Dr. Pinker is willing to make dishonest claims in order to obfuscate the role of systemic racism in police violence.

Actually, Pinker’s tweet was an accurate summary of the article. Have a look at the quote in its entirety, reading on after the first extracted sentence.

The data is unequivocal. Police killings are a race problem: African-Americans are being killed disproportionately and by a wide margin. And police bias may be responsible. But this data does not prove that biased police officers are more likely to shoot blacks in any given encounter.

Instead, there is another possibility: It is simply that — for reasons that may well include police bias — African-Americans have a very large number of encounters with police officers. Every police encounter contains a risk: The officer might be poorly trained, might act with malice or simply make a mistake, and civilians might do something that is perceived as a threat. The omnipresence of guns exaggerates all these risks.

Such risks exist for people of any race — after all, many people killed by police officers were not black. But having more encounters with police officers, even with officers entirely free of racial bias, can create a greater risk of a fatal shooting.

Arrest data lets us measure this possibility. For the entire country, 28.9 percent of arrestees were African-American. This number is not very different from the 31.8 percent of police-shooting victims who were African-Americans. If police discrimination were a big factor in the actual killings, we would have expected a larger gap between the arrest rate and the police-killing rate.

This in turn suggests that removing police racial bias will have little effect on the killing rate. Suppose each arrest creates an equal risk of shooting for both African-Americans and whites. In that case, with the current arrest rate, 28.9 percent of all those killed by police officers would still be African-American. This is only slightly smaller than the 31.8 percent of killings we actually see, and it is much greater than the 13.2 percent level of African-Americans in the overall population.

The signatories, not Pinker, stand guilty of dishonest quote-mining. I would argue that the cherry-picking here is intellectually dishonest—and deliberate.

2.) In 2017, when nearly 1000 people died at the hands of the police, the issue of anti-black police violence in particular was again widely discussed in the media. Dr. Pinker moved to dismiss the genuine concerns about the disproportionate killings of Black people at the hands of law enforcement by employing an “all lives matter” trope (we refer to Degen, Leigh, Waldon & Mengesha 2020 for a linguistic explanation of the trope’s harmful effects) that is eerily reminiscent of a “both-sides” rhetoric, all while explicitly claiming that a focus on race is a distraction. Once again, this clearly demonstrates Dr. Pinker’s willingness to dismiss and downplay racist violence, regardless of any evidence.

In light of the recent police killings of blacks, I’m pretty sure that this tweet would look worse today than it did in 2017. But the article Pinker is referring to is about general improvements in police departments, not ways to make cops less racist. It does note that there’s racism in police killings, but says that the fix, as Pinker notes, comes from general improvements in policing (along the lines of general improvements in airline safety), not by focusing on racism itself:

Police violence is tangled up with racism and systemic injustice. We desperately need to do more to address that, foremost by shoring up the criminal-justice system so that it holds police officers accountable when they kill. But it’s also true that deadly mistakes are going to happen when police officers engage in millions of potentially dangerous procedures a year. What aviation teaches us is that it should be possible to “accident proof” police work, if only we are willing to admit when mistakes are made.

. . . The routine traffic stop, like the one that killed Mr. Bell’s son, is especially in need of redesign because it contains so many potential failure points that cause confusion and violence. In the computer science department at the University of Florida, a team of students — all African-American women — have developed a technology that they hope might make these encounters far safer.

. . .How can we fix this system that puts civilians and the police officers who stop them at risk? The obvious solution is to take the officers — and their guns — out of the picture whenever possible.

The technology developed by the African-American women has nothing to do with race, but limns general principles that should be followed in all traffic stops. Now I doubt Steve would, given the recent events and protests, post the same tweet today, but his summary of the article is not at all an “all lives matter” trope. Remember, there’s still no good evidence that the killing of black men by police reflects “systemic racism” in police departments (that needs to be investigated), but in the meantime perhaps some general tactical changes should be considered as well.

I asked Steve to respond to the claim that this is an “all lives matter trope.” Here’s what he emailed back (quoted with permission):

Linguists, of all people, should understand the difference between a trope or collocation, such as the slogan “All lives matter,” and the proposition that all lives matter. (Is someone prepared to argue that some lives don’t matter?) And linguists, of all people, should understand the difference between a turn in the context of a conversational exchange and a sentence that expresses an idea. It’s true that if someone were to retort “All lives matter” in direct response to “Black lives matter,” they’d be making a statement that downplays the racism and other harms suffered by African Americans. But that is different from asking questions about whom police kill, being open to evidence on the answer, and seeking to reduce the number of innocent people killed by the police of all races. The fact is that Mullainathan and four other research reports have found the same thing: while there’s strong evidence that African Americans are disproportionately harassed, frisked, and manhandled by the police (so racism among the police is a genuine problem), there’s no evidence that they are killed more, holding rates of dangerous encounters constant. (References below.) As Mullainathan notes, this doesn’t downplay racism, but it pinpoints its effects: in drug laws, poverty, housing segregation, and other contributors to being in dangerous situations, but not in the behavior of police in lethal encounters. And it has implications for how to reduce police killings, which is what we should all care about: it explains the finding that race-specific [measures] like training police in implicit bias and hiring more minority police have no effect, while across-the-board measures such as de-escalation training, demilitarization, changing police culture, and increasing accountability do have an effect.

Fryer, R. G. (2016). An Empirical Analysis of Racial Differences in Police Use of Force. National Bureau of Economic Research Working Papers (22099), 1-63.

Fryer, R. G. (forthcoming). Reconciling Results on Racial Differences in Police Shootings. American Economic Review (Papers and Proceedings).

Goff, P. A., Lloyd, T., Geller, A., Raphael, S., & Glaser, J. (2016). The science of justice: Race, arrests, and police use of force. Los Angeles: Center for Policing Equity, UCLA, Table 7.

Johnson, D. J., Tress, T., Burkel, N., Taylor, C., & Cesario, J. (2019). Officer characteristics and racial disparities in fatal officer-involved shootings. Proceedings of the National Academy of Sciences, 116(32), 15877-15882. doi:10.1073/pnas.1903856116

Johnson, D. J., & Cesario, J. (2020). Reply to Knox and Mummolo and Schimmack and Carlsson: Controlling for crime and population rates. Proceedings of the National Academy of Sciences, 117(3), 1264-1265. doi:10.1073/pnas.1920184117

Miller, T. R., Lawrence, B. A., Carlson, N. N., Hendrie, D., Randall, S., Rockett, I. R. H., & Spicer, R. S. (2016). Perils of police action: a cautionary tale from US data sets. Injury Prevention. doi:10.1136/injuryprev-2016-042023

Of course the signatories credit themselves with the ultrasonic ability to discern “dog whistles” in arguments that displease them, a license to throw standards of accurate citation out the window and accuse anyone of saying anything. 

Back to the letter:

3.) Pinker (2011:107) provides another example of Dr. Pinker downplaying actual violence in a casual manner: “[I]n 1984, Bernhard Goetz, a mild-mannered engineer, became a folk hero for shooting four young muggers in a New York subway car.”—Bernhard Goetz shot four Black teenagers for saying “Give me five dollars.” (whether it was an attempted mugging is disputed). Goetz, Pinker’s mild-mannered engineer, described the situation after the first four shots as follows: “I immediately looked at the first two to make sure they were ‘taken care of,’ and then attempted to shoot Cabey again in the stomach, but the gun was empty.” 18 months prior, the same “mild-mannered engineer” had said “The only way we’re going to clean up this street is to get rid of the sp*cs and n*****s”, according to his neighbor. Once again, the language Dr. Pinker employs in calling this person “mild-mannered” illustrates his tendency to downplay very real violence.

After I’d read Accusation #1 and this one, and saw the way the letter was distorting what Pinker said, I decided to write Steve and say that I was going to write something about the letter. I began by asking for the whole Goetz passage from The Better Angels of Our Nature (which you can see at the letter’s link) so I could embed it here. Steve sent it, along with these words:

The Goetz description was, of course, just a way to convey the atmosphere of New York in the high-crime 70s and 80s for those who didn’t live through it — just as the atmosphere was later depicted in The Joker. To depict this as sympathetic to a vigilante shooter is one of the many post-truth ascriptions in the piece.

Here’s the entire passage from Better Angels:

The flood of violence from the 1960s through the 1980s reshaped American culture, the political scene, and everyday life. Mugger jokes became a staple of comedians, with mentions of Central Park getting an instant laugh as a well-known death trap. New Yorkers imprisoned themselves in their apartments with batteries of latches and deadbolts, including the popular “police lock,” a steel bar with one end anchored in the floor and the other propped up against the door. The section of downtown Boston not far from where I now live was called the Combat Zone because of its endemic muggings and stabbings. Urbanites quit other American cities in droves, leaving burned-out cores surrounded by rings of suburbs, exurbs, and gated communities. Books, movies and television series used intractable urban violence as their backdrop, including Little Murders, Taxi Driver, The Warriors, Escape from New York, Fort Apache the Bronx, Hill Street Blues, and Bonfire of the Vanities. Women enrolled in self-defense courses to learn how to walk with a defiant gait, to use their keys, pencils, and spike heels as weapons, and to execute karate chops or jujitsu throws to overpower an attacker, role-played by a volunteer in a Michelin-man-tire suit. Red-bereted Guardian Angels patrolled the parks and the mass transit system, and in 1984 Bernhard Goetz, a mild-mannered engineer, became a folk hero for shooting four young muggers in a New York subway car. A fear of crime helped elect decades of conservative politicians, including Richard Nixon in 1968 with his “Law and Order” platform (overshadowing the Vietnam War as a campaign issue); George H. W. Bush in 1988 with his insinuation that Michael Dukakis, as governor of Massachusetts, had approved a prison furlough program that had released a rapist; and many senators and congressmen who promised to “get tough on crime.” Though the popular reaction was overblown—far more people are killed every year in car accidents than in homicides, especially among those who don’t get into arguments with young men in bars—the sense that violent crime had multiplied was not a figment of their imaginations.

Now if you think that this passage excuses Bernie Goetz for the shooting, and does so by using “mild-mannered” as an adjective, I feel sorry for you. Pinker’s doing here what he said he was doing: depicting the anti-crime atmosphere present at that time in New York City. Only someone desperately looking for reasons to be offended would glom onto this as evidence of racism. In fact, in 1985 the Washington Post called Goetz “the unassuming, apparently mild-mannered passenger who struck with force”. You can find the same adjective in other places. Complaint dismissed.

4.)  In 2014, a student murdered six women at UC Santa Barbara after posting a video online that detailed his misogynistic reasons. Ignoring the perpetrator’s own hate speech, Dr. Pinker called the idea that such a murder could be part of a sexist pattern “statistically obtuse”, once again undermining those who stand up against violence while downplaying the actual murder of six women as well as systems of mysogyny.

Here’s the “incriminating” tweet:

First, a correction: the 2014 Isla Vista killings by Elliot Rodger involved four male victims and two female victims, not six women. But that aside, Rodger did leave a misogynistic manifesto and a YouTube video clearly saying that he wanted to exact revenge on women, whom he hated for rejecting him.

I couldn’t find the link behind “statistically obtuse,” and asked Steve about it; he didn’t remember it either. But his point was clearly not to say that these murders weren’t motivated by hatred of women, but to question whether they were part of a general pattern of hatred of women. That’s a different issue. I’ll quote Steve again, with his permission:

I don’t remember what it initially pointed to, but I’ve often argued that reading social trends into rampage shootings and suicide terrorists is statistically obtuse and politically harmful. It’s obtuse because vastly more people are killed in day-to-day homicides, to say nothing of accidents; news watchers who think they are common are victims of the Availability Bias, mistaking saturation media coverage of horrific isolated events for major social trends. Every victim of a murder is an unspeakable tragedy, but in trying to reduce violence, we should focus foremost on the phenomena that harm people in the largest numbers.

It’s possible — I don’t remember — that I mentioned data showing that uxoricide (the killing of women by husbands and romantic partners) has been in decline.

Focusing on rampage shooters and suicide terrorists is harmful because it gives these embittered losers exactly what they are seeking—notoriety and political importance—thereby incentivizing more of them. Also, the overreactions to these two smaller kinds of violence can have dangerous side effects, from traumatizing schoolchildren with pointless active shooter drills, to the invasions of Afghanistan and Iraq.

The legal scholar Adam Lankford is the one who’s written most compellingly about the drive of rampage shooters to “make a difference,” if only posthumously — a good reason not to grant undue importance to their vile final acts.

Again, Pinker’s attempt to make a general point is parsed for wording (do they even know what “statistically obtuse” means?) to argue that Steve is a misogynist. Steve added, “The difference between understanding the world through media-driven events versus data-based trends is of course very much my thing.”

5.)  On June 3rd 2020, during historic Black Lives Matter protests in response to violent racist killings by police of George Floyd, Breonna Taylor, and many many others, Dr. Pinker chose to publicly co-opt the academic work of a Black social scientist to further his deflationary agenda. He misrepresents the work of that scholar, who himself mainly expressed the hope he felt that the protests might spark genuine change, in keeping with his belief in the ultimate goodness of humanity. A day after, the LSA commented on its public twitter account that it “stands with our Black community”. Please see the public post by linguist Dr. Maria Esipova for a more explicit discussion of this particular incident.

First, “co-opting” is a loaded word for the simple act of citation, both in Pinker’s books and in his tweet below, a citation that shows a decline in racist attitudes among white people over time. The data involve answers to questions—not actions like murders—but attitudes must surely be seen as manifestations of “racism”.

The incriminating tweet:

As for Bobo’s article in the Harvard Gazette, yes, there is cautious optimism, but there’s also despair.

Bobo:

On the one hand, I am greatly heartened by the level of mobilization and civil protests. That it has touched so many people and brought out so many tens of thousands of individuals to express their concern, their outrage, their condemnation of the police actions in this case and their demand for change and for justice, I find all that greatly encouraging. It is, at the same moment, very disappointing that some folks have taken this as an opportunity to try to bring chaos and violence to these occasions of otherwise high-minded civil protest. And I’m disappointed by those occasions where in law enforcement, individuals and agencies, have acted in ways that have provoked or antagonized otherwise peaceful protest actions.

It’s a complex and fraught moment that we’re in. And one of the most profoundly disappointing aspects of the current context is the lack of wise and sensible voices and leadership on the national stage to set the right tone, to heal the nation, and to reassure us all that we’re going to be on a path to a better, more just society.

. . .We had all thought, of course, that we made phenomenal strides. We inhabit an era in which there are certainly more rank-and-file minority police officers than ever before, more African American and minority and female police chiefs and leaders. But inhabiting a world where the poor and our deeply poor communities are still heavily disproportionately people of color, where we had a war on drugs that was racially biased in both its origins and its profoundly troubling execution over many years, that has bred a level of distrust and antagonism between police and black communities that should worry us all. There’s clearly an enormous amount of work to be done to undo those circumstances and to heal those wounds.

And if the following isn’t a statement by Bobo that justifies Pinker’s characterization above, I don’t know what is, for while indicting Trumpism for fomenting racism, Bobo does indeed say he is “guardedly optimistic”, even using the phrase “higher angels of our nature”. (My emphasis.)

The last three years have brought one moment of shock and awe after the other, as acts on a national and international stage from our leadership that one would have thought unimaginable play out each and every day under a blanket of security provided by a U.S. Senate that appears to have lost all sense of spine and justice and decency. I don’t know where this is. I think we’re in a deeply troubling moment. But I am going to remain guardedly optimistic that hopefully, in the not-too-distant future, the higher angels of our nature win out in what is a really frightening coalescence of circumstances.

Finally, Steve went into more detail about that tweet:

The intro to the tweet was context: introducing Larry Bobo and my connection to his research. It was followed by the transition “Here he ….”, so there was no implication that this interview was specifically about that research. Still, I’d argue that it’s hardly a coincidence that a social scientist who has documented racial progress in the past (including in a 2009 article entitled “A change has come: Race, politics, and the path to the Obama presidency”) would express guarded optimism that it can continue. After all, if 65 years of the civil rights movement had yielded no improvements in race relations, why should we bother continuing the fight? 

Now, one can legitimately ask (as Bobo does) whether responses to the General Social Survey are honest or are biased by social desirability. I address this in Enlightenment Now by looking for signs of implicit racism in Google search data (it’s declined), and more recently, have cited new data from my colleagues Tessa Charlesworth and Mahzarin Banaji (in Psychological Science last year) that implicit racial bias as measured by Banaji’s Implicit Association Test has declined as well. 

I’ve become used to incomprehension and outrage over data on signs of progress. People mentally auto-correct the claim that something bad has declined with the claim that it has disappeared. And they misinterpret evidence for progress as downplaying the importance of activism. But of course progress in the past had to have had a cause, and often it was the work of past activists that pushed the curves down — all the more reason to continue it today.

I also asked Steve for the references to Bobo’s research showing “the decline of overt racism in the U.S.” Here they are:

Bobo, L. D. 2001. Racial attitudes and relations at the close of the twentieth century. In N. J. Smelser, W. J. Wilson, & F. Mitchell, eds., America becoming: Racial trends and their consequences. Washington, D.C.: National Academies Press.

Bobo, L. D., & Dawson, M. C. 2009. A change has come: Race, politics, and the path to the Obama presidency. Du Bois Review, 6, 1–14.

Schuman, H., Steeh, C., & Bobo, L. D. 1997. Racial attitudes in America: Trends and interpretations. Cambridge, Mass.: Harvard University Press.

Finally, the last indictment:

6.) On June 14th 2020, Dr. Pinker uses the dogwhistle “urban crime/violence” in two public tweets (neither of his sources used the term). A dogwhistle is a deniable speech act “that sends one message to an outgroup while at the same time sending a second (often taboo, controversial, or inflammatory) message to an ingroup”, according to recent and notable semantic/pragmatic work by linguistic researchers Robert Henderson & Elin McCready [1,2,3]. “Urban”, as a dogwhistle, signals covert and, crucially, deniable support of views that essentialize Black people as lesser-than, and, often, as criminals. Its parallel “inner-city”, is in fact one of the prototypical examples used as an illustration of the phenomenon by Henderson & McCready in several of the linked works. 

The two tweets at issue:

Umm . . . both Patrick Sharkey at Princeton and Rod Brunson at Northeastern University are indeed experts in urban crime, and have taught and written extensively about it. If there’s a “dogwhistle” here, blame Brunson and Sharkey, not Pinker. But there is no dogwhistle save the use of that phrase by the Woke to provoke cries of racism from their peers.

In the end, we have an indictment based on five tweets and the phrase “mild-mannered” in one of Pinker’s books, all of which distort or mischaracterize what Pinker was saying. That five social-media tweets and one word can lead to such a severe indictment (see below) is a sign of how far the termites have dined. I’m really steamed when a group of misguided zealots tries to damage someone’s career, and does so dishonestly.

The end of this pathetic letter:

We want to note here that we have no desire to judge Dr. Pinker’s actions in moral terms [JAC: oh for chrissake, of course they do!], or claim to know what his aims are. Nor do we seek to “cancel” Dr. Pinker, or to bar him from participating in the linguistics and LSA communities (though many of our signatories may well believe that doing so would be the right course of action). We do, however, believe that the examples introduced above establish that Dr. Pinker’s public actions constitute a pattern of downplaying the very real violence of systemic racism and sexism, and, moreover, a pattern that is not above deceitfulness, misrepresentation, or the employment of dogwhistles. In light of the fact that Dr. Pinker is read widely beyond the linguistics community, this behavior is particularly harmful, not merely for the perception of linguistics by the general public, but for movements against the systems of racism and sexism, and for linguists affected by these violent systems.

The people who are deceitful and who misrepresent the facts are the signatories of this screed, not Pinker.  File this letter in the circular file. I hope that the LSA doesn’t take it seriously, but if they do, the organization should be mocked and derided.

h/t: Many people sent me this letter; thanks to all.

Religion doesn’t improve society: more evidence

February 23, 2020 • 10:15 am

Religion is often touted as essential, a kind of secular glue that keeps society moral and empathic. Indeed, some say that even if there isn’t any evidence for a God, we should promote belief anyway because of its salutary side effects—the “spandrels” of belief.

This “belief in belief” trope, as Dennett calls it, is counteracted by lots of evidence, including the observation that there’s a negative correlation between the religiosity of a country and both its “happiness index” and various measures of well-being. Because this is a correlation rather than a demonstrated causal relationship, we can’t say for sure that religion brings countries down while secularism brings happiness, but there’s certainly no support at all for the thesis that religion promotes well-being.

That’s the point made in this new article in The Washington Post. It’s a response to Attorney General William Barr’s recent claim, in a speech at Notre Dame, that religion is essential to maintain morality and that its erosion causes dire consequences. Some of Barr’s quotes from that talk:

Modern secularists dismiss this idea of morality as other-worldly superstition imposed by a kill-joy clergy. In fact, Judeo-Christian moral standards are the ultimate utilitarian rules for human conduct.

They reflect the rules that are best for man, not in the by and by, but in the here and now. They are like God’s instruction manual for the best running of man and human society.

By the same token, violations of these moral laws have bad, real-world consequences for man and society. We may not pay the price immediately, but over time the harm is real.

Religion helps promote moral discipline within society. Because man is fallen, we don’t automatically conform ourselves to moral rules even when we know they are good for us.

But religion helps teach, train, and habituate people to want what is good. It does not do this primarily by formal laws – that is, through coercion. It does this through moral education and by informing society’s informal rules – its customs and traditions which reflect the wisdom and experience of the ages.

In other words, religion helps frame moral culture within society that instills and reinforces moral discipline.

And, added Barr, the rise of secularism is accompanied by a moral decrepitude afflicting America:

By any honest assessment, the consequences of this moral upheaval have been grim.

Virtually every measure of social pathology continues to gain ground.

In 1965, the illegitimacy rate was eight percent. In 1992, when I was last Attorney General, it was 25 percent. Today it is over 40 percent. In many of our large urban areas, it is around 70 percent.

Along with the wreckage of the family, we are seeing record levels of depression and mental illness, dispirited young people, soaring suicide rates, increasing numbers of angry and alienated young males, an increase in senseless violence, and a deadly drug epidemic.

As you all know, over 70,000 people die a year from drug overdoses. That is more casualties in a year than we experienced during the entire Vietnam War.

I will not dwell on all the bitter results of the new secular age. Suffice it to say that the campaign to destroy the traditional moral order has brought with it immense suffering, wreckage, and misery. And yet, the forces of secularism, ignoring these tragic results, press on with even greater militancy.

In response, columnist Max Boot cites some statistics that counteract Barr’s claims, and also give results of an international survey showing, as such surveys invariably do, that religious countries are not better off. Click on the screenshot to read the article:

 

Boot notes this:

Barr’s simplistic idea that the country is better off if it is more religious is based on faith, not evidence. My research associate Sherry Cho compiled statistics on the 10 countries with the highest percentage of religious people and the 10 countries with the lowest percentage based on a 2017 WIN/Gallup International survey of 68 countries. The least religious countries are either Asian nations where monotheism never took hold (China, Japan) or Western nations such as Australia, Sweden and Belgium, where secularism is much more advanced than in the United States. The most religious countries represent various faiths: There are predominantly Christian countries (the Philippines, Papua New Guinea, Armenia), Muslim Pakistan, Buddhist Thailand, Hindu India — and countries of mixed faiths (Nigeria, Ivory Coast, Ghana, Fiji).

Now, the survey includes data from 68 countries, but Boot shows various indices of well-being for only the 10 most religious and the 10 least religious. Even so, the differences are striking:

However, I’ve also published data (analysis by readers) on a lot more countries showing that the more religious the country, the less happy are its inhabitants: there’s a strong and significant negative correlation between the UN’s “happiness index” and religiosity among dozens of countries. Further, you see the same negative correlation between the religiosity of countries and various indices of their well-being, like their rank on the “successful societies” scale. This is also true among states within the U.S.

Further, among many countries, the index of poverty—how poor a country is—is positively correlated with religiosity.
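
For readers who want to play with such relationships themselves, here is a minimal sketch, using invented placeholder numbers rather than the real UN or survey figures, of how country-level correlations like these can be computed (it assumes numpy and scipy are available):

```python
# A minimal sketch (not the actual analysis) of computing the country-level
# correlations discussed above.  All numbers are invented placeholders; the
# real analyses use the UN happiness index, survey-based religiosity figures,
# and poverty measures for dozens of countries.
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-country values
religiosity = np.array([0.95, 0.90, 0.80, 0.60, 0.40, 0.25, 0.15])
happiness   = np.array([4.2, 4.8, 5.1, 5.9, 6.5, 7.1, 7.4])
poverty     = np.array([0.55, 0.48, 0.40, 0.25, 0.15, 0.08, 0.05])

r_happy, p_happy = pearsonr(religiosity, happiness)   # expect a strong negative r
rho_poor, p_poor = spearmanr(religiosity, poverty)    # expect a strong positive rho

print(f"religiosity vs. happiness: r = {r_happy:.2f} (p = {p_happy:.3f})")
print(f"religiosity vs. poverty:   rho = {rho_poor:.2f} (p = {p_poor:.3f})")
```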

Again, these are correlations, and not necessarily causal relationships. It’s possible, for example, that other factors play a role. In fact, I think they do, but they surely don’t point to religion in any way as promoting either morality or well being.

My theory, which is not mine but that of many sociologists, is that religion (as Marx maintained) is the last resort of a population with poor well-being. Suffering, poverty-stricken people look to God for help and succor when their society can’t provide them. That could cause the correlation. In other words, religiosity doesn’t cause dysfunctional societies; rather, dysfunctional societies maintain religiosity because that’s the only hope people have. And of course maintaining such hope erodes the will of people to actually do something to improve their society. Further, as well-being increases, religiosity diminishes, since the eternal press of secularism in the modern world no longer comes up against impediments.

As I wrote previously:

 Although I’m not a Marxist, Marx may have gotten it right when he said, “Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”

Author Boot ends his article this way:

Fundamentalists may be unhappy that religious observance has declined over the decades, but the data shows that, by most measurements, life has gotten much better for most people. There is little evidence that a decline in religiosity leads to a decline in society — or that high levels of religiosity strengthen society. (Remember, Rome fell after it converted to Christianity.) If anything, the evidence suggests that too much religion is bad for a country.

Well, I’d put it another way: if a country is not well off, it tends to retain religion. But never mind: the conclusion of myself, Boot, and many sociologists—that there’s no evidence that high religiosity improves society—remains sound. I can’t imagine a survey of well being and religiosity that shows a positive relationship, and I know of no such results.

h/t: Randy

Gender differences in toy use: boys play with boy toys, girls with girl toys

January 30, 2020 • 10:30 am

Every parent I know with whom I’ve discussed the issue of sex differences has told me that, if they have children of both sexes, they notice behavioral differences between boys and girls quite early, and these include preferences for which toys they play with. Usually, but not inevitably, boys play with “boy toys” (trucks, trains, guns, soldiers) and girls prefer “girl toys” (dolls, kitchen stuff, tea sets, art stuff). Even when girls are given trucks and boys given dolls, they gravitate to the stereotyped toys. I’m using the classification employed by the authors whose work is summarized in the meta-analysis I’m discussing today: the paper by Davis and Hines.

If you’re a hard-core blank-slater, you’ll attribute the toy-use difference to socialization: parents and society somehow influence children about which toys to prefer. If you’re a genetic determinist, you’ll attribute the behavior largely to innate preferences—the result of selection on our ancestors. And, of course, both factors could operate.

But there’s some evidence for a genetic component to this preference: the fact that rhesus monkeys, who presumably don’t get socialized by their parents, show a similar difference in toy preference, even when tested as adults. The monkey paper is shown below (click for free access). A BBC site implies that a related study with similar results was also done in Barbary macaques, a different species; but I can’t find any resulting publication. (UPDATE: The same result has been seen in vervet monkeys, as a reader notes in the comments.)

First, a picture:

And the paper:

And here’s the abstract from the rhesus macaque study (my emphasis):

Socialization processes, parents, or peers encouraging play with gender specific toys are thought to be the primary force shaping sex differences in toy preference. A contrast in view is that toy preferences reflect biologically determined preferences for specific activities facilitated by specific toys. Sex differences in juvenile activities, such as rough and tumble play, peer preferences, and infant interest, share similarities in humans and monkeys. Thus if activity preferences shape toy preferences, male and female monkeys may show toy preferences similar to those seen in boys and girls. We compared the interactions of 34 rhesus monkeys, living within a 135 monkey troop, with human wheeled toys and plush toys. Male monkeys, like boys, showed consistent and strong preferences for wheeled toys, while female monkeys, like girls, showed greater variability in preferences. Thus, the magnitude of preference for wheeled over plush toys differed significantly between males and females. The similarities to human findings demonstrate that such preferences can develop without explicit gendered socialization. We offer the hypothesis that toy preferences reflect hormonally influenced behavioral and cognitive biases which are sculpted by social processes into the sex differences seen in monkeys and humans.

But if you’re dealing with humans, where socialization is also a possibility, the first thing to ask is this: Do boys and girls really differ in their toy preferences? For if they don’t, there’s no need to appeal to either socialization or genetic hypotheses. Previous research has generally shown a difference in the expected direction, but it’s not observed 100% of the time, and some studies show no difference between boys and girls.

The purpose of the new study I’m discussing today, shown below, was to perform a meta-analysis of many earlier studies of toy preference, to see whether there are statistically significant differences between the sexes in the overall data. (Click on the screenshot to see the paper, get the pdf here; the reference is at the bottom.) I’ll try to be brief, which is hard for such a long paper!

Methods: Meta-analysis is a statistical way to combine the results of different studies, even if they use different methods. What the analysis looks for is an overall pattern among the studies: in this case, in toy preferences between boys and girls. The paper by Davis and Hines, from the Gender Development Research Centre at the University of Cambridge, measures the size and direction of preference differences between the sexes and conducts overall tests of significance using the statistical package R.

They tested not only whether there was a significant difference between the sexes (i.e., is there a difference between boys and girls in toy preference?), but also whether there was a preference within each sex when the two sexes were examined separately. For example, there could be a significant difference between boys and girls in toy preference that is due entirely to one sex, say boys preferring boy toys, while girls show no preference at all. To test the within-sex preference, you need to look at boys and girls separately.
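
For those curious about the nuts and bolts, here is a minimal, hypothetical sketch of the kind of inverse-variance pooling that underlies a random-effects meta-analysis. The numbers are invented, and Davis and Hines’s actual R analysis is far more elaborate; this is only meant to show the general logic of combining per-study effect sizes:

```python
# A toy illustration (invented numbers, not the paper's data) of random-effects
# meta-analysis via the DerSimonian-Laird method: pool per-study effect sizes
# (here, standardized mean differences in toy preference between boys and girls),
# weighting each study by the inverse of its variance.
import numpy as np
from scipy.stats import norm

effects   = np.array([0.9, 1.2, 0.7, 1.5, 1.1])    # hypothetical per-study effect sizes (d)
variances = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

# Fixed-effect (inverse-variance) pooling, needed to estimate heterogeneity
w_fixed = 1.0 / variances
d_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Between-study heterogeneity (tau^2), DerSimonian-Laird estimator
Q = np.sum(w_fixed * (effects - d_fixed) ** 2)
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooling: add tau^2 to each study's variance
w_re = 1.0 / (variances + tau2)
d_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
z = d_re / se_re
p = 2 * (1 - norm.cdf(abs(z)))

print(f"pooled d = {d_re:.2f} (SE {se_re:.2f}), z = {z:.2f}, p = {p:.3g}")
```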

The authors also analyzed the data using the two “classic” examples of sex-specific toys: dolls versus toy vehicles.

To see if there was a pattern over time—you’d expect a decrease over the years if socialization had decreased—they looked at the relationship between the year a study was published and the size of any sex-specific preferences. Since schools and parents are now making a big effort to socialize kids against playing with sex-specific toys, one might expect the preference to decrease over the five decades of the work included in the meta-analysis.

Finally, the authors tested whether the degree of preference changed with the child’s age. If preference is due to socialization, one might expect an increase with age, but one might also expect the same thing if hardwired differences simply take time to show up.

The authors plowed through 3,508 studies that initially looked relevant, eliminating the vast majority because they didn’t satisfy the authors’ criteria. This pruning wound up with 75 toy-preference studies included in the meta-analysis.

The ages of the children across studies ranged from 3 months to 11 years, and a variety of tests were used, including “free play” (children were given a group of toys and allowed to choose which ones to play with “in an unstructured way”), “visual preference” (children were shown images of toys, and the time they spent looking at a toy was taken as a measure of their interest in it), “forced choice” (usually a child must choose between two pictures of toys, one a “girl’s toy” and one a “boy’s toy”), and “naturalistic choice” (what kind of toys children own; the authors did not use studies in which children’s toy collections reflected their parents’ buying habits rather than what the children asked for).

Toys were classified by the experimenters, and the authors avoided studies in which classification was done post facto (that is, studies in which any toys preferred by boys were subsequently classified as “boy toys”, and likewise for girls).

Here’s the graph they give of how toys were classified among the various experiments. The bars represent the frequency in the 75 studies in which a given kind of toy was classified as a boy’s toy (black bars), a girl’s toy (light gray bars) or a “neutral” toy (medium-gray bars):

(From paper): Fig. 2 Toys used as girl-related, boy-related, and neutral toys as listed in method sections of studies included in the meta-analysis. Studies could contribute more than one toy to the figure. These toys were mentioned in method sections of studies, but data were not typically reported for each individual toy. Most studies reported statistics for groups of toys, but not for individual toys

 

The results were clear and their significance high; you can read the paper to see more:

1.) There were large and highly significant average differences between the two sexes in preference for both boy-related and girl-related toys. This was in the “expected” direction. As I note above, this doesn’t tell you whether girls prefer girl-toys over boy toys or boys prefer boy toys over girl toys; it just says that there’s an overall difference between the sexes in their preference for one class of toy versus the other. BUT. . . .

2.) Within boys, boys preferred boy toys more than girl toys. And within girls, girls preferred girl toys more than boy toys. The overall sex difference, then, is the result of each sex generally preferring the toys considered “appropriate” for that sex.

3.) #1 and #2 also hold for the “plush toys versus vehicles” test: there was a highly significant difference between boys and girls in toy preference, and that difference reflected girls’ preference for plush toys over vehicles and boys’ preference for vehicles over plush toys.

4.) “No choice” tests showed a stronger degree of sex-specific preference than did “choice” tests like free-play experiments. But the three other methods of assessing preference also showed statistically significant sex-specific differences.

5.) In three out of four analyses, the degree of preference increased with the age of the child. The only exception was the size of girls’ sex-specific preference for girl-related over boy-related toys, which showed no significant change.

Finally, and the one result that bears on the “genes versus socialization” hypotheses:

6.) The year of publication showed no relationship with the size of the sex difference. Boys preferred boy toys over girl toys, and girls girl toys over boy toys, to the same extent over the five decades of studies. This was true of all four ways of measuring the sex difference; in no case did the p value for the temporal trend drop below 0.103 (it has to be below 0.05 to be considered significant). This runs counter to what would be expected if socialization had decreased over the last 50 years, for in that case the children’s preferences should also have weakened, assuming those preferences were due to society’s enforcing standards and stereotypes about children’s toys.
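
Here, purely for illustration, is a sketch of the sort of meta-regression that would detect such a temporal trend: regress each study’s effect size on its publication year, weighting by precision. The numbers are made up; the point is simply that a slope indistinguishable from zero is what “no decline over time” looks like.

```python
# Illustrative meta-regression (invented numbers): does the effect size shrink
# with publication year?  A weighted least-squares fit done by hand with numpy.
import numpy as np

years   = np.array([1975.0, 1983.0, 1991.0, 1999.0, 2007.0, 2015.0])
effects = np.array([1.10, 1.00, 1.20, 0.90, 1.10, 1.00])   # hypothetical effect sizes
weights = np.array([20.0, 15.0, 30.0, 25.0, 40.0, 35.0])   # e.g. inverse variances

x = years - years.mean()                    # center to keep the intercept interpretable
X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + year
W = np.diag(weights)

# Weighted least squares: beta = (X'WX)^-1 X'W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
print(f"intercept = {beta[0]:.3f}, slope per year = {beta[1]:.4f}")
```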

What does it all mean?  On the face of it, all this study shows is that there are consistent differences in toy preferences between boys and girls, with each sex preferring the sex-specific toys labeled by the previous experimenters. Methodology does influence the degree of preference, but there is a strong and consistent preference in the expected direction.

That in itself says nothing about whether toy preference is innate, the result of socialization, or a mixture of both. But two facts imply that a reasonable amount of toy preference is innate. The first is the macaque work, which shows a similar preference for vehicles over plush toys in one (or maybe two) studies. Since macaques don’t adhere to a human-like patriarchy, nor do they ever see toys before the tests are done, this implies an innate sex-specific difference in preference.

The same holds for the lack of change in the degree of preference with time in the human studies. One might expect that preference would have decreased over the years given the attempts of parents (at least in much of the West) to avoid socializing their children into preferring “appropriate” toys. But that didn’t happen. However, I’m not sure whether anyone’s actually measured that decrease in socialization.

Finally, the fact that preference seems to be present at very young ages, when socialization is seemingly impossible, may be evidence for an innate component to preference. However, the preference increases with age, and one might say that this trend reflects socialization. And blank slaters might claim that covert or unknowing socialization is going on right from birth.

In the end, I find the evidence from the macaques the most convincing, but I have a feeling that human children, whose preferences parallel those of macaques, are also showing preferences based in part on evolution. Studies of other primates would be useful (do our closer relatives, like gorillas and chimps, show such preferences?), as would more studies of very young children, perhaps children brought up in homes where socialization is deliberately avoided.

__________________________

Davis, J. T. M., and M. Hines. 2020. How Large Are Gender Differences in Toy Preferences? A Systematic Review and Meta-Analysis of Toy Preference Research. Archives of Sexual Behavior, published online 27 January 2020.

 

Thoughts and prayers: what are they worth?

September 18, 2019 • 9:15 am

Everyone knows about the “thoughts and prayers” sent out after tragedies, now a quotidian feature of the news. And all of us nonbelievers disparage not only the prayers (shown in a Templeton-funded study to have no effect on healing after surgery), but also the uselessness of the thoughts—unless they’re conveyed directly to the afflicted person instead of dissipated in the ether.

But an anthropologist and an economist wanted to know more: what is the value of thoughts and prayers (t&p)? That is, how much would somebody in trouble actually pay to receive a thought, a prayer, or both? And would it matter if that afflicted person was religious or just a nonbeliever? Or whether the person offering t&p was religious? So the study below was published in the Proceedings of the National Academy of Sciences (click on screenshot below; pdf here; reference at bottom).

I suppose that, to an economist, the psychic value of getting thoughts or prayers (t&p) from strangers can be measured in dollars, and I’ll leave that for others to discuss. At any rate, the results are more or less what you think: Christians value t&p, nonbelievers don’t.

What Thunström and Noy did was to recruit 436 residents of North Carolina, the state hit hardest last year by Hurricane Florence. Those who were not affected by the hurricane (about 70% of the sample) had experienced another “hardship”. They were then given a standard sum of money (not specified) for participating in a Qualtrics survey, and an additional $5 to be used in the t&p experiment. Among the 436 participants, some were self-identified as Christian, while another group, either denying or unsure of God’s existence, were deemed “atheist/agnostic”. (The numbers in each group weren’t specified.)

The experiment also included people offering thoughts and prayers: people who were recruited to actually give them to those who were afflicted. These people included Christians, the nonreligious, and one priest who was “recruited from the first author’s local community.” Each offerer received a note detailing the travails of an afflicted person, and instructing them to offer either a thought or a prayer (it’s not clear whether the names of the afflicted were included in the note, but of course God would know).

To value the thoughts and prayers, the afflicted were offered two alternatives, with a computer deciding between them: an intercessory gesture they’d pay to receive, or the absence of a gesture they’d pay to avoid. Payments could thus be positive (you’d actually give up money) or negative (you’d pay not to receive the gesture). The amount, says the paper, could range between $0 and $5 (the additional $5 they’d been given for the experiment), and subjects stated this “willingness to pay” (WTP) before the computer made the choice.
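
The paper’s description of the mechanism is thin, but it resembles a Becker-DeGroot-Marschak-style elicitation, in which stating your true value is the best strategy. Here is a rough sketch of how such a mechanism works in general; the function and numbers are purely illustrative and not taken from the paper:

```python
# A rough sketch of a BDM-style (Becker-DeGroot-Marschak) elicitation, which the
# sparse description above resembles: the subject states a willingness to pay
# (positive to receive the gesture, negative to avoid it), a random price is
# drawn, and the transaction happens only if the stated WTP beats the price.
# Under this rule, reporting your true value is the best strategy.
import random

def run_bdm_round(stated_wtp, endowment=5.0, lo=-5.0, hi=5.0):
    """Return (received_gesture, money_left) for one hypothetical round."""
    price = random.uniform(lo, hi)          # random price drawn by the computer
    if stated_wtp >= price:
        # Subject "buys" the gesture at the drawn price; a negative price
        # means being paid to receive it.
        return True, endowment - price
    return False, endowment

# Example: a participant who would pay $1.50 to AVOID a stranger's prayer
# states a WTP of -1.50 for receiving it.
received, payoff = run_bdm_round(stated_wtp=-1.50)
print(received, round(payoff, 2))
```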

The experiment isn’t described very well, and there’s no supplementary information, but I’ve taken some other details from second-hand reports of the study, whose reporters apparently talked to the authors. At any rate, here are the results, given as how much money people would give up for t&p, for both Christians (dark bars) and atheists/agnostics (light bars). Since atheists/agnostics wouldn’t be praying, the only gesture on offer from that group was “thoughts”.

(from paper) The value of thoughts and prayers from different senders (95% confidence intervals displayed; n = 436).

Christians would always give up an amount of money significantly greater than zero for both thoughts and prayers, except when the thinker was a nonreligious stranger, to whom they’d pay $1.52 not to receive thoughts (dark bar below zero). Since the authors are social scientists, they use a significance level of 0.1; “hard scientists” use at most 0.05, and that $1.52 figure is significantly different from zero under the laxer criterion but not under the one scientists would normally use.

Christians would of course offer the most money ($7.17) for prayers from a priest, less money ($4.36) for prayers from a Christian stranger, and still less ($3.27) for thoughts from a Christian stranger, though this doesn’t appear to be significantly different from the price for prayers from the Christian stranger (the statistical comparison isn’t given).

In contrast, atheists/agnostics don’t give a rat’s patootie about t&p. In fact, they’d pay money to have priests or Christians not offer them thoughts and prayers, as you can see from the three light bars to the left, which are all below zero. What surprised me is that the nonbelievers would pay more to avoid prayers from a Christian stranger than from a priest ($3.54 versus $1.66 respectively), while they’d pay an intermediate amount ($2.02) to avoid getting thoughts from a religious stranger (these are all significantly different from zero). Finally, as you’d expect, nonbelievers don’t give a fig for thoughts from other nonbelievers, as we’re not superstitious. These nonbelievers would pay 33¢ to get thoughts from nonbelieving strangers.

There’s another part of the experiment in which participants were asked to rate their agreement or disagreement with the statement, “I may sometimes be more helped by others’ prayer for me than their material help.” This “expected benefits index” (EBI) explains a great deal of the variation in how much money people were willing to pay for thoughts and prayers (or to avoid them).

What does this all mean? To me, nothing more than the obvious: religious people value thoughts and prayers more than do nonreligious people. Moreover, religious people do not value thoughts from nonbelievers, and nonbelievers give negative value to thoughts or prayers from Christians, and no value to thoughts from fellow nonbelievers. That’s not surprising.

What is a bit surprising is that Christians would sacrifice money to get thoughts and prayers, and would pay just about as much for thoughts from other Christians as for prayers from other Christians. (Prayers from priests, however, were the most valuable, showing that these Christians really do believe that priests have more power to help them than do everyday Christians.) I was also surprised that nonbelievers would pay money to avoid thoughts and prayers from Christians. Since we think these are ineffectual, why pay to avoid them?

In general, I saw the study as weak, afflicted both by a failure to fully describe the methods and by the use of an inflated significance threshold (0.1). All it really confirms is that Christians think thoughts and prayers really work; i.e., that they believe in the supernatural. But we knew that already. I am in fact surprised that this study was published in PNAS, which is regarded as a pretty good scientific journal.

_______________________

Thunström, L., and S. Noy. 2019. The value of thoughts and prayers. Proceedings of the National Academy of Sciences.

Greg Sheridan in The Spectator: The West will die without Christianity

August 11, 2019 • 9:00 am

I’ve heard mutterings in the dark corners of the Internet that I spend too much time posting critiques of religion and theology. Well, to those who beef about that, I say, “I hear you, but I’m gonna keep doing it anyway.” For there must be constant pushback against religion, which is always sticking its nose in our tent, until superstition retreats to only a tiny place in the psyche of humanity.

Today’s beef is about an article from The Spectator (click on screenshot below), which is so bad that, to use Wolfgang Pauli’s phrase, it’s “not even wrong.”

The author, Greg Sheridan, is of course a believer; he’s described as “the foreign editor of The Australian and author of God Is Good for You. He has been a visiting fellow at King’s College London this year.” The subtitle of his book is “A Defense of Christianity in Troubled Times.” And that’s what he promulgates in this cringeworthy article:

Here are Sheridan’s theses. (Quotes from the article are indented; my own comments are flush left.)

a.) Christians and Christianity are much maligned.  And, sadly, Christianity is disappearing—displaced by other faiths, ideologies, and forms of nationalism.

There is no faster way to get yourself classed as dim than by admitting that you hold religious belief, especially Christian belief. Anti-Catholicism used to be the anti-Semitism of intellectuals; now Catholics get no special attention. All believing Christians are regarded as stupid, eccentric or malevolent.

. . . Dawkins et al assume that faith is irrational. Most British people seem to take it on faith (ironically) that to have faith is stupid.

. . . The prestige of the West has declined as its belief in Christianity has declined. The world is full of vigorous societies and movements — Chinese and Russian nationalism, Islamism in all its forms, east Asian economic dynamism — which no longer think the West has anything much to say.

The first statement is hyperbole. There are many venues in which religion is given special treatment: not criticized and even coddled. I, for one, don’t think all believing Christians are “stupid, eccentric, or malevolent”. I think many are brainwashed, all are deluded, and only a very few are “malevolent.” We see more hyperbole in the idea that much of the world doesn’t think “the West has anything much to say.” Does he really believe that? Or is he really saying “much of the world isn’t Christian”?

As for “taking it on faith”, see the next point:

b.) Christian faith is, in fact, rational. And it’s rational because we feel it’s true.

But the way I see it, faith is not the enemy of reason but the basis of reason. First, to be reasonable, I have to have faith in my ability to distinguish between what is real and what is imaginary. Then, for almost everything I know, I need faith in other human beings. I believe I am the son of my late parents. I can’t prove it. It’s a rational belief but not proven. Much of the atheist assault on belief deliberately confuses what it is rational to believe with the much narrower category of what is rationally proven.

This is gobbledy-gook, conflating faith construed as “belief in religious tenets that lack evidence” with “faith” as confidence in what you’ve seen confirmed repeatedly. He uses “proven” as a red herring: nothing empirical is ever proven beyond refutation. But there are things on which you’d bet considerable amounts of money, like the sun coming up tomorrow. It’s possible that it might not, but would you say that that belief is identical to the “faith” that Jesus rose from the dead? If you want to read about conflations of the word “faith” like this one, I refer you to my Slate piece (which I much like, because it’s mine), “No faith in science.”

And look at the criteria Sheridan uses to show that religion is true!:

Religious belief, of course, is not just the absence of atheism. That belief in God conforms to our intuition, and to the overwhelming history of human experience, is the most powerful evidence for its being true. God is a God of experience. The long human experience of God, and the vast testimony of this, is persuasive.

Persuasive, perhaps, to those prepared to drink the Kool-Aid! For most of human experience, diseases were seen as signs of a god’s or the spirits’ displeasure. Does that mean that that belief is also true? Further, which religion is true? For most of human history, believers weren’t Christians, and even now the number of Muslims plus Hindus exceeds the number of Christians. So what is true? Was Jesus resurrected, or did Brahma create the Earth, or did Muhammad receive the tenets of the Final Faith from the angel Gabriel? Please tell us what’s true, Mr. Sheridan!

Further, he implies that religion must also be true because most people, living or dead, adhered to religion.

It is a very eccentric position for the West to adopt. The vast majority of human beings who have ever lived, and the vast majority alive today, believe in God. Christianity is on fire in Asia — it’s the only social force the Chinese Communist party cannot control — and Africa and many parts of the world. It is also the most persecuted religion. From Pakistan to the Middle East, Christians believe so seriously that they accept death rather than disavowal.

Need we inform the sweating author that just because a lot of people believe something, especially something with no evidence, that has no bearing on its truth? Enough said.

c.) All good things in Western culture ultimately derive from Christianity. 

In reality everything we like about western liberalism grew directly and organically from Christianity. Tertullian in 3rd–century Carthage declared: ‘Everyone should be free to worship according to their own convictions.’ Benedict in the 6th century established the first democratic, egalitarian communities — the Benedictine monasteries — which combined hard work, social welfare, profound scholarship and a life of prayer. The church wrestled the concept of sin away from that of crime. Both St Augustine and Thomas Aquinas argued that prostitution should not be illegal, because while morally wrong it was inevitable, and the law should not try to enforce every moral teaching.

Across 2,000 years lots of Christians have done lots of bad things. Formal adherence to Christianity does not absolve anyone of the human condition with all its frailties. But Christianity always calls its followers back to the gospels’ first principles. You can read the gospels, or St Augustine in the 4th century, or Thomas Aquinas in the 13th, or John Wesley or William Wilberforce in the 18th, or Nicky Gumbel today, and recognise that you and they all inhabit the same moral universe, the same culture. That is astonishing.

This is a bizarre argument, which rests on Sheridan’s notion that Christianity calls its followers back to “the gospels’ first principles” as adumbrated by thinkers like Aquinas and Augustine the Hippo. I’m not going to belabor the point that the good things about liberal democracy come not from religion but from rational consideration, and in fact are opposed to Christian teachings. As Andrew Seidel argues persuasively in his new book The Founding Myth: Why Christian Nationalism Is Un-American, the basis of American constitutional democracy requires explicitly rejecting the tenets of Christianity. And seriously, is it good for us to accept Aquinas’s and Augustine’s literal belief in Hell? Or Jesus’s teaching that we must forsake our families to follow Christ?

I won’t belabor Sheridan’s insistence that Christianity improved the treatment of women and girls. Perhaps it did in the Roman Empire (this is above my pay grade), but it certainly does not now, for Christianity, and Catholicism in particular, is deeply patriarchal. So, of course, are most other religions.

Further, we can observe societies throughout the world, and see that the most atheist societies seem to be the most moral, not the least moral. Here I refer you to Scandinavia and other European countries with a strong and empathic social network, yet few inhabitants believe in God. Well, Sheridan would claim that the goodness of the nonbelieving West is simply derived from Christian principles. I reject that, too, for you can find those principles adumbrated in societies existing well before the advent of Christianity.

d.) The basis for our accepting Christianity must depend on accepting its truth statements.  Here we see something that I’ve long argued: a morality derived from religion is intimately connected with believing in the empirical truths of its tenets: the Resurrection of Jesus, the receiving of the Qur’an by Muhammad, and so on. For without these bases, there is no reason to accept moral dicta. This was a major point of my book Faith Versus Fact, and I’m glad to see that Sheridan admits it explicitly:

I have come to a disconcerting conclusion. The West cannot really survive as the West without a re-energised belief in Christianity. The idea that we can live off Christianity’s moral capital, its ethics and traditions, without believing in it, appeals naturally to conservatives of a certain age. But you cannot inspire the young with a vision which you happily admit arises from beliefs that are fictional and nothing more than long-standing superstition. Christianity is either true, or it’s not much use at all.

There you have it, ladies and gentlemen, brothers and sisters, and comrades: Christianity is either true, or it’s not much use at all. I’m sure he’s not referring to the “truth” of its moral statements, but rather to the truth of its dogma: Jesus was divine, the son of God, was resurrected, and will return to judge us all. If you don’t buy this argument, Sheridan repeats it in his last paragraph:

There is, however, only one reason that counts for believing in Christianity: it’s true. Come on in, the water’s fine.

Yeah, like the Love Canal was fine.

e.) We need to preserve Christianity because, without it, the West would fall apart. He asserts this in the first paragraph quoted in section (d), but then adds this palpable nonsense (my emphasis):

Liberalism today, in rejecting its Christian roots, is cut off from all limits, all common sense, from a living tradition. It is careening down ever more febrile paths of identity politics, rejecting the Christian universalism from which it sprang. It is harming people in the process. Sociologists have established beyond reasonable doubt that religious belief and practice lead to the greatest human happiness.

Really? Which sociologists are those? Surely not Norris and Inglehart, or many like them, who have repeatedly shown that there is a negative correlation between the religiosity of a country and its well being. The same holds true among the fifty states of the U.S. And, as I reported before, the countries that are most religious also have the lowest values of the United Nations’ “Global Happiness Index.” To wit (graph made by reader Michael Coon):

Now this is a correlation and doesn’t prove that religion causes unhappiness—in fact, my own view, supported by other data, is that religion tends to be more tenacious in those countries where people have lower well-being, for they turn to God when they can’t turn to their society or government. But that aside, there is simply no evidence that, at least across states and nations, more religious people are happier. And even if they were, are we supposed to believe in something that we reject, simply because it’s supposed to be good for us? You can’t believe in something you don’t believe, unless you’re Winston Smith.

If you want to find out more, simply plow through Steve Pinker’s last two big books, in which he argues, with data, that it is secular values alone that have led to the greater well-being of the West over the last two centuries. Religion has been, and remains, an impediment to happiness and social progress. The West will survive just fine—in fact, better—without superstition (i.e., Christianity and other faiths).

Sheridan is not even wrong.

h/t: Harry

 

How many atheists are there in the U.S.? A new paper says about 26% of the population

May 16, 2019 • 8:45 am

The estimate given in the title suggests a much higher number of American atheists than estimates from other studies relying on self-report (e.g., “Are you an atheist?”). Those self-report estimates range between 3% and 11% (the authors of the paper below define “atheists” as “people who disbelieve or lack belief in the existence of a god or gods”). The higher number from the present study could reflect errors or biases in how the authors derived their estimates, or (and I think this is a bit more likely) the fact that people’s atheism was estimated indirectly rather than by self-report.

Note that the paper (which you can get free by clicking on the link) is on Arχiv, so it hasn’t yet gone through peer review. Nevertheless, the results are interesting and it’s well worth reading. (It’s not overly long.)

I’ll try to be brief. The authors estimated the frequency of atheists among Americans by surveying people using two YouGov samples of 2000 people each. They also did their estimates using Bayesian techniques: seeing what proportion of atheists in the public was most likely to yield the survey results. The composite result was, as I said, 26%.

How did they indirectly estimate the proportion of atheists? They used a clever technique in which people were asked to report how many statements on a list were true (or not true) of them, with one version of the list including an atheism item and the other omitting it. The difference in the counts reported for the two lists can be used to estimate the proportion of atheists. They also had a control question, which you’ll see on the second list, that should yield a 0% estimate of people who think that 2 + 2 is more than 13. The authors say that this indirect method of estimation has been shown in other studies to yield higher estimates than self-report, but only for socially sensitive traits that people don’t want to disclose in a direct self-report.
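
To make the logic concrete, here is a toy sketch of the unmatched-count (“list experiment”) estimate, using invented counts rather than the actual YouGov data: the prevalence of the sensitive trait is simply the difference in mean counts between the two versions of the list.

```python
# Sketch of the "unmatched count" (list experiment) logic: half the sample sees
# a baseline list of innocuous statements, the other half sees the same list
# plus the sensitive item ("I do not believe in God"); respondents report only
# HOW MANY statements apply to them.  The difference in mean counts estimates
# the prevalence of the sensitive trait.  Counts below are invented.
import numpy as np

control_counts   = np.array([2, 3, 1, 2, 2, 3, 1, 2, 3, 2])  # baseline list only
treatment_counts = np.array([3, 3, 2, 2, 3, 3, 2, 2, 3, 3])  # baseline + atheism item

prevalence = treatment_counts.mean() - control_counts.mean()
se = np.sqrt(treatment_counts.var(ddof=1) / len(treatment_counts)
             + control_counts.var(ddof=1) / len(control_counts))
print(f"estimated atheist prevalence: {prevalence:.2f} ± {1.96 * se:.2f}")
```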

The authors also asked people directly if they were atheists, so they have an estimate of self-report.

This first group of questions yielded a Bayesian estimate of 32% atheists, with a 95% interval of 11% to 54%, while the self-report (first question) yielded only 17%. You can see that the added question is in the third column; participants aren’t supposed to say which statements aren’t true of them, but merely give the number. The difference between the totals in columns 2 and 3 can be used to give a Bayesian estimate of the proportion of people who do NOT “believe in God”:

Some confirmation of the technique’s validity comes from analyzing the data from those who self-report being atheists in column 1. The Bayesian data from columns 2 and 3 give an estimated proportion of atheism of 100% of these people, so the self-report among those brave enough to disclose their atheism matches the indirect estimate.

Sample II used the same method, but couching atheism as a positive rather than a negative answer (i.e., you have to note whether “not believing in God” is true of you). There’s a control question about math in the third column.

This report yielded an estimate of atheism (comparison of first versus second column) of 20%, with confidence intervals of 6% and 35%.

The lower estimate in Sample II versus Sample I may, as the authors note, be attributed to the fact that in the second sample you have to note (indirectly) that atheism is “true of me”, which is more similar to a self report. And indeed, the 20% Bayesian estimate here is close to the self-report estimate of 17%.

Overall, combining both studies gave a Bayesian estimate of the proportion of atheists in America of 26%, with a 95% interval of 13% to 39%. The authors add that it is 99% certain that more than 11% of Americans are atheists (the Gallup poll estimate) and 93% certain that the proportion of atheists is higher than 17% (their self-report estimate). That means that roughly a third of atheists won’t disclose their nonbelief when asked directly. At any rate, the higher estimates from this study compared with direct-question surveys suggest that there are far more atheists in America than we thought: perhaps more than 80 million.
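
The paper’s actual Bayesian model is more sophisticated, but the flavor of these probability statements can be conveyed with a toy normal approximation: treat each sample’s estimate as a Gaussian, pool the two by precision weighting, and read off tail probabilities. The numbers below are back-of-the-envelope reconstructions from the intervals quoted above, not the authors’ posterior.

```python
# Toy illustration (NOT the paper's model) of how two noisy prevalence
# estimates can be combined and turned into probability statements of the
# kind quoted above.  Each sample's estimate is approximated as a normal
# distribution; precision-weighted pooling gives a combined estimate from
# which tail probabilities follow.
import numpy as np
from scipy.stats import norm

# Rough normal approximations to the two samples (sd back-calculated from the
# reported 95% intervals; these are back-of-the-envelope numbers).
m1, s1 = 0.32, (0.54 - 0.11) / (2 * 1.96)   # Sample I:  32% (11%-54%)
m2, s2 = 0.20, (0.35 - 0.06) / (2 * 1.96)   # Sample II: 20% (6%-35%)

w1, w2 = 1 / s1**2, 1 / s2**2               # precision weights
m = (w1 * m1 + w2 * m2) / (w1 + w2)
s = np.sqrt(1 / (w1 + w2))

print(f"pooled estimate: {m:.0%} (95% interval {m - 1.96*s:.0%} to {m + 1.96*s:.0%})")
print(f"P(prevalence > 11%): {1 - norm.cdf(0.11, m, s):.2f}")
print(f"P(prevalence > 17%): {1 - norm.cdf(0.17, m, s):.2f}")
```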

One weakness of the study is that the control question, which should yield an estimate of essentially 0% for the statement “I do not believe that 2 + 2 is less than 13,” actually gave an estimate of 34%. The authors note this, showing that they are careful about the data:

Without a doubt, this is our most damning result (cf. Vazire, 2016). It may reflect any combination of genuine innumeracy, incomprehension of an oddly phrased item, participant inattentiveness or jesting, sampling error, or a genuine flaw in the unmatched count technique. Fortunately, we were also able to assess validity in a second way. In Sample II, the unmatched count technique generated an atheist prevalence estimate of almost exactly 100% among self-described atheists, but only 13% among all other religious identifications. It is unlikely that a genuinely invalid method would track self-reported atheism this precisely. Across two assessment attempts our validity evidence was a mixed bag. This perhaps suggests that future researchers should attempt to—as we were able in Sample II but not Sample I—include diagnostic self-reports alongside the unmatched count to assess validity. And, as the present estimates are only as strong as the method that generated them, they should be treated with some caution. In our view—given heavy social pressures to be or appear religious—the 11% atheism prevalence estimates derived solely from telephone self-reports is probably untenable. Does this imply that our most credible estimate of 26% should be uncritically accepted instead? Of course not. The present two nationally representative samples merely provide additional estimates using a different technique, and our model suggests a wide range of relatively credible estimates. We hope that future work using a variety of direct and indirect measures will provide satisfactory convergence across methods, and the present estimates are merely an initial indirect measurement data point to be considered in this ongoing scientific effort.

Finally, here’s a table breaking down atheism (both self-report and indirect estimates) by sex, politics, age, and education. We see that the prevalence of atheism isn’t that disparate among any groups except “political affiliation”, but follows the familiar pattern of more atheists among males than females; more atheism among more highly rather than less educated people; more atheism among Democrats than among Republicans (note the 0% indirect estimate for Republicans!); and no difference between Millennials and baby boomers. Self-report is always lower than indirect estimates except among Republicans, which is a mystery. (The last column gives the probability that the indirect estimates are higher than the self-reporting estimates.)

The authors discuss the wider implications on the last page of their paper, noting that we can’t extend these estimates to the rest of the world because the degree of underreporting in the direct-question technique (the only one used elsewhere) may vary, with opprobrium less in countries like Norway and Denmark, and greater in countries like Saudi Arabia. But the authors do speculate that there may be around two billion atheists worldwide, which makes the number of atheists higher than the number of Muslims (1.8 billion), but less than the number of Christians (2.4 billion). And their final paragraph gives the implications for the social acceptance of atheists:

Finally, the present results may have considerable societal implications. Preliminary research suggests that learning about how common atheists actually are reduces distrust of atheists (Gervais, 2011). Thus, obtaining accurate atheist prevalence estimates may help promote trust and tolerance of atheists—potentially 80+ million people in the USA and well over a billion worldwide.

Join the club! I’m referring to you, Andrew Sullivan!

And for dessert, you can have this new op-ed by David Leonhardt about the demonization of atheists in America (h/t: Greg Mayer):

h/t: Ginger K