The Middle East and Ireland losing their religion

February 5, 2021 • 9:30 am

Two of the last holdout areas for religion—countries and regions that have historically been resistant to nonbelief—are now becoming surprisingly secular. Those are Ireland in the West and seven countries in the Middle East—at least according to recent surveys. The stunning thing about both areas is how fast the change is coming.

Let’s take the Middle East first. There are two studies mentioned in the article below from Deutsche Welle (click on screenshot):

The article itself gives data for only Iran, but you can find data for six other countries by clicking on the article’s link to a study at The Arab Barometer (AB), described as “a research network at Princeton University and the University of Michigan.” (The sample size for that study isn’t easily discernible from the various articles about it).

First, a graph showing a striking increase in secularism across the six nations:

The change from the blue bars to the burgundy ones spans at most seven years, yet nearly every index in every country has dropped over that period, save for a few that appear unchanged. The true indices of religiosity itself—profession of belief and attendance at mosques—have fallen dramatically. And remember, this is over less than a decade.  Trust in religious leaders and in Islamist parties has also dropped.

Here’s the summary across all these countries. (Note that many Muslim countries, including those in Africa and the Far East, as well as nations like Saudi Arabia and Yemen, aren’t represented.) 

In 2013 around 51% of respondents said they trusted their religious leaders to a “great” or “medium” extent. When a comparable question was asked last year the number was down to 40%. The share of Arabs who think religious leaders should have influence over government decision-making is also steadily declining. “State religious actors are often perceived as co-opted by the regime, making citizens unlikely to trust them,” says Michael Robbins of Arab Barometer.

The share of Arabs describing themselves as “not religious” is up to 13%, from 8% in 2013. That includes nearly half of young Tunisians, a third of young Libyans, a quarter of young Algerians and a fifth of young Egyptians. But the numbers are fuzzy. Nearly half of Iraqis described themselves as “religious”, up from 39% in 2013. Yet the share who say they attend Friday prayers has fallen by nearly half, to 33%. Perhaps faith is increasingly personal, says Mr Robbins.

And some data from Iran, not represented in the survey above. Remember, Iran is a theocracy. The survey is for those over 19, and the sample size is large: over 40,000 “literate interviewees”.

An astonishing 47% have, within their lifetime, gone from being religious to nonreligious, while only 6% went in the opposite direction. As we see for almost every faith, women retain their religion more than men.  The “non-religious people” aren’t all atheists or agnostics, but instead appear to be “nones”—those with no formal affiliation to a faith. (This includes atheists and “spiritual people” as well as goddies who don’t belong to a formal church.)

I say that many are “nones” because another study in Iran, cited in the AB article, showed that 78% of those surveyed believe in God: a lot more than the 47% below who profess to be “non-religious” (of course these are different surveys and might not be comparable). Still, in this other survey, 9% claim that they’re atheists—comparable to the 10% of Americans who self-describe as atheists.

And a general remark by a religion expert whom we’ve encountered before:

The sociologist Ronald Inglehart, Lowenstein Professor of Political Science emeritus at the University of Michigan and author of the book Religious Sudden Decline [sic], has analyzed surveys of more than 100 countries, carried out from 1981-2020. Inglehart has observed that rapid secularization is not unique to a single country in the Middle East. “The rise of the so-called ‘nones,’ who do not identify with a particular faith, has been noted in Muslim majority countries as different as Iraq, Tunisia, and Morocco,” Tamimi Arab added.

Inglehart’s book, Religion’s Sudden Decline, came out January 2, so it’s brand new, and you can order it on Amazon here.

*************

It’s a pity that Grania isn’t here to comment on this article from Unherd’s new news site The Post, as she always had a good take on Catholicism in Ireland (she was, in fact, a German citizen born in South Africa). These data come from a study taken by the World Inequality Database, which I can’t access. I’ll just give the scant data for Ireland presented by David Quinn (click on screenshot):

The proportion of Irish people who say they never go to church:

2011-2016: 19%
2020:     50%

That is a huge jump!

The proportion of Irish people who regularly attend church (once a month or more often):

2011-2016: 33%
2020:     28%

This shows that the drop in Irish religiosity reflects a rise in the number of people who rarely or never go to church, not a falling-off of the regulars. Quinn reports that “just under half of Irish people were coming to church less than once a month four or five year [sic] ago and this is now just 22%. Many of those sporadic attenders have stopped coming altogether.”

Over much of the 12 years this website has been going (we started in January 2009), I’ve written posts showing the decline of religiosity in the West, predicting that it is a long-term trend that will end with religion becoming a vestigial social organ. Yes, it will always be with us, but in the future it won’t be very much with us. I thought the Middle East would be a last bastion of belief, as Islam is so deeply intertwined with politics and daily life. But that appears to be waning as well, for the Middle East is becoming Westernized in many ways, and with that come Western values and secularism (see Pinker’s Enlightenment Now for a discussion of increased secularism and humanism). This is to be applauded, except by those anti-Whigs who say that religion is good for humanity.

Quinn echoes much of this at the end of his piece, explaining why Ireland remained more religious than England and the countries of Northern Europe:

Secularisation has swept across the whole of the western world, and Ireland is part of the West. It was impossible for Ireland not to eventually be affected by social and intellectual trends elsewhere. What almost certainly delayed secularisation in Ireland is that, in the years after we gained independence, one way of showing we had shaken off British rule was by making Catholicism an integral part of our national identity. As we no longer believe it is necessary to do this, we are now shaking off the Church.

The third factor is that, as a small country it can be particularly hard to stand out from the crowd. Once, we all went to Mass. Now, below a certain age, almost no-one goes. We were a nation of nuns and priests. Now, we are becoming a people with no direct religious affiliation: a country of ‘nones’.

Amen!

h/t: Steve, Clive

Dueling essays that come to the same conclusion about wokeness

January 29, 2021 • 12:45 pm

“We are all on campus now.”
—Andrew Sullivan

Here we have two editorials purporting to say different things, but in the end reaching nearly identical conclusions.

The first, published at Persuasion (click on screenshot), is by a young writer, Sahil Handa, described by Harvard’s Kennedy School as “a rising Junior from London studying Social Studies and Philosophy with a secondary in English. At Harvard, Sahil writes an editorial column for the Crimson and is a tutor at the Harvard Writing Center. He is the co-founder of a Podcast Platform startup, called Project Valentine, and is on the board of the Centrist Society and the Gap Year Society.”

The title of Handa’s piece (below) is certainly provocative—I see it as a personal challenge!—and his conclusion seems to be this: most students at elite colleges (including Harvard) are not really “woke” in the sense of constantly enforcing “political correctness” and trying to expunge those who disagree with them. He admits that yes, this happens sometimes at Harvard, but he attributes wokeness to a vocal minority. The rest of the students simply don’t care, and don’t participate. In the end, he sees modern students as being similar to college students of all eras, especially the Sixties, when conformity meant going to “hippie protests.”  His conclusion: modern “woke” students, and those who don’t participate in the wokeness but also don’t speak up, are evincing the same “old bourgeois values” (presumably conformity). And we shouldn’t worry about them.

It’s undeniable, and Handa doesn’t deny it, that Wokeism is pervasive at Harvard. He just doesn’t see it as universal:

If you’re reading this, chances are you’ve heard of the woke mob that has taken over college campuses, and is making its way through other cultural institutions. I also suspect you aren’t particularly sympathetic to that mob. While I’m not writing as a representative of the woke, I do wish to convince you that they are not as you fear. What you’re seeing is less a dedicated mob than a self-interested blob.

I recently finished three years as a Harvard student—a “student of color,” to be precise—and I passed much of that time with the type you might have heard about in the culture wars. These were students who protested against platforming Charles Murray, the sociologist often accused of racist pseudoscience; these were students who stormed the admissions office to demand the reversal of a tenure decision; these were students who got Ronald Sullivan—civil rights lawyer who chose to represent Harvey Weinstein in court—fired as Harvard dean.

. . . . Nor are most students even involved in campus protest.

There are almost 7,000 undergraduates at Harvard, yet the tenure protest was attended by fewer than 50 students, and a few hundred signed the letters urging the administration to fire Sullivan. Fretful liberals do not pause to think of all the students who didn’t join: those who talked critically of the activists in the privacy of their dorm rooms; those who wrestled with reservations but decided not to voice them; or those who simply decided that none of it was worth their time.

But Sullivan was fired as a dean. The Harvard administration itself makes a lot of woke decisions, like punishing students for belonging to off-campus single-sex “final clubs” (probably an illegal punishment), and giving them “social justice placemats” in the dining halls to prepare them to go home for the holidays. The woke students may not be predominant, but they are vocal and loud and activist. If that’s all the administration sees and hears, then that’s what they’ll cater to.

But why aren’t the non-woke students protesting the woke ones? Well, Handa says they just don’t care: they’re too busy with their studies. But it’s more than that. As he says above, the students who have “reservations” “decide not to voice them.” Why the reticence, though?

It’s because voicing them turns them into apostates, for their college and post-college success depends on going along with the loud students—that is, acquiescing to woke culture.  The Silent Majority has, through its self-censorship, become part of woke culture. (My emphases in Handa’s excerpt below):

The true problem is this: Four years in college, battling for grades, for résumé enhancements and for the personal recommendations needed to enter the upper-middle-class—all of this produces incentives that favor self-censorship.

College campuses are different than in the Sixties, and students attend for different reasons. Young people today have less sex, less voting power and, for the first time, reduced expectations for the future. Back in the Sixties, campus activists were for free speech, and conservatives were skeptical; today, hardly anybody seems to consistently defend free speech. In 1960, 97% of students at Harvard were white, and almost all of them had places waiting in the upper class, regardless of whether they had even attended university. Today, fewer than 50% of Harvard students are white, tuition rates are 500% higher, and four years at an Ivy League college is one of the only ways to guarantee a place at the top of the meritocratic dog pile.

It would be strange if priorities at university had not changed. It would be even stranger if students had not changed as a result.

Elite education is increasingly a consumer product, which means that consumer demands—i.e. student demands—hold sway over administration actions. Yet most of those student demands are less a product of deeply understood theory than they are a product of imitation. Most students want to be well-liked, right-thinking, and spend their four years running on the treadmill that is a liberal education. Indeed, this drive for career success and social acquiescence are exactly the traits that the admissions process selects for. Even if only, say, 5% of students are deplatforming speakers and competing to be woker-than-thou, few among the remaining 95% would want to risk gaining a reputation as a bigot that could ruin their precious few years at college—and dog them on social media during job hunts and long after.

It seems to me that he does see a difference between the students of then and now. Yes, both are interested in conforming, but they conform to different values, and act in different ways. After all, they want to be “right thinking”, which means not ignoring the woke, but adopting the ideas of the woke.  And that conformity extends into life beyond college, for Harvard students become pundits and New York Times writers. This means that intellectual culture will eventually conform to the woke mold, as it’s already been doing for some time.

In the end, Handa’s argument that we should pretty much ignore Woke culture as an aberration doesn’t hold water, for he himself makes the case that many Harvard students exercise their conformity by not fighting Woke culture, and even becoming “right-thinking”.  After tacitly admitting that Wokeism is the wave of the future, which can’t be denied, he then reiterates that college Wokeism doesn’t matter. Nothing to see here folks except a war among elites, a passing fad:

The battle over wokeism is a civil war among elites, granting an easy way to signal virtue without having to do much. Meantime, the long-term issues confronting society—wage stagnation, social isolation, existential risk, demographic change, the decline of faith—are often overlooked in favor of this theater.

Wokeism does represent a few students’ true ideals. To a far greater number, it is an awkward, formulaic test. Sometimes, what might look to you like wild rebellion on campus might emanate from nothing more militant than old bourgeois values.

Perhaps Stalinism didn’t represent the ideas of every Russian, either, but by authoritarian means and suppression of dissent, all of Russia became Stalinist. The woke aren’t yet like Stalinists (though they are in statu nascendi), but even if they aren’t a majority of the young, the values of the Woke can, and will, become the dominant strain in American liberal culture. For it is the “elites” who control that culture. Even poor Joe Biden is being forced over to the woke Left because he’s being pushed by the woke people he appointed.

***********

Michael Lind has what I think is a more thoughtful piece at Tablet, which lately has had some really good writing. (They’ve been doing good reporting for a while; remember when they exposed the anti-Semitism infecting the leaders of the Women’s March?). Lind is identified by Wikipedia as “an American writer and academic. He has explained and defended the tradition of American democratic nationalism in a number of books, beginning with The Next American Nation (1995). He is currently a professor at the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin.”

Lind’s thesis, and I’ll be brief, is that the nature of American elitism has changed, and has become more woke. It used to be parochial, with each section of the country having its own criteria for belonging to the elite (e.g., attending the best regional rather than national colleges). Now, he says, we have a “single, increasingly homogeneous national oligarchy, with the same accent, manners, values, and educational backgrounds from Boston to Austin and San Francisco to New York and Atlanta.” He sees this as a significant social change: a “truly epochal development.”

Click on the screenshot to read his longer piece:

In some ways, avers Lind, society is more egalitarian than ever; what he means is that there is less overt bigotry and there are fewer impediments to success for minorities. And he’s right:

Compared with previous American elites, the emerging American oligarchy is open and meritocratic and free of most glaring forms of racial and ethnic bias. As recently as the 1970s, an acquaintance of mine who worked for a major Northeastern bank had to disguise the fact of his Irish ancestry from the bank’s WASP partners. No longer. Elite banks and businesses are desperate to prove their commitment to diversity. At the moment Wall Street and Silicon Valley are disproportionately white and Asian American, but this reflects the relatively low socioeconomic status of many Black and Hispanic Americans, a status shared by the Scots Irish white poor in greater Appalachia (who are left out of “diversity and inclusion” efforts because of their “white privilege”). Immigrants from Africa and South America (as opposed to Mexico and Central America) tend to be from professional class backgrounds and to be better educated and more affluent than white Americans on average—which explains why Harvard uses rich African immigrants to meet its informal Black quota, although the purpose of affirmative action was supposed to be to help the American descendants of slaves (ADOS). According to Pew, the richest groups in the United States by religion are Episcopalian, Jewish, and Hindu (wealthy “seculars” may be disproportionately East Asian American, though the data on this point is not clear).

Membership in the multiracial, post-ethnic national overclass depends chiefly on graduation with a diploma—preferably a graduate or professional degree—from an Ivy League school or a selective state university, which makes the Ivy League the new social register. But a diploma from the Ivy League or a top-ranked state university by itself is not sufficient for admission to the new national overclass. Like all ruling classes, the new American overclass uses cues like dialect, religion, and values to distinguish insiders from outsiders.

And that’s where Wokeness comes in. One has to have the right religion (not evangelical), dialect (not southern) and values (Woke ones!):

More and more Americans are figuring out that “wokeness” functions in the new, centralized American elite as a device to exclude working-class Americans of all races, along with backward remnants of the old regional elites. In effect, the new national oligarchy changes the codes and the passwords every six months or so, and notifies its members through the universities and the prestige media and Twitter. America’s working-class majority of all races pays far less attention than the elite to the media, and is highly unlikely to have a kid at Harvard or Yale to clue them in. And non-college-educated Americans spend very little time on Facebook and Twitter, the latter of which they are unlikely to be able to identify—which, among other things, proves the idiocy of the “Russiagate” theory that Vladimir Putin brainwashed white working-class Americans into voting for Trump by memes in social media which they are the least likely American voters to see.

Constantly replacing old terms with new terms known only to the oligarchs is a brilliant strategy of social exclusion. The rationale is supposed to be that this shows greater respect for particular groups. But there was no grassroots working-class movement among Black Americans demanding the use of “enslaved persons” instead of “slaves” and the overwhelming majority of Americans of Latin American descent—a wildly homogenizing category created by the U.S. Census Bureau—reject the weird term “Latinx.” Woke speech is simply a ruling-class dialect, which must be updated frequently to keep the lower orders from breaking the code and successfully imitating their betters.

I think Lind is onto something here, though I’m not sure I agree 100%. This morning I had an “animated discussion” with a white friend who insisted that there was nothing wrong with using the word “Negro”. After all, he said, there’s the “United Negro College Fund.” And I said, “Yeah, and there’s also the National Association for the Advancement of Colored People, but you better not say ‘colored people’ instead of ‘people of color’!” In fact, the term “Negro” would be widely seen as racist now, though in the Sixties it wasn’t, and was used frequently by Dr. King, who almost never used the n-word in public. “Negro” was simply the going term for African-Americans then, but now it’s “people of color” or, better yet, “BIPOCs”. And that will change too. “Gay” has now become a veritable alphabet of initials that always ends in a “+”. “Latinx” isn’t used by Hispanics, but by white people and the media. It’s an elitist thing, as Lind maintains.

But whether this terminology—and its need to constantly evolve, 1984-like—is a way of leveraging and solidifying cultural power, well, I’m not sure I agree. Weigh in below.

Should Ph.D.s call themselves “doctor” in everyday life?

December 13, 2020 • 1:00 pm

UPDATE: At the libertarian website Reason, legal scholar Eugene Volokh has a different take, based partly on what he sees as the overly lax and non-scholarly nature of Jill Biden’s Ed.D.

_____________________

This week’s kerfuffle involves a writer at the Wall Street Journal, Joseph Epstein, taking Jill Biden to task for calling herself “Dr. Biden”—and allowing Joe Biden’s campaign to call her that—when her doctorate was in education (she has two master’s degrees as well). In other words, she holds an academic doctorate, not a medical one. In the article below (click on screenshot, or make a judicious inquiry if you can’t access it), Epstein argues that only medical doctors should call themselves “doctor”, and advises Jill Biden to ditch her title.


I have to say that Epstein’s article, which has been universally attacked for being sexist and misogynistic, is indeed patronizing and condescending (Epstein has an honorary doctorate, but not an “earned” one). I’d be loath to call it sexist on those grounds alone, but the tone of the article, and the words he uses, do seem sexist. Here are two excerpts:

Madame First Lady—Mrs. Biden—Jill—kiddo: a bit of advice on what may seem like a small but I think is a not unimportant matter. Any chance you might drop the “Dr.” before your name? “Dr. Jill Biden ” sounds and feels fraudulent, not to say a touch comic. Your degree is, I believe, an Ed.D., a doctor of education, earned at the University of Delaware through a dissertation with the unpromising title “Student Retention at the Community College Level: Meeting Students’ Needs.” A wise man once said that no one should call himself “Dr.” unless he has delivered a child. Think about it, Dr. Jill, and forthwith drop the doc.

As for your Ed.D., Madame First Lady, hard-earned though it may have been, please consider stowing it, at least in public, at least for now. Forget the small thrill of being Dr. Jill, and settle for the larger thrill of living for the next four years in the best public housing in the world as First Lady Jill Biden.

The use of the word “kiddo” and the reference to her as “Dr. Jill” do seem sexist, though of course there’s “Dr. Phil” (Ph.D., clinical psychology) and a whole host of other doctors, including M.D. medical experts on the evening news, who are called by their first name. (“Thanks, Dr. Tim”.) Those are usually terms of affection, though, while “Dr. Jill” is clearly not meant affectionately. And why the denigration of the title of her thesis? Finally—“kiddo”? Fuggedabout it. The undoubted truth that women’s credentials have historically been impugned also would lead one to see Epstein’s piece as falling into that tradition.

I sure as hell wouldn’t have written that article, and, as somebody suggested in the pile-on, would Epstein have written it about a man? Where’s his critique of “Dr. Phil”?

The fracas is described in a piece by Matt Cannon in Newsweek and the piece below in the New York Times. I haven’t been able to find a single article about Epstein’s op-ed piece that doesn’t damn it to hell for sexism, and, in fact, although he was a long-term honorary emeritus lecturer at Northwestern, that university criticized his piece (official statement: “Northwestern is firmly committed to equity, diversity and inclusion, and strongly disagrees with Mr. Epstein’s misogynistic views”). His picture has also been removed from Northwestern’s website, showing that he’s toast.  Were Epstein at the University of Chicago, my school wouldn’t have made any official statement, as it’s not 100% clear that his piece was motivated by misogyny, much as the article suggests it was.

But that leaves the question “should anyone with a Ph.D. call themselves ‘doctor'”? My answer would be “it’s up to them.”

But I have to say that I have never been able to call myself “Doctor Coyne” except as a humorous remark or in very rare situations that I can’t even remember. I will allow other people to call me “Doctor Coyne,” but as soon as I have a relationship with them, the “Doctor” gets dropped for “Jerry.” My undergraduates would usually call me “Professor Coyne”, or sometimes “Doctor Coyne,” and that was okay, for being on a first-name basis with undergraduates would efface the mentor/student distance that is useful when teaching. But to my grad students I was always “Jerry.”

It is true that I worked as hard as, or even harder than, medical students to earn the right to be called “Doctor”, taking five years of seven-days-a-week labor to get it, but somehow I don’t feel that I should get a lifetime honorific for that. I got a Ph.D. so I could become a professional evolutionist, not to command respect from people, many of whom might mistakenly think I was a medical doctor.  The New York Times quotes Miss Manners here:

Judith Martin, better known as the columnist Miss Manners, said her father, who had a Ph.D. in economics, insisted on not being called Dr. and implored his fiancée, Ms. Martin’s mother, to print new wedding invitations after the first version included the title.

“As my father used to say, ‘I’m not the kind of doctor who does anybody any good,’” Ms. Martin said in an interview on Saturday. “He didn’t feel it was dignified. I am well aware that this is a form of reverse snobbery.”

Still, Ms. Martin said, “I don’t tell people what to call themselves and I’m aware that women often have trouble with people who don’t respect their credentials.”

I’m pretty much on board with both her and her father here, though I’d take issue with saying my refusal to call myself “Doctor Coyne” is reverse snobbery. Rather, it’s part of my lifelong desire not to be seen as better than other people just because I got a fancy education. I remember that when I got my first job at the University of Maryland, I was given an empty lab on the second floor of the Zoology Building. But in it was a box containing all the application folders for everyone who had applied for the job I got. After a few days of resisting, I peeked into my own folder to see my letters of recommendation. And I’ll always remember Dick Lewontin’s letter, which, though highly positive, added something like this: “If Jerry has any faults, it is that he is too self-denigrating, always underselling himself.”  Well, that may be true, but it’s better to undersell yourself than oversell yourself! I’ve always detested the pomposity of accomplished academics. Some think it lends cachet to their books (even “trade books”) to use “Dr.” in the title. More power to them, but I could never bring myself to do that.

One other interesting point: the AP Stylebook agrees with Epstein about the use of “Dr.”  According to the Newsweek piece:

The AP stylebook, a writing guide used by major U.S. publications including Newsweek, also suggests that the term doctor should not be used by those with academic doctoral degrees.

Its latest edition reads: “Use Dr. in first reference as a formal title before the name of an individual who holds a doctor of dental surgery, doctor of medicine, doctor of optometry, doctor of osteopathic medicine, doctor of podiatric medicine, or doctor of veterinary medicine.”

It adds: “Do not use Dr. before the names of individuals who hold other types of doctoral degrees.”

So you could say Epstein was adhering to that rule, but the tone of his piece is snarky and condescending. The opprobrium he’s earned for it is largely deserved.

I suppose I adhere to the AP dictum on this website, too, as it seems weird to call my colleagues “Dr.”, but less weird to call medical doctors “Dr. X”.

(Epstein also denigrates honorary doctorates, for they’re not markers of scholarly achievement—except at the University of Chicago, which may be the only school in the U.S. that confers honorary degrees only on scholars—never to actors, cartoonists, sports figures, and so on. But I don’t know anybody who calls themselves “Dr.” with only an honorary doctorate.)

So if Jill Biden wants to be called “Dr. Biden,” it’s churlish to refuse—after all, she did earn the right to use it. And it’s a matter of simple civility to address people how they want to be addressed.

I have only one caveat here: nobody—be they medical doctors or Ph.D.s—should ever put “Dr.” before their names on their bank checks. That’s where I draw the line. It looks like a move of pompous one-upsmanship—like you’re trying to lord it over salespeople, cashiers, and bank tellers.

Andrew Sullivan: The genetic underpinnings of IQ mean we shouldn’t value it so much, that we should ditch the meritocracy, and that we should become more of a communist society

September 12, 2020 • 11:30 am

Andrew Sullivan has devoted a lot of the last two editions of The Weekly Dish to the genetics of intelligence, perhaps because he’s taken a lot of flak for supposedly touting The Bell Curve and the genetic underpinnings of IQ.  Now I haven’t read The Bell Curve, nor the many posts Sullivan’s devoted to the genetics of intelligence (see the long list here), but he’s clearly been on the defensive about his record, which, as far as I can see, does emphasize the genetic component of intelligence. But there’s nothing all that wrong with that: a big genetic component of IQ is something that all geneticists save Very Woke Ones accept. But as I haven’t read his posts, I can neither defend nor attack him on his specific conclusions.

I can, however, briefly discuss this week’s post, which is an explication and defense of a new book by Freddie DeBoer, The Cult of Smart. (Note: I haven’t read the book, either, as it’s just out.) You can read Sullivan’s piece by clicking on the screenshot below (I think it’s still free for the time being):

The Amazon summary of the book pretty much mirrors what Sullivan says about it:

. . . no one acknowledges a scientifically-proven fact that we all understand intuitively: academic potential varies between individuals, and cannot be dramatically improved. In The Cult of Smart, educator and outspoken leftist Fredrik deBoer exposes this omission as the central flaw of our entire society, which has created and perpetuated an unjust class structure based on intellectual ability.

Since cognitive talent varies from person to person, our education system can never create equal opportunity for all. Instead, it teaches our children that hierarchy and competition are natural, and that human value should be based on intelligence. These ideas are counter to everything that the left believes, but until they acknowledge the existence of individual cognitive differences, progressives remain complicit in keeping the status quo in place.

There are several points to “unpack” here, as the PoMos say. Here is what Sullivan takes from the book, and appears to agree with:

1.) Intelligence is largely genetic.

2.) Because of that, intellectual abilities “cannot be dramatically improved”.

3.) Because high intelligence is rewarded in American society, people who are smarter are better off, yet they don’t deserve to be because, after all, they are simply the winners in a random Mendelian lottery of genes fostering high IQ (I will take IQ as the relevant measure of intelligence, which it seems to be for most people, including Sullivan).

4.) The meritocracy is thus unfair, and we need to fix it.

5.) We can do that by adopting a version of communism, whereby those who benefit from the genetic lottery get taxed at a very high rate, redistributing the wealth that accrues to them from their smarts. According to DeBoer via Sullivan,

For DeBoer, that means ending meritocracy — for “what could be crueler than an actual meritocracy, a meritocracy fulfilled?” It means a revolutionary transformation in which there are no social or cultural rewards for higher intelligence, no higher after-tax income for the brainy, and in which education, with looser standards, is provided for everyone on demand — for the sake of nothing but itself. DeBoer believes the smart will do fine under any system, and don’t need to be incentivized — and their disproportionate gains in our increasingly knowledge-based economy can simply be redistributed to everyone else. In fact, the transformation in the economic rewards of intelligence — they keep increasing at an alarming rate as we leave physical labor behind — is not just not a problem, it is, in fact, what will make human happiness finally possible.

If early 20th Century Russia was insufficiently developed for communism, in other words, America today is ideal. . .

Sullivan adds that the moral worth of smart people is no higher than that of people like supermarket cashiers, trash collectors, or nurses. (I agree, but I’m not sure that smart people are really seen as being more morally worthy. They are seen as being more deserving of financial rewards.)

6.) Sullivan says that his own admitted high intelligence hasn’t been that good for him, and he doesn’t see it as a virtue:

For me, intelligence is a curse as well as a blessing — and it has as much salience to my own sense of moral worth as my blood-type. In many ways, I revere those with less of it, whose different skills — practical, human, imaginative — make the world every day a tangibly better place for others, where mine do not. Being smart doesn’t make you happy; it can inhibit your sociability; it can cut you off from others; it can generate a lifetime of insecurity; it is correlated with mood disorders and anxiety. And yet the system we live in was almost designed for someone like me.

This smacks a bit of humblebragging, but I’ll take it at face value. It’s still quite odd, though, to see a centrist like Sullivan, once a conservative, come out in favor of communism and radical redistribution of wealth. So be it. But do his arguments make sense?

Now Sullivan’s emphasis on the genetic basis of intelligence is clearly part of his attack on the extreme Left, which dismisses hereditarianism because it’s said to imply (falsely) that differences between groups, like blacks and whites, are based on genetic differences. Hereditarianism is also said to imply (again falsely) that traits like intellectual achievement cannot be affected by environmental interventions (like learning). Here Andrew is right: Blank-Slateism is the philosophy of the extreme Left, and it’s misguided in several ways. Read Pinker’s book The Blank Slate if you want a long and cogent argument about the importance of genetics.

But there are some flaws, or potential flaws, in Sullivan’s argument, which I take to be points 1-5 above.

First, intelligence is largely genetic, but not completely genetic. There is no way for a given person to determine what proportion of their IQ is attributable to genes and how much to environment or to the interaction between the two: that question doesn’t even make sense. But what we can estimate is the proportion of variation of IQ among people in a population that is due to variation in their genes. This figure is known as the heritability of IQ, and can be calculated (if you have the right data) for any trait. Heritability ranges from 0 (all variation we see in the trait is environmental, with no component due to genetics) to 1 (or 100%), with all the observed variation in the trait being due to variation in genes. (Eye color is largely at this end of the scale.)

A reasonable value for the heritability of IQ in a white population is around 0.6, so about 60% of the variation we see in that population is due to variation in genes, and the other 40% to different environments experienced by different people as well as to the differential interaction between their genes and their environments. That means, first of all, that an appreciable proportion of variation in intelligence is due to variations in people’s environments. And that means that while the IQ of a person doesn’t change much over time, if you let people develop in different environments you can change their IQ in different ways—up or down. IQ is not something that is unaffected by the environment.
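
Just to make that concrete, here’s a minimal arithmetic sketch of what a heritability of 0.6 means in terms of variance. It assumes only the conventional IQ scale (a standard deviation of 15 points) plus the 0.6 figure above; none of these numbers come from the surveys or books under discussion, and the snippet is purely illustrative.

# A toy partition of IQ variance under a heritability (h2) of 0.6.
# Assumes the conventional IQ scale: standard deviation 15, so total variance = 225.

total_variance = 15 ** 2        # 225
h2 = 0.6                        # share of variance attributable to genetic variation

genetic_variance = h2 * total_variance                      # 135
environmental_variance = total_variance - genetic_variance  # 90: environment plus gene-environment interaction

print(f"Genetic variance:       {genetic_variance:.0f}")
print(f"Environmental variance: {environmental_variance:.0f}")

# Caveat: heritability describes variation among people in a population;
# it says nothing about what fraction of any one person's IQ is "due to genes."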

Related to that is the idea that a person’s IQ is not fixed at birth by their genes, but can be changed by rearing them in different environments, so it’s not really valid to conclude (at least from the summary above) that “academic potential cannot be dramatically improved”. Indeed, Sullivan’s summary of DeBoer’s thesis is that the difference in IQ between blacks and whites (an average of 15 points, or one standard deviation) is not due to genes, but to different environments faced by blacks and whites:

DeBoer doesn’t explain it as a factor of class — he notes the IQ racial gap persists even when removing socio-economic status from the equation. Nor does he ascribe it to differences in family structure — because parenting is not that important. He cites rather exposure to lead, greater disciplinary punishment for black kids, the higher likelihood of being arrested, the stress of living in a crime-dominated environment, the deep and deadening psychological toll of pervasive racism, and so on: “white supremacy touches on so many aspects of American life that it’s irresponsible to believe we have adequately controlled for it in our investigations of the racial achievement gap.”

Every factor cited here is an environmental factor, not a genetic one. And if those factors can add up to lowering your IQ by 15 points, on what basis does DeBoer conclude (with Sullivan, I think) that you cannot improve IQ or academic performance by environmental intervention? Fifteen points is indeed a “dramatic improvement”, which, according to DeBoer, we’d get by simply letting black kids grow up in the environment of white people.  (I note here that I don’t know how much, if any, of that 15-point difference reflects genetic versus environmental differences; what I’m doing is simply asserting that even DeBoer notes that you can change IQ a lot by changing environments.)

Moreover, what you do with your intelligence can be further affected by the environment. If you’re lazy and don’t want to apply yourself, a big IQ isn’t necessarily going to make you successful in society. So there is room for further improvement of people through proper education and the instilling of motivation. This doesn’t mean that IQ isn’t important as a correlate of “success” (however it’s measured) in American society—just that environmental factors, including education and upbringing, are also quite important.

What about genetic determinism and the meritocracy? It’s likely that many other factors that lead to success in the U.S. have a high heritability as well. Musical ability may be one of these, and therefore those who get rich not because they have high IQs but because they can make good music that sells also have an “unfair advantage”. What about good looks? Facial characteristics are highly heritable, and insofar as good looks can give you a leg up as a model or an actor, that too is an unfair genetic win. (I think there are data showing that better-looking people are on average more successful.) In fact, since nobody is “responsible” for either their genes or their environments, as a determinist I think that nobody really “deserves” what they get, since nobody chooses to be successful or a failure. Society simply rewards those people who have certain traits, and punishes those who have other traits. With that I don’t have much quarrel, except about the traits that are deemed reward-worthy (viz., the Kardashians).

This means, if you take Sullivan and DeBoer seriously, that we must eliminate not just the meritocracy for intelligence but the meritocracy for everything: musical ability, good looks, athletic ability, and so on. In other words, everybody who is successful should be taxed to the extent that, after redistribution, everyone in society gets the same amount of money and the same goods. (It’s not clear from Sullivan’s piece to what extent things should be equalized, but if you’re a determinist and buy his argument, everyone should be on the same level playing field.)

After all, if “the smart don’t need to be incentivized”, why does anybody? The answer, of course, is that the smart do need to be incentivized, as does everyone else. The failure of purely communist societies to achieve parity with capitalistic ones already shows that. (I’m not pushing here for pure capitalism: I like a capitalistic/socialistic hybrid, as in Scandinavia.)  And I wonder how much of Sullivan’s $500,000 income he’d be willing to redistribute.

If you think I’m exaggerating Sullivan’s approbation of communism, at least in theory, here’s how he ends his piece, referring to his uneducated grandmother who cleaned houses for a living:

My big brain, I realized, was as much an impediment to living well as it was an advantage. It was a bane and a blessing. It simply never occurred to me that higher intelligence was in any way connected to moral worth or happiness.

In fact, I saw the opposite. I still do. I don’t believe that a communist revolution will bring forward the day when someone like my grandmother could be valued in society and rewarded as deeply as she should have been. But I believe a moral revolution in this materialist, competitive, emptying rat-race of smarts is long overdue. It could come from the left or the right. Or it could come from a spiritual and religious revival. Either way, Freddie DeBoer and this little book are part of the solution to the unfairness and cruelty of it all. If, of course, there is one.

Let’s forget about the “spiritual and religious revival” (I wrote about that before), and realize that what we have here is a call for material equality, even if people aren’t morally valued as the same. And why should we empty the rat-race just of smarts? Why not empty it of everything that brings differential rewards, like writing a well-remunerated blog? In the end, Sullivan’s dislike of extreme leftism and its blank-slate ideology has, ironically, driven him to propose a society very like communism.

Are people becoming more talkative during the pandemic?

July 28, 2020 • 8:15 am

I’ve noticed in the last couple of months that people I talk to, either over the phone or in person, seem to have become much more loquacious, to the point where  it seems that 90% or more of the conversational airtime is taken up by one person’s words. (To be sure, I’m often laconic.) Now I haven’t quantified this, though I could do so, at least over the phone with a stopwatch. But subjectively, it seems to me a real temporal change.

The first thing to determine is whether the subjective change is an objective change. To determine that, I would have to have timed participation in conversations over the last year or so, and compared the conversational “pie” before and after lockdown. And I don’t have that data. 

In the absence of hard data, it’s possible that I’ve simply become more peevish and impatient, so that it only seems that people are monopolizing conversations more. And indeed, I think I have become more peevish, though I think many people have changed in this way as well.

But let’s assume it’s real: that the proportion of conversational time in a two-person chat has become more unequal since March.  If that’s the case, why?

The only explanation I can imagine is that people who are more socially isolated have become more eager to talk, and that’s manifested in a higher degree of conversational dominance. Of course if two such chatty people meet, it could be a festival of interruptions and “talking over,” but I tend to become monosyllabic, and this is exacerbated when I am peevish.  My philosophy has always been that in a conversation, you learn nothing by talking but only by listening.

At any rate, am I imagining this or have others noticed it?

A world survey: Do we need God to be moral?

July 24, 2020 • 8:45 am

A new study by the Pew organization (click on screenshot below or get the full pdf here) surveyed 38,436 people in 34 countries across the globe, asking them questions about how important God or religion is to them and—today’s topic—whether you really need God to be moral.  The methods included both face-to-face and phone surveys.

The overall results aren’t that surprising: more religious countries, and more religious people within countries, think that “belief in God is necessary to be moral and have good values”, while richer countries (which are also less religious) tend to harbor respondents who don’t think faith is necessary for morality. And the proportion of those who see God as important in this respect is waning over time in most of Western Europe, though growing in Russia, Bulgaria, Japan, and Ukraine.

The overall results show a pretty even division across the globe, though religion plays an important role in most people’s lives. But these results aren’t that informative given the observed variation across countries (see below):

Below is a plot showing the variation across the surveyed countries. Look at the first two lines showing a substantial difference between the U.S. and the more secular Canada.

Overall, I would have thought that even religious people wouldn’t assert that you need God to be moral, mainly because there’s so much evidence that nonbelievers are moral. In fact, the most secular countries in the world—those in Scandinavia—could be construed as being more moral than many of the more religious countries, like Islamic countries of the Middle East. Further, the Euthyphro argument, which shows that our sense of morality must be prior to belief in God (unless you believe in Divine Command theory), disposes of the we-need-God-to-be-moral claim. But of course few people have thought the issue through that far.

Muslim and Catholic (or devout Christian) countries show the strongest belief in God as a necessity for morality. Ratings of 90% or above are seen in the Philippines, Indonesia, Kenya, and Nigeria.

Three more plots. The first one shows the familiar pattern of richer countries adhering less to religious dicta than poorer ones. In this case there are multiple confounding factors, for “belief in God is important for morality” is surely itself highly correlated with simple “belief in God.” The relationship here is very strong. My own view is that of Marx: countries where you are in bad shape and can’t get help from the government tend to be those where people find hope and solace in religion.

This is also true within countries: there’s a consistent pattern in the surveyed nations of people with higher income being less likely to see God as necessary for morality (and of course the higher-income people are less likely to be religious in general).

As expected, people with more education tend to connect morality with God to a lesser extent. Again, this is probably because of a negative relationship between education and religiosity:

In the comments below, reader Eric said I may have “buried the lede” by neglecting the rather large drop, between 2002 and 2019, in the proportion of Americans who think God is necessary for morality. This is part of the increasing secularization of the U.S.:

 

Finally, there’s a plot showing the variation among countries on the general importance of religion. Western Europe, Australia, South Korea, and Japan lead the pack for secularism, while Catholic, Muslim, and African Christian countries are those seeing religion as more important. That’s no surprise:

In truth, it’s depressing that nearly half the world’s people fail to see that atheists can be moral, a fact that should dispose of the “God-is-necessary” hypothesis. But one could argue that for many religious people, “morality” consists largely of religious dictates: what you eat, who you sleep with and how, how you feel about gays and women, and so on. So, for example, Catholics and Muslims might see the free-loving and egalitarian Scandinavians as immoral.

The Purity Posse pursues Pinker

July 5, 2020 • 12:30 pm

The Woke are after Pinker again, and if he’s called a racist and misogynist, as he is in this latest attempt to demonize him, then nobody is safe. After all, Pinker is a liberal Democrat who’s donated a lot of dosh to the Democratic Party, and relentlessly preaches a message of moral, material, and “well-being” progress that’s been attained through reason and adherence to Enlightenment values. But that sermon alone is enough to render him an Unperson, for the Woke prize narrative and “lived experience” over data, denigrate reason, and absolutely despise the Enlightenment.

The link to the document in question, “Open Letter to the Linguistic Society of America,”  was tweeted yesterday by Pinker’s fellow linguist John McWhorter, who clearly dislikes the letter. And, indeed, the letter is worthy of Stalinism in its distortion of the facts in trying to damage the career of an opponent. At least they don’t call for Pinker to be shot in the cellars of the Lubyanka!

After I read the letter and decided to respond to it, I contacted Steve, asking him questions, and he gave me permission to quote some of his answers, which were sent in an email. (Steve, by the way, has never asked me to defend him; I do so in this case because of the mendacity of the letter.)

The letter, on Google Documents, is accumulating signatories—up to 432 the last time I looked. You can access it in McWhorter’s tweet above, or by clicking on the letter’s first paragraph below:

Many of the signatories are grad students and undergrads, members of the Linguistic Society of America (LSA), which may explain why the vast bulk of the criticism leveled at Pinker comes from his social media: all of it tweets from Twitter. The letter shows no familiarity with Pinker’s work, and takes statements out of context in a way that, with the merest checking, can be seen to be duplicitous. In the end, the authors confect a mess of links that, the signatories say, indict Pinker for racism, misogyny, and blindness to questions of social justice. As the authors say:

Though no doubt related, we set aside questions of Dr. Pinker’s tendency to move in the proximity of what The Guardian called a revival of “scientific racism”, his public support for David Brooks (who has been argued to be a proponent of “gender essentialism”), his expert testimonial in favor of Jeffrey Epstein (which Dr. Pinker now regrets), or his dubious past stances on rape and feminism. Nor are we concerned with Dr. Pinker’s academic contributions as a linguist, psychologist and cognitive scientist. Instead, we aim to show here Dr. Pinker as a public figure has a pattern of drowning out the voices of people suffering from racist and sexist violence, in particular in the immediate aftermath of violent acts and/or protests against the systems that created them.

In truth, Pinker as a public figure is hard to distinguish from Pinker the academic, for in both academia and in public he conveys the same message, one of progress (albeit with setbacks) and material and moral improvement, always using data to support this upward-bending arc of morality. And in both spheres he emphasizes the importance of secularism and reason as the best—indeed, the only—way to attain this progress. After indicting Pinker based on five tweets and a single word in one of his books, the signatories call for him to be stripped of his honors as a distinguished LSA Fellow and as one of the LSA’s media experts.

So what is the evidence that Pinker is a miscreant and a racist? I’ll go through the six accusations and try not to be tedious.

The first is about blacks being shot disproportionately to their numbers in the population, which, as I’ve written about recently, happens to be true. Emphases in the numbered bits are mine:

1.) In 2015, Dr. Pinker tweeted “Police don’t shoot blacks disproportionately”, linking to a New York Times article by Sendhil Mullainathan.


Let the record show that Dr. Pinker draws this conclusion from an article that contains the following quote: “The data is unequivocal. Police killings are a race problem: African-Americans are being killed disproportionately and by a wide margin.” (original emphasis) We believe this shows that Dr. Pinker is willing to make dishonest claims in order to obfuscate the role of systemic racism in police violence.

Actually, Pinker’s tweet was an accurate summary of the article. Have a look at the quote in its entirety, reading on after the first extracted sentence.

The data is unequivocal. Police killings are a race problem: African-Americans are being killed disproportionately and by a wide margin. And police bias may be responsible. But this data does not prove that biased police officers are more likely to shoot blacks in any given encounter.

Instead, there is another possibility: It is simply that — for reasons that may well include police bias — African-Americans have a very large number of encounters with police officers. Every police encounter contains a risk: The officer might be poorly trained, might act with malice or simply make a mistake, and civilians might do something that is perceived as a threat. The omnipresence of guns exaggerates all these risks.

Such risks exist for people of any race — after all, many people killed by police officers were not black. But having more encounters with police officers, even with officers entirely free of racial bias, can create a greater risk of a fatal shooting.

Arrest data lets us measure this possibility. For the entire country, 28.9 percent of arrestees were African-American. This number is not very different from the 31.8 percent of police-shooting victims who were African-Americans. If police discrimination were a big factor in the actual killings, we would have expected a larger gap between the arrest rate and the police-killing rate.

This in turn suggests that removing police racial bias will have little effect on the killing rate. Suppose each arrest creates an equal risk of shooting for both African-Americans and whites. In that case, with the current arrest rate, 28.9 percent of all those killed by police officers would still be African-American. This is only slightly smaller than the 31.8 percent of killings we actually see, and it is much greater than the 13.2 percent level of African-Americans in the overall population.

The signatories, not Pinker, stand guilty of dishonest quote-mining. I would argue that the cherry-picking here is intellectually dishonest—and deliberate.
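
To see the logic of the quoted passage in one place, here is a minimal sketch of Mullainathan’s arithmetic. The three percentages are taken straight from the excerpt above; the comparison is my paraphrase of his argument, not anything from Pinker or from the open letter.

# The three figures quoted from the Mullainathan article above.
arrest_share_black = 0.289      # share of all U.S. arrestees who are African-American
shooting_share_black = 0.318    # share of police-shooting victims who are African-American
population_share_black = 0.132  # African-American share of the U.S. population

# If every arrest carried the same risk of a fatal shooting regardless of race,
# the expected Black share of shooting victims would simply equal the arrest share.
expected_if_equal_risk = arrest_share_black

print(f"Expected share under equal per-arrest risk: {expected_if_equal_risk:.1%}")   # 28.9%
print(f"Observed share of shooting victims:         {shooting_share_black:.1%}")     # 31.8%
print(f"Share of the overall U.S. population:       {population_share_black:.1%}")   # 13.2%

# Mullainathan's point: the observed 31.8% is close to the 28.9% arrest share but far above
# the 13.2% population share, which locates the disparity upstream of the lethal encounter
# (in who ends up interacting with police), not in the shooting decision itself.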

2.) In 2017, when nearly 1000 people died at the hands of the police, the issue of anti-black police violence in particular was again widely discussed in the media. Dr. Pinker moved to dismiss the genuine concerns about the disproportionate killings of Black people at the hands of law enforcement by employing an “all lives matter” trope (we refer to Degen, Leigh, Waldon & Mengesha 2020 for a linguistic explanation of the trope’s harmful effects) that is eerily reminiscent of a “both-sides” rhetoric, all while explicitly claiming that a focus on race is a distraction. Once again, this clearly demonstrates Dr. Pinker’s willingness to dismiss and downplay racist violence, regardless of any evidence.

In light of the recent police killings of blacks, I’m pretty sure that this tweet would look worse today than it did in 2017. But the article Pinker is referring to is about general improvements in police departments, not ways to make cops less racist. It does note that there’s racism in police killings, but says that the fix, as Pinker notes, comes from general improvements in policing (along the lines of general improvements in airline safety), not by focusing on racism itself:

Police violence is tangled up with racism and systemic injustice. We desperately need to do more to address that, foremost by shoring up the criminal-justice system so that it holds police officers accountable when they kill. But it’s also true that deadly mistakes are going to happen when police officers engage in millions of potentially dangerous procedures a year. What aviation teaches us is that it should be possible to “accident proof” police work, if only we are willing to admit when mistakes are made.

. . . The routine traffic stop, like the one that killed Mr. Bell’s son, is especially in need of redesign because it contains so many potential failure points that cause confusion and violence. In the computer science department at the University of Florida, a team of students — all African-American women — have developed a technology that they hope might make these encounters far safer.

. . .How can we fix this system that puts civilians and the police officers who stop them at risk? The obvious solution is to take the officers — and their guns — out of the picture whenever possible.

The technology developed by the African-American women has nothing to do with race, but limns general principles that should be followed in all traffic stops. Now, given the recent events and protests, I doubt Steve would post the same tweet today, but his summary of the article is not at all an “all lives matter” trope. Remember, there’s still no good evidence that the killing of black men by police reflects “systemic racism” in police departments; that claim needs to be investigated, but in the meantime perhaps some general tactical changes should be considered as well.

I asked Steve to respond to the claim that this is an “all lives matter trope.” Here’s what he emailed back (quoted with permission):

Linguists, of all people, should understand the difference between a trope or collocation, such as the slogan “All lives matter,” and the proposition that all lives matter. (Is someone prepared to argue that some lives don’t matter?) And linguists, of all people, should understand the difference between a turn in the context of a conversational exchange and a sentence that expresses an idea. It’s true that if someone were to retort “All lives matter” in direct response to “Black lives matter,” they’d be making a statement that downplays the racism and other harms suffered by African Americans. But that is different from asking questions about whom police kill, being open to evidence on the answer, and seeking to reduce the number of innocent people killed by the police of all races. The fact is that Mullainathan and four other research reports have found the same thing: while there’s strong evidence that African Americans are disproportionately harassed, frisked, and manhandled by the police (so racism among the police is a genuine problem), there’s no evidence that they are killed more, holding rates of dangerous encounters constant. (References below.) As Mullainathan notes, this doesn’t downplay racism, but it pinpoints its effects: in drug laws, poverty, housing segregation, and other contributors to being in dangerous situations, but not in the behavior of police in lethal encounters. And it has implications for how to reduce police killings, which is what we should all care about: it explains the finding that race-specific measures like training police in implicit bias and hiring more minority police have no effect, while across-the-board measures such as de-escalation training, demilitarization, changing police culture, and increasing accountability do have an effect.

Fryer, R. G. (2016). An Empirical Analysis of Racial Differences in Police Use of Force. National Bureau of Economic Research Working Papers (22099), 1-63.

Fryer, R. G. (forthcoming). Reconciling Results on Racial Differences in Police Shootings. American Economic Review (Papers and Proceedings).

Goff, P. A., Lloyd, T., Geller, A., Raphael, S., & Glaser, J. (2016). The science of justice: Race, arrests, and police use of force. Los Angeles: Center for Policing Equity, UCLA, Table 7.

Johnson, D. J., Tress, T., Burkel, N., Taylor, C., & Cesario, J. (2019). Officer characteristics and racial disparities in fatal officer-involved shootings. Proceedings of the National Academy of Sciences, 116(32), 15877-15882. doi:10.1073/pnas.1903856116

Johnson, D. J., & Cesario, J. (2020). Reply to Knox and Mummolo and Schimmack and Carlsson: Controlling for crime and population rates. Proceedings of the National Academy of Sciences, 117(3), 1264-1265. doi:10.1073/pnas.1920184117

Miller, T. R., Lawrence, B. A., Carlson, N. N., Hendrie, D., Randall, S., Rockett, I. R. H., & Spicer, R. S. (2016). Perils of police action: a cautionary tale from US data sets. Injury Prevention. doi:10.1136/injuryprev-2016-042023

Of course the signatories credit themselves with the ultrasonic ability to discern “dog whistles” in arguments that displease them, an ability they treat as a license to throw standards of accurate citation out the window and accuse anyone of saying anything.

Back to the letter:

3.) Pinker (2011:107) provides another example of Dr. Pinker downplaying actual violence in a casual manner: “[I]n 1984, Bernhard Goetz, a mild-mannered engineer, became a folk hero for shooting four young muggers in a New York subway car.”—Bernhard Goetz shot four Black teenagers for saying “Give me five dollars.” (whether it was an attempted mugging is disputed). Goetz, Pinker’s mild-mannered engineer, described the situation after the first four shots as follows: “I immediately looked at the first two to make sure they were ‘taken care of,’ and then attempted to shoot Cabey again in the stomach, but the gun was empty.” 18 months prior, the same “mild-mannered engineer” had said “The only way we’re going to clean up this street is to get rid of the sp*cs and n*****s”, according to his neighbor. Once again, the language Dr. Pinker employs in calling this person “mild-mannered” illustrates his tendency to downplay very real violence.

After I’d read Accusation #1 and this one, and saw the way the letter was distorting what Pinker said, I decided to write Steve and say that I was going to write something about the letter. I began by asking for the whole Goetz passage from The Better Angels of Our Nature (which you can see at the letter’s link) so I could embed it here. Steve sent it, along with these words:

The Goetz description was, of course, just a way to convey the atmosphere of New York in the high-crime ’70s and ’80s for those who didn’t live through it — just as the atmosphere was later depicted in The Joker. To depict this as sympathetic to a vigilante shooter is one of the many post-truth ascriptions in the piece.

Here’s the entire passage from Better Angels:

The flood of violence from the 1960s through the 1980s reshaped American culture, the political scene, and everyday life. Mugger jokes became a staple of comedians, with mentions of Central Park getting an instant laugh as a well-known death trap. New Yorkers imprisoned themselves in their apartments with batteries of latches and deadbolts, including the popular “police lock,” a steel bar with one end anchored in the floor and the other propped up against the door. The section of downtown Boston not far from where I now live was called the Combat Zone because of its endemic muggings and stabbings. Urbanites quit other American cities in droves, leaving burned-out cores surrounded by rings of suburbs, exurbs, and gated communities. Books, movies and television series used intractable urban violence as their backdrop, including Little Murders, Taxi Driver, The Warriors, Escape from New York, Fort Apache the Bronx, Hill Street Blues, and Bonfire of the Vanities. Women enrolled in self-defense courses to learn how to walk with a defiant gait, to use their keys, pencils, and spike heels as weapons, and to execute karate chops or jujitsu throws to overpower an attacker, role-played by a volunteer in a Michelin-man-tire suit. Red-bereted Guardian Angels patrolled the parks and the mass transit system, and in 1984 Bernhard Goetz, a mild-mannered engineer, became a folk hero for shooting four young muggers in a New York subway car. A fear of crime helped elect decades of conservative politicians, including Richard Nixon in 1968 with his “Law and Order” platform (overshadowing the Vietnam War as a campaign issue); George H. W. Bush in 1988 with his insinuation that Michael Dukakis, as governor of Massachusetts, had approved a prison furlough program that had released a rapist; and many senators and congressmen who promised to “get tough on crime.” Though the popular reaction was overblown—far more people are killed every year in car accidents than in homicides, especially among those who don’t get into arguments with young men in bars—the sense that violent crime had multiplied was not a figment of their imaginations.

Now if you think that this passage excuses Bernie Goetz for the shooting, and does so by using “mild-mannered” as an adjective, I feel sorry for you. Pinker’s doing here what he said he was doing: depicting the anti-crime atmosphere present at that time in New York City. Only someone desperately looking for reasons to be offended would glom onto this as evidence of racism. In fact, in 1985 the Washington Post called Goetz “the unassuming, apparently mild-mannered passenger who struck with force”. You can find the same adjective in other places. Complaint dismissed.

4.)  In 2014, a student murdered six women at UC Santa Barbara after posting a video online that detailed his misogynistic reasons. Ignoring the perpetrator’s own hate speech, Dr. Pinker called the idea that such a murder could be part of a sexist pattern “statistically obtuse”, once again undermining those who stand up against violence while downplaying the actual murder of six women as well as systems of misogyny.

Here’s the “incriminating” tweet:

First, a correction: the 2014 Isla Vista killings by Elliot Rodger involved four male victims and two female victims, not six women. But that aside, Rodger did leave a misogynistic manifesto and a YouTube video clearly saying that he wanted to exact revenge on the women who had rejected him, and whom he hated for it.

I couldn’t find the link behind “statistically obtuse”, and asked Steve about it, and he didn’t remember it either. But his point was clearly not to say that this murder wasn’t motivated by hatred of women, but to question whether it was part of a general pattern of hatred of women. That’s a different issue. I’ll quote Steve again, with his permission:

I don’t remember what it initially pointed to, but I’ve often argued that reading social trends into rampage shootings and suicide terrorists is statistically obtuse and politically harmful. It’s obtuse because vastly more people are killed in day-to-day homicides, to say nothing of accidents; news watchers who think they are common are victims of the Availability Bias, mistaking saturation media coverage of horrific isolated events for major social trends. Every victim of a murder is an unspeakable tragedy, but in trying to reduce violence, we should focus foremost on the phenomena that harm people in the largest numbers.

It’s possible — I don’t remember — that I mentioned data showing that uxoricide (the killing of women by husbands and romantic partners) has been in decline.

Focusing on rampage shooters and suicide terrorists is harmful because it gives these embittered losers exactly what they are seeking—notoriety and political importance—thereby incentivizing more of them. Also, the overreactions to these two smaller kinds of violence can have dangerous side effects, from traumatizing schoolchildren with pointless active shooter drills, to the invasions of Afghanistan and Iraq.

The legal scholar Adam Lankford is the one who’s written most compellingly about the drive of rampage shooters to “make a difference,” if only posthumously — a good reason not to grant undue importance to their vile final acts.

Again, Pinker’s attempt to make a general point is parsed for wording (do they even know what “statistically obtuse” means?) to argue that Steve is a misogynist. Steve added, “The difference between understanding the world through media-driven events versus data-based trends is of course very much my thing.”

5.)  On June 3rd 2020, during historic Black Lives Matter protests in response to violent racist killings by police of George Floyd, Breonna Taylor, and many many others, Dr. Pinker chose to publicly co-opt the academic work of a Black social scientist to further his deflationary agenda. He misrepresents the work of that scholar, who himself mainly expressed the hope he felt that the protests might spark genuine change, in keeping with his belief in the ultimate goodness of humanity. A day after, the LSA commented on its public twitter account that it “stands with our Black community”. Please see the public post by linguist Dr. Maria Esipova for a more explicit discussion of this particular incident.

First, “co-opting” is a loaded word for the simple act of citation, both in Pinker’s books and in his tweet below, which cites work showing a decline in racist attitudes among white people over time. The data concern answers to survey questions (attitudes rather than actions like murders), but attitudes must surely count as manifestations of “racism”.

The incriminating tweet:

As for Bobo’s article in the Harvard Gazette, yes, there is cautious optimism, but there’s also despair.

Bobo:

On the one hand, I am greatly heartened by the level of mobilization and civil protests. That it has touched so many people and brought out so many tens of thousands of individuals to express their concern, their outrage, their condemnation of the police actions in this case and their demand for change and for justice, I find all that greatly encouraging. It is, at the same moment, very disappointing that some folks have taken this as an opportunity to try to bring chaos and violence to these occasions of otherwise high-minded civil protest. And I’m disappointed by those occasions where in law enforcement, individuals and agencies, have acted in ways that have provoked or antagonized otherwise peaceful protest actions.

It’s a complex and fraught moment that we’re in. And one of the most profoundly disappointing aspects of the current context is the lack of wise and sensible voices and leadership on the national stage to set the right tone, to heal the nation, and to reassure us all that we’re going to be on a path to a better, more just society.

. . .We had all thought, of course, that we made phenomenal strides. We inhabit an era in which there are certainly more rank-and-file minority police officers than ever before, more African American and minority and female police chiefs and leaders. But inhabiting a world where the poor and our deeply poor communities are still heavily disproportionately people of color, where we had a war on drugs that was racially biased in both its origins and its profoundly troubling execution over many years, that has bred a level of distrust and antagonism between police and black communities that should worry us all. There’s clearly an enormous amount of work to be done to undo those circumstances and to heal those wounds.

And if the following isn’t a statement by Bobo that justifies Pinker’s characterization above, I don’t know what is, for while indicting Trumpism for fomenting racism, Bobo does indeed say he is “guardedly optimistic”, even using the phrase “higher angels of our nature”. (My emphasis.)

The last three years have brought one moment of shock and awe after the other, as acts on a national and international stage from our leadership that one would have thought unimaginable play out each and every day under a blanket of security provided by a U.S. Senate that appears to have lost all sense of spine and justice and decency. I don’t know where this is. I think we’re in a deeply troubling moment. But I am going to remain guardedly optimistic that hopefully, in the not-too-distant future, the higher angels of our nature win out in what is a really frightening coalescence of circumstances.

Finally, Steve went into more detail about that tweet:

The intro to the tweet was context: introducing Larry Bobo and my connection to his research. It was followed by the transition “Here he ….”, so there was no implication that this interview was specifically about that research. Still, I’d argue that it’s hardly a coincidence that a social scientist who has documented racial progress in the past (including in a 2009 article entitled “A change has come: Race, politics, and the path to the Obama presidency”) would express guarded optimism that it can continue. After all, if 65 years of the civil rights movement had yielded no improvements in race relations, why should we bother continuing the fight? 

Now, one can legitimately ask (as Bobo does) whether responses to the General Social Survey are honest or are biased by social desirability. I address this in Enlightenment Now by looking for signs of implicit racism in Google search data (it’s declined), and more recently, have cited new data from my colleagues Tessa Charlesworth and Mahzarin Banaji (in Psychological Science last year) that implicit racial bias as measured by Banaji’s Implicit Association Test has declined as well. 

I’ve become used to incomprehension and outrage over data on signs of progress. People mentally auto-correct the claim that something bad has declined with the claim that it has disappeared. And they misinterpret evidence for progress as downplaying the importance of activism. But of course progress in the past had to have had a cause, and often it was the work of past activists that pushed the curves down — all the more reason to continue it today.

I also asked Steve for the references to Bobo’s research showing “the decline of overt racism in the U.S.” Here they are:

Bobo, L. D. 2001. Racial attitudes and relations at the close of the twentieth century. In N. J. Smelser, W. J. Wilson, & F. Mitchell, eds., America becoming: Racial trends and their consequences. Washington, D.C.: National Academies Press.

Bobo, L. D., & Dawson, M. C. 2009. A change has come: Race, politics, and the path to the Obama presidency. Du Bois Review, 6, 1–14.

Schuman, H., Steeh, C., & Bobo, L. D. 1997. Racial attitudes in America: Trends and interpretations. Cambridge, Mass.: Harvard University Press.

Finally, the last indictment:

6.) On June 14th 2020, Dr. Pinker uses the dogwhistle “urban crime/violence” in two public tweets (neither of his sources used the term). A dogwhistle is a deniable speech act “that sends one message to an outgroup while at the same time sending a second (often taboo, controversial, or inflammatory) message to an ingroup”, according to recent and notable semantic/pragmatic work by linguistic researchers Robert Henderson & Elin McCready [1,2,3]. “Urban”, as a dogwhistle, signals covert and, crucially, deniable support of views that essentialize Black people as lesser-than, and, often, as criminals. Its parallel “inner-city”, is in fact one of the prototypical examples used as an illustration of the phenomenon by Henderson & McCready in several of the linked works. 

The two tweets at issue:

Umm . . . both Patrick Sharkey at Princeton and Rod Brunson at Northeastern University are indeed experts in urban crime, and have taught and written extensively about it. If there’s a “dogwhistle” here, blame Brunson and Sharkey, not Pinker. But there is no dogwhistle here, save the use of that phrase by the Woke to provoke cries of racism from their peers.

In the end, we have an indictment based on five tweets and the phrase “mild-mannered” in one of Pinker’s books, all of which the letter distorts or mischaracterizes. That five tweets and a single word can lead to such a severe indictment (see below) is a sign of how far the termites have dined. I’m really steamed when a group of misguided zealots tries to damage someone’s career, and does so dishonestly.

The end of this pathetic letter:

We want to note here that we have no desire to judge Dr. Pinker’s actions in moral terms [JAC: oh for chrissake, of course they do!], or claim to know what his aims are. Nor do we seek to “cancel” Dr. Pinker, or to bar him from participating in the linguistics and LSA communities (though many of our signatories may well believe that doing so would be the right course of action). We do, however, believe that the examples introduced above establish that Dr. Pinker’s public actions constitute a pattern of downplaying the very real violence of systemic racism and sexism, and, moreover, a pattern that is not above deceitfulness, misrepresentation, or the employment of dogwhistles. In light of the fact that Dr. Pinker is read widely beyond the linguistics community, this behavior is particularly harmful, not merely for the perception of linguistics by the general public, but for movements against the systems of racism and sexism, and for linguists affected by these violent systems.

The people who are deceitful and who misrepresent the facts are the signatories of this screed, not Pinker.  File this letter in the circular file. I hope that the LSA doesn’t take it seriously, but if they do, the organization should be mocked and derided.

h/t: Many people sent me this letter; thanks to all.

Religion doesn’t improve society: more evidence

February 23, 2020 • 10:15 am

Religion is often touted as essential, a kind of secular glue that keeps society moral and empathic. Indeed, some say that even if there isn’t any evidence for a God, we should promote belief anyway because of its salutary side effects—the “spandrels” of belief.

This “belief in belief” trope, as Dennett calls it, is counteracted by lots of evidence, including the observation that there’s a negative correlation between the religiosity of a country and both its “happiness index” and various measures of well being. Because this is a correlation rather than a causation, we can’t say for sure that religion brings countries down while secularism brings happiness, but there’s certainly no support at all for the thesis that religion promotes well being.

That’s the point made in this new article in The Washington Post. It’s a response to Attorney General William Barr’s recent claim, in a speech at Notre Dame, that religion is essential to maintain morality and that its erosion causes dire consequences. Some of Barr’s quotes from that talk:

Modern secularists dismiss this idea of morality as other-worldly superstition imposed by a kill-joy clergy. In fact, Judeo-Christian moral standards are the ultimate utilitarian rules for human conduct.

They reflect the rules that are best for man, not in the by and by, but in the here and now. They are like God’s instruction manual for the best running of man and human society.

By the same token, violations of these moral laws have bad, real-world consequences for man and society. We may not pay the price immediately, but over time the harm is real.

Religion helps promote moral discipline within society. Because man is fallen, we don’t automatically conform ourselves to moral rules even when we know they are good for us.

But religion helps teach, train, and habituate people to want what is good. It does not do this primarily by formal laws – that is, through coercion. It does this through moral education and by informing society’s informal rules – its customs and traditions which reflect the wisdom and experience of the ages.

In other words, religion helps frame moral culture within society that instills and reinforces moral discipline.

And, added Barr, the rise of secularism is accompanied by a moral decrepitude afflicting America:

By any honest assessment, the consequences of this moral upheaval have been grim.

Virtually every measure of social pathology continues to gain ground.

In 1965, the illegitimacy rate was eight percent. In 1992, when I was last Attorney General, it was 25 percent. Today it is over 40 percent. In many of our large urban areas, it is around 70 percent.

Along with the wreckage of the family, we are seeing record levels of depression and mental illness, dispirited young people, soaring suicide rates, increasing numbers of angry and alienated young males, an increase in senseless violence, and a deadly drug epidemic.

As you all know, over 70,000 people die a year from drug overdoses. That is more casualties in a year than we experienced during the entire Vietnam War.

I will not dwell on all the bitter results of the new secular age. Suffice it to say that the campaign to destroy the traditional moral order has brought with it immense suffering, wreckage, and misery. And yet, the forces of secularism, ignoring these tragic results, press on with even greater militancy.

In response, columnist Max Boot cites some statistics that counteract Barr’s claims, and also gives the results of an international survey showing, as such surveys invariably do, that religious countries are not better off. Click on the screenshot to read the article:

 

Boot notes this:

Barr’s simplistic idea that the country is better off if it is more religious is based on faith, not evidence. My research associate Sherry Cho compiled statistics on the 10 countries with the highest percentage of religious people and the 10 countries with the lowest percentage based on a 2017 WIN/Gallup International survey of 68 countries. The least religious countries are either Asian nations where monotheism never took hold (China, Japan) or Western nations such as Australia, Sweden and Belgium, where secularism is much more advanced than in the United States. The most religious countries represent various faiths: There are predominantly Christian countries (the Philippines, Papua New Guinea, Armenia), Muslim Pakistan, Buddhist Thailand, Hindu India — and countries of mixed faiths (Nigeria, Ivory Coast, Ghana, Fiji).

Now, there are data from 68 countries in this survey, but Boot shows various indices of well being for only the 10 most religious and the 10 least religious. Still, the differences are striking:

However, I’ve also published data (analysis by readers) on a lot more countries showing that the more religious the country, the less happy are its inhabitants: there’s a strong and significant negative correlation between the UN’s “happiness index” and religiosity among dozens of countries. Further, you see the same negative correlation between the religiosity of countries and various indices of their well-being, like their rank on the “successful societies” scale. This is also true among states within the U.S.

Further, among many countries, the index of poverty—how poor a country is—is positively correlated with religiosity.
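For readers curious about how such cross-country correlations are computed, here’s a minimal sketch in Python. The numbers are invented placeholders, not the UN or WIN/Gallup figures; only the method (a product-moment or rank correlation across countries) is the point.

```python
from scipy import stats

# Hypothetical figures for a handful of countries (NOT the real survey values):
# religiosity = percent of respondents calling themselves religious
# happiness   = score on a 0-10 well-being scale
religiosity = [95, 88, 80, 62, 55, 40, 30, 22, 15, 10]
happiness   = [4.2, 4.8, 5.0, 5.6, 5.9, 6.4, 6.8, 7.1, 7.3, 7.5]

r, p_r = stats.pearsonr(religiosity, happiness)        # linear correlation
rho, p_rho = stats.spearmanr(religiosity, happiness)   # rank correlation, robust to outliers

print(f"Pearson r = {r:.2f} (p = {p_r:.3g}); Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
# A strongly negative r or rho would reproduce the pattern described in the text;
# as emphasized below, that by itself says nothing about causation.
```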

Again, these are correlations, and not necessarily causal relationships. It’s possible, for example, that other factors play a role. In fact, I think they do, but they surely don’t point to religion in any way as promoting either morality or well being.

My theory, which is not mine but that of many sociologists, is that religion (as Marx maintained) is the last resort of a population with poor well-being. Suffering and poverty-stricken people look to God for help and succor when their society can’t provide it. That could cause the correlation. In other words, religiosity doesn’t cause dysfunctional societies, but dysfunctional societies maintain religiosity because that’s the only hope people have. And of course maintaining such hope erodes the will of people to actually do something to improve their society. Further, as well being increases, religiosity diminishes, because the eternal press of secularism in the modern world no longer comes up against impediments.

As I wrote previously:

 Although I’m not a Marxist, Marx may have gotten it right when he said, “Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”

Author Boot ends his article this way:

Fundamentalists may be unhappy that religious observance has declined over the decades, but the data shows that, by most measurements, life has gotten much better for most people. There is little evidence that a decline in religiosity leads to a decline in society — or that high levels of religiosity strengthen society. (Remember, Rome fell after it converted to Christianity.) If anything, the evidence suggests that too much religion is bad for a country.

Well, I’d put it another way: if a country is not well off, it tends to retain religion. But never mind: the conclusion I share with Boot and many sociologists—that there’s no evidence that high religiosity improves society—remains sound. I can’t imagine a survey of well being and religiosity showing a positive relationship, and I know of no such results.

h/t: Randy

Gender differences in toy use: boys play with boy toys, girls with girl toys

January 30, 2020 • 10:30 am

Every parent I know with whom I’ve discussed the issue of sex differences has told me that, if they have children of both sexes, they notice behavioral differences between boys and girls quite early, and these include preferences for which toys they play with. Usually, but not inevitably, boys play with “boy toys” (trucks, trains, guns, soldiers) and girls prefer “girl toys” (dolls, kitchen stuff, tea sets, art stuff). Even when girls are given trucks and boys given dolls, they gravitate to the stereotyped toys. I’m using the classification employed by the authors whose work is summarized in the meta-analysis I’m discussing today: the paper by Davis and Hines.

If you’re a hard-core blank-slater, you’ll attribute the toy-use difference to socialization: parents and society somehow influence children about which toys to prefer. If you’re a genetic determinist, you’ll attribute the behavior largely to innate preferences—the result of selection on our ancestors. And, of course, both factors could operate.

But there’s some evidence for a genetic component to this preference: the fact that rhesus monkeys, who presumably don’t get socialized by their parents, show a similar difference in toy preference, even when tested as adults. The monkey paper is shown below (click for free access). A BBC site implies that a related study with similar results was also done in Barbary macaques, a different species, but I can’t find any resulting publication. (UPDATE: The same result has been seen in vervet monkeys, as a reader notes in the comments.)

First, a picture:

And the paper:

And the abstract from the rhesus macaque study (my emphasis):

Socialization processes, parents, or peers encouraging play with gender specific toys are thought to be the primary force shaping sex differences in toy preference. A contrast in view is that toy preferences reflect biologically determined preferences for specific activities facilitated by specific toys. Sex differences in juvenile activities, such as rough and tumble play, peer preferences, and infant interest, share similarities in humans and monkeys. Thus if activity preferences shape toy preferences, male and female monkeys may show toy preferences similar to those seen in boys and girls. We compared the interactions of 34 rhesus monkeys, living within a 135 monkey troop, with human wheeled toys and plush toys. Male monkeys, like boys, showed consistent and strong preferences for wheeled toys, while female monkeys, like girls, showed greater variability in preferences. Thus, the magnitude of preference for wheeled over plush toys differed significantly between males and females. The similarities to human findings demonstrate that such preferences can develop without explicit gendered socialization. We offer the hypothesis that toy preferences reflect hormonally influenced behavioral and cognitive biases which are sculpted by social processes into the sex differences seen in monkeys and humans.

But if you’re dealing with humans, where socialization is also a possibility, the first thing to ask is this: Do boys and girls really differ in their toy preferences? For if they don’t, there’s no need to invoke either the socialization or the genetic hypothesis. Previous research has generally shown a difference in the expected direction, but it’s not observed 100% of the time, and some studies show no difference between boys and girls.

The purpose of the 2020 study I’m discussing today, shown below, was to perform a meta-analysis of many earlier studies of toy preference to see whether there are statistically significant differences between the sexes in the overall data. (Click on the screenshot to see the paper, get the pdf here; the reference is at the bottom.) I’ll try to be brief, which is hard for such a long paper!

Methods: Meta-analysis is a statistical way to combine the results of different studies, even if they use different methods. What the analysis looks for is an overall pattern among different studies: in this case, toy preferences between boys and girls. The paper of Davis and Hines, from the Gender Development Research Centre at the University of Cambridge, measures the sizes and direction of preference differences between the sexes and conducts overall tests of significance using the statistical package R.
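To give a flavor of what a meta-analysis actually computes, here’s a minimal sketch of inverse-variance pooling of standardized mean differences (Cohen’s d). It is not the authors’ exact model (they used R and more elaborate methods), and the effect sizes below are made up for illustration.

```python
import numpy as np

# Hypothetical per-study effect sizes (Cohen's d for, say, boys' preference for
# boy-related over girl-related toys) and their variances -- NOT values from Davis & Hines.
d = np.array([0.9, 1.2, 0.7, 1.5, 1.0])
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

w = 1.0 / v                              # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)     # weighted average effect size
se_pooled = np.sqrt(1.0 / np.sum(w))     # standard error of the pooled estimate
z = d_pooled / se_pooled                 # z-test of the pooled effect against zero
ci_low, ci_high = d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled

print(f"Pooled d = {d_pooled:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f}), z = {z:.1f}")
```

A random-effects model would add a between-study variance term to each weight, which matters when studies differ in methods, as they do here.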

They tested not only if there was simply a significant difference between the sexes (i.e., is there a difference between boys and girls in toy preference?), but also whether there was a difference in preference when the two sexes were tested separately. For example, there could be a significant difference between boys and girls in toy preference, but it could be due entirely to one sex, say boys, preferring boy toys, with girls showing no preference. To test the within-sex preference, you need to look at boys and girls separately.

The authors also analyzed the data using the two “classic” examples of sex-specific toys: dolls versus toy vehicles.

To see if there was a pattern over time—you’d expect a decrease over the years if socialization had decreased—they looked at the relationship between the year a study was published and the size of any sex-specific preferences. Since schools and parents are now making a big effort to socialize kids against playing with sex-specific toys, one might expect the preference to decrease over the five decades of the work included in the meta-analysis.

Finally, the authors tested whether the degree of preference changed with the child’s age. If preference is due to socialization, one might expect an increase with age, but one might also expect the same thing if hardwired differences simply take time to show up.
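The moderator analyses (publication year, child’s age) can be sketched the same way: a weighted regression of each study’s effect size on the moderator. Again, the numbers below are invented, and this is only an illustration of the approach, not the authors’ computation.

```python
import numpy as np

# Hypothetical effect sizes, their variances, and publication years -- illustrative only.
d = np.array([0.9, 1.2, 0.7, 1.5, 1.0, 1.1])
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06, 0.07])
year = np.array([1975.0, 1983.0, 1992.0, 2001.0, 2010.0, 2016.0])

W = np.diag(1.0 / v)                                          # weight each study by its precision
X = np.column_stack([np.ones(len(d)), year - year.mean()])    # intercept + centered year

# Weighted least squares: solve (X' W X) beta = X' W d
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
print(f"Estimated change in effect size per year: {beta[1]:.4f}")
# A slope near zero (as Davis and Hines report) means no temporal trend in the sex difference.
```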

The authors plowed through 3,508 studies that initially looked relevant, eliminating the vast majority because they didn’t satisfy the authors’ criteria. This pruning wound up with 75 toy-preference studies included in the meta-analysis.

The age of children among studies ranged from 3 months to 11 years, and a variety of different tests were done, including “free play” (children were given a group of toys and allowed to choose ones to play with “in an unstructured way”), “visual preference” (children were shown images of toys and the amount of time they spent looking at a toy was a measure of their interest in it), “forced choice” (usually a child is forced to choose between two pictures of toys, one a “girl’s toy” and one a “boy’s toy”), and “naturalistic choice” (what kind of toys children own; the authors did not use studies in which children’s collections of toys reflected their parents’ buying habits rather than what children asked for).

Toys were classified by the experimenters, and the authors avoided studies in which classification was done post facto (that is, any toys preferred by boys were subsequently classified as “boy toys”, and the same for girls).

Here’s the graph they give of how toys were classified among the various experiments. The bars represent the frequency in the 75 studies in which a given kind of toy was classified as a boy’s toy (black bars), a girl’s toy (light gray bars) or a “neutral” toy (medium-gray bars):

(From paper): Fig. 2 Toys used as girl-related, boy-related, and neutral toys as listed in method sections of studies included in the meta-analysis. Studies could contribute more than one toy to the figure. These toys were mentioned in method sections of studies, but data were not typically reported for each individual toy. Most studies reported statistics for groups of toys, but not for individual toys

 

The results were clear and their significance high; you can read the paper to see more:

1.) There were large and highly significant average differences between the two sexes in preference for both boy-related and girl-related toys. This was in the “expected” direction. As I note above, this doesn’t tell you whether girls prefer girl-toys over boy toys or boys prefer boy toys over girl toys; it just says that there’s an overall difference between the sexes in their preference for one class of toy versus the other. BUT. . . .

2.) Within boys, boys preferred boy toys more than girl toys. And within girls, girls preferred girl toys more than boy toys. The overall sex difference, then, is the result of each sex preferring in general the toys considered “appropriate” for that sex.

3.) #1 and #2 also hold for the “plush toys versus vehicles” test: there was a highly significant difference between boys and girls in toy preference, and that difference reflected girls’ preference for plush toys over vehicles and boys’ preference for vehicles over plush toys.

4.) “No choice” tests showed a stronger degree of sex-specific preference than did “choice” tests like free-play experiments. But the three other methods of assessing preference also showed statistically significant sex-specific differences.

5.) In three out of four analyses, the degree of preference increased with the age of the child. The only exception was the size of girls’ sex-specific preference for girl-related over boy-related toys, which showed no significant change.

Finally, and the one result that bears on the “genes versus socialization” hypotheses:

6.) The year of publication showed no relationship with the gender difference. Boys preferred boy toys over girl toys, and girls girl toys over boy toys, to the same extent over the 5 decades of studies. This was true of all four ways of measuring the sex difference; in no case did the p value for the temporal relationship drop below 0.103 (it would have to be below 0.05 to be considered significant). This runs counter to what would be expected if “socialization” had decreased over the last 50 years, for children’s preferences should also have weakened if those preferences were due to society enforcing standards and stereotypes on children’s toy affinity.

What does it all mean?  On the face of it, all this study shows is that there are consistent differences in toy preferences between boys and girls, with each sex preferring the sex-specific toys labeled by the previous experimenters. Methodology does influence the degree of preference, but there is a strong and consistent preference in the expected direction.

That in itself says nothing bearing on whether toy preference is innate, the result of socialization, or a mixture of both. But two facts imply that a reasonable amount of toy preference is innate. The first is the results of the macaque studies, showing similar preference for vehicles over plush toys in one (or maybe two) studies. Since macaques don’t adhere to a human-like patriarchy, nor do they ever see toys before the tests are done, this implies an innate sex-specific difference in preference.

The same holds for the lack of change in the degree of preference with time in the human studies. One might expect that preference would have decreased over the years given the attempts of parents (at least in much of the West) to avoid socializing their children into preferring “appropriate” toys. But that didn’t happen. However, I’m not sure whether anyone’s actually measured that decrease in socialization.

Finally, the fact that preference seems to be present at very young ages, when socialization is seemingly impossible, may be evidence for an innate component to preference. However, the preference increases with age, and one might say that this trend reflects socialization. And blank slaters might claim that covert or unknowing socialization is going on right from birth.

In the end, I find the evidence from the macaques the most convincing, but I have a feeling that human children, whose preferences parallel those of macaques, are also showing preferences based in part on evolution. Studies in other primates would be useful (do our closer relatives like gorillas and chimps show such preferences?) as well as more studies of very young children, perhaps using children brought up in homes where socialization is deliberately avoided.

__________________________

Davis, J. T. M. and M. Hines. 2020. How Large Are Gender Differences in Toy Preferences? A Systematic Review and Meta-Analysis of Toy Preference Research. Archives of Sexual Behavior. Online, published 27 January, 2020

 

Thoughts and prayers: what are they worth?

September 18, 2019 • 9:15 am

Everyone knows about the “thoughts and prayers” sent out after tragedies as a quotidian feature of the daily news. And all of us nonbelievers disparage not only prayers (shown in a Templeton-funded study to have no effect on healing after surgery), but also thoughts—useless unless conveyed directly to the afflicted person instead of dissipated in the ether.

But an anthropologist and an economist wanted to know more: what is the value of thoughts and prayers (t&p)? That is, how much would somebody in trouble actually pay to receive a thought, a prayer, or both? And would it matter if that afflicted person was religious or just a nonbeliever? Or whether the person offering t&p was religious? So the study below was published in the Proceedings of the National Academy of Sciences (click on screenshot below; pdf here; reference at bottom).

I suppose that, to an economist, the psychic value of getting thoughts or prayers (t&p) from strangers can be measured in dollars, and I’ll leave that for others to discuss. At any rate, the results are more or less what you think: Christians value t&p, nonbelievers don’t.

What Thunström and Noy did was to recruit 436 residents of North Carolina, the state hit hardest last year by Hurricane Florence. Those who were not affected by the hurricane (about 70% of the sample) had experienced another “hardship”. They were then given a standard sum of money (not specified) for participating in a Qualtrics survey, and an additional $5 to be used in the t&p experiment. Among the 436 participants, some were self-identified as Christian, while another group, either denying or unsure of God’s existence, were deemed “atheist/agnostic”. (The numbers in each group weren’t specified.)

The experiment also included people offering thoughts and prayers: people who were recruited to actually give them to those who were afflicted. These people included Christians, the nonreligious, and one priest who was “recruited from the first author’s local community.” Each offerer received a note detailing the travails of an afflicted person, and instructing them to offer either a thought or a prayer (it’s not clear whether the names of the afflicted were included in the note, but of course God would know).

To value the thoughts and prayers, the afflicted were offered two alternatives, between which a computer decided: receiving an intercessory gesture, or not receiving it. Payments could be positive (you’d actually give up money to receive the gesture) or negative (you’d pay not to receive it). The amount varied, says the paper, between $0 and $5, the amount given for participating in the study, and subjects stated this “willingness to pay” (WTP) before the computer made its choice.
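The paper doesn’t spell out the mechanism, but elicitations like this are typically run as a random-price (“BDM-style”) draw: the subject states a willingness to pay, the computer draws a price at random, and the transaction happens only if the stated WTP beats the drawn price, which makes honest reporting the best strategy. The sketch below is therefore my assumption about how a round might work, not a description of Thunström and Noy’s actual protocol; only the $0 to $5 range comes from the description above.

```python
import random

def elicit_gesture(stated_wtp, endowment=5.0):
    """One hypothetical BDM-style round for a subject who wants the gesture.

    stated_wtp: dollars the subject says they'd give up to receive the gesture.
    Returns (gesture_sent, money_kept).
    """
    price = random.uniform(0.0, endowment)   # the computer's randomly drawn price
    if stated_wtp >= price:
        return True, endowment - price       # gesture sent; subject pays the drawn price
    return False, endowment                  # no gesture; subject keeps the endowment

# Example: a participant who values a prayer at $4.36 (one of the means in the figure below)
random.seed(1)
print(elicit_gesture(4.36))
```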

The experiment isn’t described very well, and there’s no supplementary information, but I’ve taken some other details from second-hand reports of the studies, with the reporters apparently having talked to the authors. At any rate, here are the results, expressed as how much money people would give up for t&p, for both Christians (dark bars) and atheists/agnostics (light bars). Since atheists/agnostics wouldn’t be praying, the only gesture on offer from that group was “thoughts”.

(from paper) The value of thoughts and prayers from different senders (95% confidence intervals displayed; n = 436).

Christians would always give up an amount of money significantly greater than zero for both thoughts and prayers, except when the thinker was a nonreligious stranger, to whom they’d pay $1.52 not to receive thoughts (dark bar below zero). Since the authors are social scientists, they use a significance level of 0.1 (“hard scientists” use at most 0.05); that $1.52 value differs significantly from zero under the laxer criterion but not under the one a scientist would use.
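To see what the 0.1-versus-0.05 quibble amounts to, here’s a tiny illustration with made-up WTP values (not the study’s raw data): the same estimate can clear the laxer threshold while failing the stricter one.

```python
import numpy as np
from scipy import stats

# Hypothetical willingness-to-pay values in dollars (negative = pays to avoid the gesture).
# These are invented numbers chosen for illustration, not data from Thunström and Noy.
wtp = np.array([1.5, -4.5, 0.5, -3.5, 1.0, -4.0, 0.0, -3.0, -0.5, -2.5])

t, p = stats.ttest_1samp(wtp, popmean=0.0)   # does the mean WTP differ from zero?
print(f"mean = {wtp.mean():.2f}, t = {t:.2f}, p = {p:.3f}")   # p lands between 0.05 and 0.10 here
print("significant at alpha = 0.10:", p < 0.10)
print("significant at alpha = 0.05:", p < 0.05)
```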

Christians would of course offer the most money ($7.17) for prayers from a priest, less money ($4.36) for prayers from a Christian stranger, and still less ($3.27) for thoughts from a Christian stranger, though this doesn’t appear to be significantly different from the price for prayers from the Christian stranger (the statistical comparison isn’t given).

In contrast, atheists/agnostics don’t give a rat’s patootie about t&p. In fact, they’d pay money to have priests or Christians not offer them thoughts and prayers, as you can see from the three light bars to the left, which are all below zero. What surprised me is that the nonbelievers would pay more to avoid prayers from a Christian stranger than from a priest ($3.54 versus $1.66 respectively), while they’d pay an intermediate amount ($2.02) to avoid getting thoughts from a religious stranger (these are all significantly different from zero). Finally, as you’d expect, nonbelievers don’t give a fig for thoughts from other nonbelievers, as we’re not superstitious. These nonbelievers would pay 33¢ to get thoughts from nonbelieving strangers.

There’s another part of the experiment in which participants were asked to give their level of agreement or disagreement to the statement, “I may sometimes be more helped by others’ prayer for me than their material help.” This “expected benefits index” (EBI) explains a great deal of the variation in the amount of money people were willing to pay for prayers and thoughts (or not pay for prayers and thoughts).

What does this all mean? To me, nothing more than the obvious: religious people value thoughts and prayers more than do nonreligious people. Moreover, religious people do not value thoughts from nonbelievers, and nonbelievers give negative value to thoughts or prayers from Christians, and no value to thoughts from fellow nonbelievers. That’s not surprising.

What is a bit surprising is that Christians would sacrifice money to get thoughts and prayers, and would pay just about as much for thoughts from other Christians as for prayers from other Christians. (Prayers from priests, however, were most valuable, showing that the Christians really do believe that priests have a power to help them more than do everyday Christians.) I was also surprised that nonbelievers would pay money to avoid thoughts and prayers from Christians. Since we think these are ineffectual, why pay to avoid them?

In general, I found the study weak: the methods aren’t fully described, and the level of statistical significance used (0.1) is too lax. All that it really confirms is that Christians think that thoughts and prayers really work; i.e., that they believe in the supernatural. But we knew that already. I am in fact surprised that this study was published in PNAS, which is regarded as a pretty good scientific journal.

_______________________

Thunström, L. and S. Noy. 2019. The value of thoughts and prayers. Proceedings of the National Academy of Sciences.