What, exactly, is critical race theory?

September 30, 2021 • 12:45 pm

All of us bandy about the term “critical race theory”, or use its initials, CRT. But how many of us really know what it is? And IS there really a widely accepted canon of thought called CRT? If you were to ask me, I’d say CRT is the view that all of life is a fight for power and hegemony among socially constructed “races” that have no biological reality; that all politics is to be viewed through the lens of race; that the “oppressors” are, by and large, biased against minorities and fight endlessly to keep them powerless, many without even knowing their own bias; and that different kinds of minority status can be combined into an “intersectionality”, so that someone can be oppressed on several axes at once (for example, a Hispanic lesbian).

But not everybody agrees with that, and in fact there are widely different versions of CRT depending on the exponent (Ibram Kendi is perhaps the most extreme in his pronouncements), and also on the country. In the article below at Counterweight, Helen Pluckrose, co-author with James Lindsay of the good book Cynical Theories, tries to parse a meaning of CRT from all the diverse construals.

It turns out that because there are so many versions of CRT, perhaps (in my view) it’s best to stop using the term at all.

Click on the screenshot to read:

There’s Materialist CRT, Postmodernist CRT, the British Educational Association’s CRT, Critical Social Justice Anti-Racism, and even a version for higher education confected by Payne Hiraldo (a professor at the University of Vermont).  I won’t give them all here, and of course there’s considerable overlap. Here’s what Helen says are the tenets from the book Critical Race Theory: An Introduction, with her interpolations.  Her words are indented, and the tenets are doubly indented and put in bold:

Critical Race Theory: An Introduction describes it as a departure from liberal Civil Rights approaches:

Unlike traditional civil rights discourse, which stresses incrementalism and step-by-step progress, critical race theory questions the very foundations of the liberal order, including equality theory, legal reasoning, Enlightenment rationalism, and neutral principles of constitutional law.

and sets out four key tenets:

First, racism is ordinary, not aberrational—“normal science,” the usual way society does business, the common, everyday experience of most people of color in this country.

This is a claim that racism is everywhere. All the time. It’s just the water we swim in. It’s also claimed that most people of colour agree with this.  In reality, people of colour differ on this although a greater percentage of black people believe it to be true than white people.

Second, most would agree that our system of white-over-color ascendancy serves important purposes, both psychic and material, for the dominant group.

This means that this system, which has just been asserted to exist everywhere, is valued by white people both psychologically and in practical terms. Many white people would disagree that they regard racism positively.

A third theme of critical race theory, the “social construction” thesis, holds that race and races are products of social thought and relations. Not objective, inherent, or fixed, they correspond to no biological or genetic reality; rather, races are categories that society invents, manipulates, or retires when convenient.

This argues that races are social constructs rather than biological realities which is true – “populations” are the biological categories and don’t map neatly onto how we understand race – and that society has categorised and recategorised races according to custom, which is also true.  [JAC: I’d take issue with the claim that there is no biological “reality” at all to populations, races, or whatever you call ethnic groups. The classical definition of “race” is incorrect, but the view that races have no biological differences and are thus completely socially constructed, is also wrong.]

A final element concerns the notion of a unique voice of color. Coexisting in somewhat uneasy tension with antiessentialism, the voice-of-color thesis holds that because of their different histories and experiences with oppression, black, American Indian, Asian, and Latino writers and thinkers may be able to communicate to their white counterparts matters that the whites are unlikely to know. Minority status, in other words, brings with it a presumed competence to speak about race and racism.

There is much evidence that there is no unique voice of colour, and although there is good reason to think that people who have experienced racism may well have more perspective on it, they tend to have different perspectives. CRTs are more likely to regard those who agree with them as authoritative than those who disagree – i.e., “Yes” to Derrick Bell and Kimberlé Crenshaw but “No” to Thomas Sowell or Shelby Steele.

After you work your way through Helen’s long piece, you realize that you simply cannot use “Critical Race Theory” unless you specify exactly what version you’re talking about. In fact, I’d say it’s best to ditch the phrase altogether and just discuss the claims.  I believe that’s Helen’s conclusion as well:

If it helps to call the current anti-racist theories “contemporary critical theories of race” rather than “Critical Race Theory”, do so, but for goodness’ sake, let’s stop the endless quibbling about terminology and talk about the ideas that have deeply infiltrated universities, employment, education, mainstream media, social media and general culture.

This is vitally important for two reasons.  Firstly, we need to be able to address racism in society ethically and effectively. Secondly and relatedly, individuals need to be allowed to have their own views about how racism works and their own ethical frameworks for opposing it. They need to be able to discuss and compare them. This will help with achieving the first goal.

When it comes to discussing contemporary critical theories of race, we need to be able to talk about what the current theories actually say and advocate for and whether they are ethical and effective. Many people from a wide range of political, cultural, racial, religious and philosophical backgrounds would say “No” they are not, and they should be able to make their case for alternative approaches.

It is also vitally important that we are able to talk about how much influence these theories already have and how much they should have on society in general and on government, employment, mainstream media, social media and education in particular, and whether this influence is largely positive or negative. From my time listening to clients of Counterweight, I would respond, “Way too much” and “Largely negative” to these questions.

She ends with what are perhaps the most important questions, and can’t resist injecting her own opinion. Others may differ, but she says she has an open mind:

Most importantly, we need to be able to measure and discuss what effects these theories have on reducing racism, increasing social cohesion and furthering the goals of social justice. Are they achieving that or are they increasing racial tensions, decreasing social cohesion and being the driving force for many injustices in society while creating a culture of fear, pigeonholing people of racial minority into political stereotypes, and silencing the voices of those who dissent? I strongly believe, based on the reports coming into Counterweight, that it is the latter. However, I am willing to be persuaded to think differently, so let’s talk.

In the end, the theory is important only if we can get data supporting or contradicting it.

What makes a good life?

September 7, 2021 • 1:00 pm

I usually avoid TED talks because they smack too much of motivational speech: like the advice of Matt Foley, who lives in a van down by the river and eats government cheese. But this one popped up when I was watching YouTube, and, listening to the introduction, I was drawn into it.

The speaker, Robert Waldinger, is director of the Harvard Study of Adult Development, a project that’s been going on for 75 years.  The researchers studied 724 men over that period, asking them how they were doing and what they were doing every two years until the men died. They also did personal interviews, got medical records, and even drew the subjects’ blood.

There were two groups in the original study, which has been ongoing since the 1930s: Harvard sophomores and the “control” group of boys who came from troubled and disadvantaged families in poor parts of Boston.

60 of the original 724 men are still alive, and now their children are being studied as well: 2000 more. Women have been added at last.  This represents an unparalleled study of what factors make for a happy and healthy life.

The answer, which may seem anodyne to you, nevertheless contradicts the Millennial answer Waldinger describes, which is the view that having fame and money makes for a good life. (A “good life” is one in which the person living it is both healthy and happy and lives a long time.) I’ll let you listen to the video for yourself.

I think this 13-minute talk is worth hearing, both for your own well being and, perhaps, to help other people. But maybe you’ll see it as obvious and trite.

By the way, Waldinger is a psychiatrist and (disappointingly to me) a psychoanalyst and is also a Zen priest.

A short primer on Critical Race Theory

July 22, 2021 • 9:15 am

Is the phrase “short primer” redundant? If so, forgive me. At any rate, there’s a pretty evenhanded treatment of CRT, covering its main tenets and its implications, in Forbes. You can see it by clicking on the screenshot below:

The author’s bona fides: Redstone is “the founder of Diverse Perspectives Consulting and a professor of sociology at the University of Illinois at Urbana-Champaign. [She is] the co-author of Unassailable Ideas: How Unwritten Rules and Social Media Shape Discourse in American Higher Education and a faculty fellow at Heterodox Academy.”

Her main point is that Critical Race Theory “forms a closed system”, a “perspective that leaves no space for anyone, no matter how well-intentioned, to see the world differently.” In other words, it brooks neither dissent nor discussion.

Her concerns are these:

CRT’s critics are often portrayed as wanting to “whitewash” history and deny the reality of slavery. If the problem were that simple, the criticisms would indeed be worthy of the dismissal they often receive. Yet, there are serious concerns about CRT that are rarely aired and that have nothing to do with these points. As a result, confusion and misinformation abound and tension continues to mount.

She lays out what she sees as the four main tenets of the theory as it’s presented in schools or to the public. Note that these differ from conceptions of CRT offered by scholars in academia. Quotes from the article are indented; any comments of mine are flush left.

1. Colorblind racism—Deemphasizing the role of race and racism, including to focus on concepts of merit, is itself a manifestation of racism.

2. Interest convergence—Members of the dominant group will only support equality when it’s in their best interest to do so.

3. Race and racism are always tied together. Race is a construct meant to preserve white dominance over people of color, while making it seem like life is about meritocracy.

4. Inattention to systemic racism—An unwillingness to recognize the full force of systemic racism as determining disparities between groups is a denial of the reality of racism today (and evidence of ignorance at best and racism at worst).

I’d add to that the following three points, which are mine. (Actually, points 5 and 6 come from Ibram Kendi and point 7 from Robin DiAngelo and many others):

5. (Really a supplement to point 4):  Inequalities in the representation of groups, for example disproportionately low numbers of people of color in STEM fields, are prima facie evidence of current and ongoing racism in those fields and not a historical residuum of racism in the past.

6. The only way to rectify this kind of systemic racism resulting from ongoing discrimination is to discriminate in favor of minorities (i.e., affirmative action, dismantling meritocracies, etc.). As Kendi said, “The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination.”

7.  Every white person, whether they know it or not, is a racist, embodying, even unconsciously, the tenets of white supremacy instantiated in point 3 above.

According to Redstone, the downside of promulgating CRT is that all criticism of the theory is immediately dismissed as racism, so that there is no room for “principled concerns some may have about seeing every aspect of society through the lens of race and power.” Further, it may be hard to restructure society, she avers, when all social problems are fobbed off on either racism or ignorance.

Finally, in this short piece she gives her recommendations for people on all sides of the political spectrum, as well as for schools and the mainstream media. I quote:

To conservatives: Stop trying to enact legislative bans on CRT. Such bans are censorious, probably unconstitutional, and, simply put, will do nothing to solve the underlying problem.

To progressives: Stop talking about CRT and, more importantly, its related ideas as though objections to it and concerns about it are all driven by a denial of systemic racism or an unwillingness to acknowledge the reality of slavery. As I’ve pointed out here, this is to grossly miss the point. The importance of this point stands even if the loudest critics are not raising the concerns I’ve outlined here.

To the mainstream media: See advice for progressives, above.

To schools and workplaces: Critical Race Theory is a social science theory—a tool to understand the world around us. As a theory, its related ideas about race, identity, power, and fairness constitute one possible way to see the world. As with any social science theory, but particularly one this controversial, its ideas should be placed in context. Placing the ideas in context requires presenting contrasting viewpoints—for instance, perspectives that do not automatically assert that racialized explanations and solutions should be the primary lens for viewing the world. Importantly, these contrasting viewpoints are to be presented on moral footing that’s equal to CRT’s.

I can’t say I disagree with any of these prescriptions. The presentation of CRT as a given that brooks no dissent is particularly troubling to me as a scientist, because, after all, it is a “theory” and can’t be taken as absolute truth.  My points #5 and #7, for example, are dubious and, I think, palpably false assertions. Yet if you raise objections, you’re not only typed as a racist yourself, but demonized. We have to beware of a theory that is presented as prima facie truth, for, like CRT, it constitutes a system that, because it cannot be shown to be wrong, cannot be assumed to be right.

This is not to say, of course, that racism doesn’t exist, or hasn’t shaped our country profoundly. It does and it has. But it’s not the only problem we face (there’s the matter of class inequality, for instance), and even fixing racial inequality is far more difficult than some adherents to CRT suggest. (Effacing history, for example, by removing statues or renaming buildings, while such efforts may be warranted, will accomplish almost nothing.) And CRT won’t touch the issue of anti-Semitism.

Steve Pinker talks with Helen Pluckrose for Counterweight

July 11, 2021 • 8:45 am

You all know Steve Pinker, and surely nearly all of you have heard of Helen Pluckrose, who not only participated in the “Grievance Studies Affair“, but coauthored with James Lindsay the book Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity and has now founded the humanist but anti-woke organization Counterweight.

Here Helen has an eight-minute interview with Steve Pinker. (Note that there’s a photo of Cape Cod in the background, where Steve and Rebecca repair to their second home.) It’s mostly about wokeness and how to combat it.

 

h/t: Paul

What is a social construct?

June 18, 2021 • 9:15 am

The literature of identity politics and social justice, with or without capitals, is full of assertions that this or that system, conception, or object is a “social construct.”  This is nearly always claimed without defining “social construct,” though most of us have a vague idea that the term means something that lacks an objective reality independent of human social agreement.  And it’s usually used dismissively—not to deny something like gender identity or racism—but to deny that they exist independently of human thought. That is, the claim that “race is a social construct” is taken to mean that “there is no objective reality to the concept of race, which was simply created by humans” (the usual reason is to give groups power over other groups), but nevertheless race is seen and treated as real in the same way that the idea of a monarchy (see below) is treated as real.

I decided to look up various definitions of “social construct” to see if my notion was true. It turns out that, by and large, it is, and most definitions are pretty similar. Below, for example, are four definitions with links (given definitions and glosses are indented).

Oxford English Dictionary:    A concept or perception of something based on the collective views developed and maintained within a society or social group; a social phenomenon or convention originating within and cultivated by society or a particular social group, as opposed to existing inherently or naturally.

Merriam-Webster:  an idea that has been created and accepted by the people in a society. Class distinctions are a social construct.

Macmillan Dictionary: a concept or belief that is based on the collective views of a society rather than existing naturally

yourdictionary.com:   Social constructs develop within a society or group. They don’t represent objective reality but instead are meaningful only because people within the society or group accept that they have meaning. Simply put, social constructs do not have inherent meaning. The only meaning they have is the meaning given to them by people.

For example, the idea that pink is for girls and blue is for boys is an example of a social construct related to gender and the color of items. The collective perception that a particular color can be associated with a certain gender is not an objective representation of truth or fact. Instead, it is a social convention that came to have meaning within the context of society.

So I was correct: “social constructs” are ideas or objects or notions that do not exist independently of human decision making and social agreement. They are not “real” in the sense that without social agreement about what they mean, they would have no objective reality. Or so it is claimed.

Now the term “reality”, of course, is slippery. Certainly money is real, in terms of paper currency, but the agreement that it can be used to purchase goods and has ascribed value is a social construct. Even Martian sociologists could observe this, but the value of a dollar bill would have to be ascertained by observing how it’s used. And the British monarchy is real, though it wouldn’t exist without social agreement.  I won’t go on in this vein, as it leads into psychological hinterlands where I would be criticized by some no matter what I said. I simply present the definitions I’ve seen above.

Now, here is a list of examples of “social constructs” along with my rough take on whether I think they really do adhere to the definitions above. You can find more examples here.

gender.  Gender and gender roles are multifarious, and more are devised each day. The behaviors associated with these (e.g., “genderfluid”) do describe real behaviors, but “genderfluid” as a given category seems to me a social construct.

gender roles. Same as above, though the behaviors may stem from biology. I would have trouble, for example, with the idea that being bisexual is “just” a social construct, for it does describe people who are attracted to members of both sexes. And there may be a biological basis for this.

sex. As I’ve argued at length, sex is a biological and objective reality, in nearly all cases of animals a binary category with a strict basis resting on gamete size. So while gender may be a social construct, sex, as in “biological sex”, is not.

sex roles. This is a mixture of both an objective reality and a social construct. That is, the view that men are generally attracted to women and vice versa, a feature that has an evolutionary basis, is not something agreed on by society, despite numerous exceptions like homosexuality. And some behavioral differences between the sexes, like aggression and risk-taking, are, I think, not social constructs but partly encoded in our DNA by natural selection. Other “roles”, like guys should like blue and girls pink, are clearly social constructs.

religion.  Despite the claim that people have an inborn desire to apprehend and worship divine deities or concepts, I see religion as a social construct. It may have a biological origin, as some claim (ascribing mysterious events to specific causes), but religion in the form we know it is clearly something devised by humans. I also don’t think that if we wiped out all religious sentiment from the planet, it would return with nearly the ubiquity it has today. We simply know too much about what makes things happen, and we still have no evidence for gods.

social class system.  It’s an objective fact that some people are smarter than others and some make more money than others. But the idea that this makes some people superior to others is clearly a social construct, and a maladaptive one. Indian castes are similar, but have been genetically separated for so long via historical origins as well as prohibitions on intermarriage that now there are correlations between one’s caste and one’s genes.

monarchy. A social construct and, I think, another maladaptive one.

marriage. A social construct; many societies don’t have marriage in the way we know it. The rules, rituals, and laws about marriage have all been made up by society.

countries. Clearly social constructs based on human history and either warfare or general agreement among different groups of people.

money (see above).

biological species. Not a social construct in general, but a reality existing independent of humans, most obvious in sexually reproducing animals but also in many plants (animals, after all, choose to mate with members of their own species, and that choice has nothing to do with human consensus). For a full-scale justification of species as real groups, independent of human conception, see Chapter 1 of my book with Allen Orr, Speciation.

disability. Another slippery one. Clearly if someone has lost their sight or their limbs, they are not as “able” to do some stuff as people who are relatively intact, though they may develop compensatory skills (like more acute hearing in the deaf) that make them “super able” in other ways. Ergo the term “differently abled.” In general, the idea that people with such losses should have interventions to compensate for them and enable them to participate more fully in society reflects both an objective reality (e.g., for the blind) and a social convention (our moral view that the disabled deserve to be accommodated).

I should add here that I see morality as perhaps the most prominent social construct, for while it’s a fact that societies have moral systems, the specific actions viewed as “good” or “bad” have no objective justification or even a label independent of human agreement.

race. This is the most hot-button of the topics, so I’ve saved it for last. Clearly race is a “social construct” if by the term you mean that “races are absolutely distinguishable groups of individuals with substantial and diagnostic genetic differences.”  The old Carleton Coon-ian races of “Caucasoid, Mongoloid, Capoids, Congoids, and Australoids” have gone down the drain.

On the other hand, multi-site genetic analysis shows, in general, that humans do fall into groups largely distinguishable from their DNA, though those groups are overlapping and show gene mixing, so that many individuals cannot be said to fall into a given group. But the grouping of humans can, with fair accuracy, give an idea of someone’s geographic origins and ethnicity, because it reflects an ancient geographic separation of populations that led to their genetic differentiation. As we know, the amount of diversity within any given group exceeds the diversity between groups, but that doesn’t mean that you can’t use multiple segments of DNA, combined, to diagnose someone’s ancestry and ethnicity.

Multilocus groupings of humans, for example, show that they can be divided into various fairly distinct genetic clusters, ranging from 4-7, and which correspond roughly to areas where humans were genetically isolated (Africa, Oceania, East Asia, the Americas, etc.)  In the U.S., multi-site cluster analysis identifies four clusters, corresponding to whites, Hispanics, African-Americans, and East Asians (Chinese and Japanese). Further, when you look at someone’s genetic profile and put it into one of those four clusters, and then ask them, without that knowledge, what their self-identified “race” is, the match between genetics and self-identified “race” is remarkable. As the paper of Tang et al. notes:

“Of 3,636 subjects of varying race/ethnicity, only 5 (0.14%) showed genetic cluster membership different from their self-identified race/ethnicity.”

I won’t cite other studies showing that you can identify the location of one’s genetic ancestors with remarkable accuracy. The point is that this correspondence between genes and ancestry, and between phenotype (correlated with ancestry) and genes means that “race”, while a loaded term—I use “ethnic groups” as a substitute—has some basis in biological reality and therefore is not a social construct. If the concept of “race” (or “ethnicity”, as I prefer to say) were purely an agreement of people within society having nothing to do with objective reality, you wouldn’t see the correspondence between how one identifies themselves and the code in their DNA. I hasten to add that these biological identifiers of races say nothing about hierarchies, but they are biologically and evolutionarily meaningful.
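To make the multilocus point concrete, here is a minimal sketch in Python (NumPy and scikit-learn). Everything in it is simulated and hypothetical, and the generic KMeans clustering is a stand-in for the model-based methods actually used in studies like Tang et al.; the point is only the statistical principle that modest per-locus differences, aggregated over a few hundred loci, let a clustering algorithm recover the source populations almost perfectly.

# A minimal, simulated illustration (NOT the Tang et al. method): two
# hypothetical populations differ only modestly in allele frequency at any
# single locus, yet clustering on ~300 loci at once recovers the labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_per_pop, n_loci = 200, 300

# Shared baseline allele frequencies, nudged up in one population and down in the other.
base = rng.uniform(0.2, 0.8, n_loci)
delta = rng.normal(0, 0.08, n_loci)
freq_a = np.clip(base + delta, 0.01, 0.99)
freq_b = np.clip(base - delta, 0.01, 0.99)

# Genotypes coded as 0/1/2 copies of the reference allele at each locus.
geno_a = rng.binomial(2, freq_a, size=(n_per_pop, n_loci))
geno_b = rng.binomial(2, freq_b, size=(n_per_pop, n_loci))
genotypes = np.vstack([geno_a, geno_b])
truth = np.array([0] * n_per_pop + [1] * n_per_pop)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(genotypes)

# Cluster numbering is arbitrary, so score whichever assignment matches better.
agreement = max((labels == truth).mean(), (labels != truth).mean())
print(f"cluster/population agreement: {agreement:.1%}")  # close to 100%

The within-population variation at any single locus dwarfs the between-population difference, exactly as noted above; the diagnostic power comes only from combining many loci.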

All this discussion goes to show several things. First, the concept of a “social construct” is bandied about widely, but often used either inaccurately or is not defined at all. Some things seen as social constructs, like sex and race—or species, for that matter, as some misguided biologists have asserted that species in nature are purely human-defined segments of a biological continuum—actually have an objective reality independent of human consensus. Others, like a monarchy or Mormonism, are purely the results of a human consensus. Thus you need to explain what you mean when you claim that something is a “social construct”, and explain why that concept has no objective reality but is purely the result of social agreement.

The Middle East and Ireland losing their religion

February 5, 2021 • 9:30 am

Two of the last holdout areas for religion—countries and regions that have historically been resistant to nonbelief—are now becoming surprisingly secular. Those are Ireland in the West and seven countries in the Middle East—at least according to recent surveys. The stunning thing about both areas is how fast the change is coming.

Let’s take the Middle East first. There are two studies mentioned in the article below in Die Deutsche Welle (click on screenshot):

The article itself gives data for only Iran, but you can find data for six other countries by clicking on the article’s link to a study at The Arab Barometer (AB), described as “a research network at Princeton University and the University of Michigan.” (The sample size for that study isn’t easily discernible from the various articles about it).

First, a graph showing a striking increase in secularism across the six nations:

The change from the blue bar to the burgundy one is at most 7 years, yet every index in each country has dropped over that period save for a few indices that appear to be unchanged. The true indices of religiosity itself—religious self-identification and attendance at mosques—have fallen dramatically. And remember, this is over less than a decade.  Trust in religious leaders and Islamist parties has also dropped.

Here’s the summary across all these countries. (Note that many Muslim countries, including those in Africa and the Far East, as well as nations like Saudi Arabia and Yemen, aren’t represented.)

In 2013 around 51% of respondents said they trusted their religious leaders to a “great” or “medium” extent. When a comparable question was asked last year the number was down to 40%. The share of Arabs who think religious leaders should have influence over government decision-making is also steadily declining. “State religious actors are often perceived as co-opted by the regime, making citizens unlikely to trust them,” says Michael Robbins of Arab Barometer.

The share of Arabs describing themselves as “not religious” is up to 13%, from 8% in 2013. That includes nearly half of young Tunisians, a third of young Libyans, a quarter of young Algerians and a fifth of young Egyptians. But the numbers are fuzzy. Nearly half of Iraqis described themselves as “religious”, up from 39% in 2013. Yet the share who say they attend Friday prayers has fallen by nearly half, to 33%. Perhaps faith is increasingly personal, says Mr Robbins.

And some data from Iran, not represented in the survey above. Remember, Iran is a theocracy. The survey is for those over 19, and the sample size is large: over 40,000 “literate interviewees”.

An astonishing 47% have, within their lifetime, gone from being religious to nonreligious, while only 6% went in the opposite direction. As we see for almost every faith, women retain their religion more than men.  The “non-religious people” aren’t all atheists or agnostics, but instead appear to be “nones”—those with no formal affiliation to a faith. (This includes atheists and “spiritual people” as well as goddies who don’t belong to a formal church.)

I say that many are “nones” because another study in Iran, cited in the AB article, showed that 78% of those surveyed in the Middle East believe in God: a lot more than the 47% below who profess to being “non-religious” (of course these are different surveys and might not be comparable). Still, in this other survey, 9% claim that they’re atheists—comparable to the 10% of Americans who self-describe as atheists.

And a general remark by a religion expert whom we’ve encountered before:

The sociologist Ronald Inglehart, Lowenstein Professor of Political Science emeritus at the University of Michigan and author of the book Religious Sudden Decline [sic], has analyzed surveys of more than 100 countries, carried out from 1981-2020. Inglehart has observed that rapid secularization is not unique to a single country in the Middle East. “The rise of the so-called ‘nones,’ who do not identify with a particular faith, has been noted in Muslim majority countries as different as Iraq, Tunisia, and Morocco,” Tamimi Arab added.

Inglehart’s book, Religion’s Sudden Decline, came out January 2, so it’s brand new, and you can order it on Amazon here.

*************

It’s a pity that Grania isn’t here to comment on this article from Unherd’s new news site The Post, as she always had a good take on Catholicism in Ireland (she was, in fact, a German citizen born in South Africa). These data come from a study taken by the World Inequality Database, which I can’t access. I’ll just give the scant data for Ireland presented by David Quinn (click on screenshot):

The proportion of Irish people who say they never go to church:

2011-2016: 19%
2020:     50%

That is a huge jump!

The proportion of Irish people who regularly attend church (once a month or more often):

2011-2016: 33%
2020:     28%

This shows that the drop in Irish religiosity reflects a rise in the number of people who rarely or never go to church, not a falling-off of the regulars. Quinn reports that “just under half of Irish people were coming to church less than once a month four or five year [sic] ago and this is now just 22%. Many of those sporadic attenders have stopped coming altogether.”
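Just to check that the numbers hang together, treat “sporadic” attenders as the remainder once the “regular” (monthly or more) and “never” categories are removed; that residual category is my own inference, not a figure Quinn reports for the earlier period. A quick sketch in Python:

# Figures quoted above; "sporadic" is inferred as 100% minus the other two categories.
periods = {
    "2011-2016": {"regular": 33, "never": 19},
    "2020": {"regular": 28, "never": 50},
}
for label, s in periods.items():
    sporadic = 100 - s["regular"] - s["never"]
    print(f"{label}: regular {s['regular']}%, sporadic {sporadic}%, never {s['never']}%")

# 2011-2016: regular 33%, sporadic 48%, never 19%
# 2020: regular 28%, sporadic 22%, never 50%

The “never” share grew by 31 points while the regulars fell only 5; the inferred sporadic share drops from 48% to 22%, matching Quinn’s “just under half” then and 22% now.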

Over much of the 12 years this website has been going (we started in January 2009), I’ve written posts showing the decline of religiosity in the West, predicting that it is a long-term trend that will end with religion becoming a vestigial social organ. Yes, it will always be with us, but in the future it won’t be very much with us. But I thought the Middle East would be a last bastion of belief, as Islam is so deeply intertwined with politics and daily life. But that appears to be waning as well, for the Middle East is becoming Westernized in many ways, and with that comes Western values and secularism (see Pinker’s Enlightenment Now for discussion of increased secularism and humanism.) This is to be applauded, except by those anti-Whigs who say that religion is good for humanity.

Quinn echoes much of this at the end of his piece, explaining why Ireland remained more religious than England and the countries of Northern Europe:

Secularisation has swept across the whole of the western world, and Ireland is part of the West. It was impossible for Ireland not to eventually be affected by social and intellectual trends elsewhere. What almost certainly delayed secularisation in Ireland is that, in the years after we gained independence, one way of showing we had shaken off British rule was by making Catholicism an integral part of our national identity. As we no longer believe it is necessary to do this, we are now shaking off the Church.

The third factor is that, as a small country it can be particularly hard to stand out from the crowd. Once, we all went to Mass. Now, below a certain age, almost no-one goes. We were a nation of nuns and priests. Now, we are becoming a people with no direct religious affiliation: a country of ‘nones’.

Amen!

h/t: Steve, Clive

Dueling essays that come to the same conclusion about wokeness

January 29, 2021 • 12:45 pm

“We are all on campus now.”
—Andrew Sullivan

Here we have two editorials purporting to say different things, but in the end reaching nearly identical conclusions.

The first, published at Persuasion (click on screenshot), is by a young writer, Sahil Handa, described by Harvard’s Kennedy School as “a rising Junior from London studying Social Studies and Philosophy with a secondary in English. At Harvard, Sahil writes an editorial column for the Crimson and is a tutor at the Harvard Writing Center. He is the co-founder of a Podcast Platform startup, called Project Valentine, and is on the board of the Centrist Society and the Gap Year Society.”

The title of Handa’s piece (below) is certainly provocative—I see it as a personal challenge!—and his conclusion seems to be this: most students at elite colleges (including Harvard) are not really “woke” in the sense of constantly enforcing “political correctness” and trying to expunge those who disagree with them. He admits that yes, this happens sometimes at Harvard, but he attributes wokeness to a vocal minority. The rest of the students simply don’t care, and don’t participate. In the end, he sees modern students as being similar to college students of all eras, especially the Sixties, when conformity meant going to “hippie protests.”  His conclusion: modern “woke” students, and those who don’t participate in the wokeness but also don’t speak up, are evincing the same “old bourgeois values” (presumably conformity). And we shouldn’t worry about them.

It’s undeniable, and Handa doesn’t deny it, that Wokeism is pervasive at Harvard. He just doesn’t see it as universal:

If you’re reading this, chances are you’ve heard of the woke mob that has taken over college campuses, and is making its way through other cultural institutions. I also suspect you aren’t particularly sympathetic to that mob. While I’m not writing as a representative of the woke, I do wish to convince you that they are not as you fear. What you’re seeing is less a dedicated mob than a self-interested blob.

I recently finished three years as a Harvard student—a “student of color,” to be precise—and I passed much of that time with the type you might have heard about in the culture wars. These were students who protested against platforming Charles Murray, the sociologist often accused of racist pseudoscience; these were students who stormed the admissions office to demand the reversal of a tenure decision; these were students who got Ronald Sullivan—civil rights lawyer who chose to represent Harvey Weinstein in court—fired as Harvard dean.

. . . . Nor are most students even involved in campus protest.

There are almost 7,000 undergraduates at Harvard, yet the tenure protest was attended by fewer than 50 students, and a few hundred signed the letters urging the administration to fire Sullivan. Fretful liberals do not pause to think of all the students who didn’t join: those who talked critically of the activists in the privacy of their dorm rooms; those who wrestled with reservations but decided not to voice them; or those who simply decided that none of it was worth their time.

But Sullivan was fired as a dean. The Harvard administration itself engages in a lot of woke decisions, like punishing students for belonging to off-campus single-sex “final clubs” (probably an illegal punishment), and giving them “social justice placemats” in the dining halls to prepare them to go home for the holidays. The woke students may not be predominant, but they are vocal and loud and activist. If that’s all the administration sees and hears, then that’s what they’ll cater to.

But why aren’t the non-woke students protesting the woke ones? Well, Handa says they just don’t care: they’re too busy with their studies. But it’s more than that. As he says above, the students who have “reservations” “decide not to voice them.” Why the reticence, though?

It’s because voicing them turns them into apostates, for their college and post-college success depends on going along with the loud students—that is, acquiescing to woke culture.  The Silent Majority has, by its self-censorship, become part of the very woke culture that breeds that self-censorship. (My emphases in Handa’s excerpt below):

The true problem is this: Four years in college, battling for grades, for résumé enhancements and for the personal recommendations needed to enter the upper-middle-class—all of this produces incentives that favor self-censorship.

College campuses are different than in the Sixties, and students attend for different reasons. Young people today have less sex, less voting power and, for the first time, reduced expectations for the future. Back in the Sixties, campus activists were for free speech, and conservatives were skeptical; today, hardly anybody seems to consistently defend free speech. In 1960, 97% of students at Harvard were white, and almost all of them had places waiting in the upper class, regardless of whether they had even attended university. Today, fewer than 50% of Harvard students are white, tuition rates are 500% higher, and four years at an Ivy League college is one of the only ways to guarantee a place at the top of the meritocratic dog pile.

It would be strange if priorities at university had not changed. It would be even stranger if students had not changed as a result.

Elite education is increasingly a consumer product, which means that consumer demands—i.e. student demands—hold sway over administration actions. Yet most of those student demands are less a product of deeply understood theory than they are a product of imitation. Most students want to be well-liked, right-thinking, and spend their four years running on the treadmill that is a liberal education. Indeed, this drive for career success and social acquiescence are exactly the traits that the admissions process selects for. Even if only, say, 5% of students are deplatforming speakers and competing to be woker-than-thou, few among the remaining 95% would want to risk gaining a reputation as a bigot that could ruin their precious few years at college—and dog them on social media during job hunts and long after.

It seems to me that he does see a difference between the students of then and now. Yes, both are interested in conforming, but they conform to different values, and act in different ways. After all, they want to be “right thinking”, which means not ignoring the woke, but adopting the ideas of the woke.  And that conformity extends into life beyond college, for Harvard students become pundits and New York Times writers. This means that intellectual culture will eventually conform to the woke mold, as it’s already been doing for some time.

In the end, Handa’s argument that we should pretty much ignore Woke culture as an aberration doesn’t hold water, for he himself makes the case that many Harvard students exercise their conformity by not fighting Woke culture, and even by becoming “right-thinking”.  After tacitly admitting that Wokeism is the wave of the future, which can’t be denied, he then reiterates that college Wokeism doesn’t matter. Nothing to see here, folks, except a war among elites, a passing fad:

The battle over wokeism is a civil war among elites, granting an easy way to signal virtue without having to do much. Meantime, the long-term issues confronting society—wage stagnation, social isolation, existential risk, demographic change, the decline of faith—are often overlooked in favor of this theater.

Wokeism does represent a few students’ true ideals. To a far greater number, it is an awkward, formulaic test. Sometimes, what might look to you like wild rebellion on campus might emanate from nothing more militant than old bourgeois values.

Perhaps Stalinism didn’t represent the ideas of every Russian, either, but by authoritarian means and suppression of dissent, all of Russia became Stalinist. The woke aren’t yet like Stalinists (though they are in statu nascendi), but even if they aren’t a majority of the young, the values of the Woke can, and will, become the dominant strain in American liberal culture. For it is the “elites” who control that culture. Even poor Joe Biden is being forced over to the woke Left because he’s being pushed by the woke people he appointed.

***********

Michael Lind has what I think is a more thoughtful piece at Tablet, which lately has had some really good writing. (They’ve been doing good reporting for a while; remember when they exposed the anti-Semitism infecting the leaders of the Women’s March?). Lind is identified by Wikipedia as “an American writer and academic. He has explained and defended the tradition of American democratic nationalism in a number of books, beginning with The Next American Nation (1995). He is currently a professor at the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin.”

Lind’s thesis, and I’ll be brief, is that the nature of American elitism has changed, and has become more woke. It used to be parochial, with each section of the country having its own criteria for belonging to the elite (i.e., attending the best regional rather than national colleges). Now, he says, we have a “single, increasingly homogeneous national oligarchy, with the same accent, manners, values, and educational backgrounds from Boston to Austin and San Francisco to New York and Atlanta.” He sees this as a significant social change: a “truly epochal development.”

Click on the screenshot to read his longer piece:

In some ways, avers Lind, society is more egalitarian than ever; what he means by that is that there is less obvious bigotry and there are fewer impediments to success for minorities. And he’s right:

Compared with previous American elites, the emerging American oligarchy is open and meritocratic and free of most glaring forms of racial and ethnic bias. As recently as the 1970s, an acquaintance of mine who worked for a major Northeastern bank had to disguise the fact of his Irish ancestry from the bank’s WASP partners. No longer. Elite banks and businesses are desperate to prove their commitment to diversity. At the moment Wall Street and Silicon Valley are disproportionately white and Asian American, but this reflects the relatively low socioeconomic status of many Black and Hispanic Americans, a status shared by the Scots Irish white poor in greater Appalachia (who are left out of “diversity and inclusion” efforts because of their “white privilege”). Immigrants from Africa and South America (as opposed to Mexico and Central America) tend to be from professional class backgrounds and to be better educated and more affluent than white Americans on average—which explains why Harvard uses rich African immigrants to meet its informal Black quota, although the purpose of affirmative action was supposed to be to help the American descendants of slaves (ADOS). According to Pew, the richest groups in the United States by religion are Episcopalian, Jewish, and Hindu (wealthy “seculars” may be disproportionately East Asian American, though the data on this point is not clear).

Membership in the multiracial, post-ethnic national overclass depends chiefly on graduation with a diploma—preferably a graduate or professional degree—from an Ivy League school or a selective state university, which makes the Ivy League the new social register. But a diploma from the Ivy League or a top-ranked state university by itself is not sufficient for admission to the new national overclass. Like all ruling classes, the new American overclass uses cues like dialect, religion, and values to distinguish insiders from outsiders.

And that’s where Wokeness comes in. One has to have the right religion (not evangelical), dialect (not southern) and values (Woke ones!):

More and more Americans are figuring out that “wokeness” functions in the new, centralized American elite as a device to exclude working-class Americans of all races, along with backward remnants of the old regional elites. In effect, the new national oligarchy changes the codes and the passwords every six months or so, and notifies its members through the universities and the prestige media and Twitter. America’s working-class majority of all races pays far less attention than the elite to the media, and is highly unlikely to have a kid at Harvard or Yale to clue them in. And non-college-educated Americans spend very little time on Facebook and Twitter, the latter of which they are unlikely to be able to identify—which, among other things, proves the idiocy of the “Russiagate” theory that Vladimir Putin brainwashed white working-class Americans into voting for Trump by memes in social media which they are the least likely American voters to see.

Constantly replacing old terms with new terms known only to the oligarchs is a brilliant strategy of social exclusion. The rationale is supposed to be that this shows greater respect for particular groups. But there was no grassroots working-class movement among Black Americans demanding the use of “enslaved persons” instead of “slaves” and the overwhelming majority of Americans of Latin American descent—a wildly homogenizing category created by the U.S. Census Bureau—reject the weird term “Latinx.” Woke speech is simply a ruling-class dialect, which must be updated frequently to keep the lower orders from breaking the code and successfully imitating their betters.

I think Lind is onto something here, though I’m not sure I agree 100%. This morning I had an “animated discussion” with a white friend who insisted that there was nothing wrong with using the word “Negro”. After all, he said, there’s the “United Negro College Fund.” And I said, “Yeah, and there’s also the National Association for the Advancement of Colored People, but you better not say ‘colored people’ instead of ‘people of color’!” In fact, the term “Negro” would be widely seen as racist now, though in the Sixties it wasn’t, and was used frequently by Dr. King, who almost never used the n-word in public. “Negro” was simply the going term for African-Americans then, but now it’s “people of color”, or, better yet, “BIPOC”. And that will change too. “Gay” has now become a veritable alphabet of initials that always ends in a “+”. “Latinx” isn’t used by Hispanics, but by white people and the media. It’s an elitist thing, as Lind maintains.

But whether this terminology—and its need to constantly evolve, 1984-like—is a way of leveraging and solidifying cultural power, well, I’m not sure I agree. Weigh in below.

Should Ph.D.s call themselves “doctor” in everyday life?

December 13, 2020 • 1:00 pm

UPDATE: At the libertarian website Reason, legal scholar Eugene Volokh has a different take, based partly on what he sees as the overly lax and non-scholarly nature of Jill Biden’s Ed.D.

_____________________

This week’s kerfuffle involves a writer at the Wall Street Journal, Joseph Epstein, taking Jill Biden to task for calling herself “Dr. Biden”—and allowing Joe Biden’s campaign to call her that—when her doctorate was in education (she has two master’s degrees as well). In other words, she’s a Ph.D. In the article below (click on screenshot, or make a judicious inquiry if you can’t access it), Epstein argues that only medical doctors should call themselves “doctor”, and advises Jill Biden to ditch her title.


I have to say that Epstein’s article, which has been universally attacked for being sexist and misogynistic, is indeed patronizing and condescending (Epstein has an honorary doctorate, but not an “earned” one). I’d be loath to call it sexist on those grounds alone, but the tone of the article, and the words he uses, do seem sexist. Here are two excerpts:

Madame First Lady—Mrs. Biden—Jill—kiddo: a bit of advice on what may seem like a small but I think is a not unimportant matter. Any chance you might drop the “Dr.” before your name? “Dr. Jill Biden ” sounds and feels fraudulent, not to say a touch comic. Your degree is, I believe, an Ed.D., a doctor of education, earned at the University of Delaware through a dissertation with the unpromising title “Student Retention at the Community College Level: Meeting Students’ Needs.” A wise man once said that no one should call himself “Dr.” unless he has delivered a child. Think about it, Dr. Jill, and forthwith drop the doc.

As for your Ed.D., Madame First Lady, hard-earned though it may have been, please consider stowing it, at least in public, at least for now. Forget the small thrill of being Dr. Jill, and settle for the larger thrill of living for the next four years in the best public housing in the world as First Lady Jill Biden.

The use of the word “kiddo” and the reference to her as “Dr. Jill” do seem sexist, though of course there’s “Dr. Phil” (Ph.D., clinical psychology) and a whole host of other doctors, including M.D. medical experts on the evening news, who are called by their first name. (“Thanks, Dr. Tim”.) Those are usually terms of affection, though, while “Dr. Jill” is clearly not meant affectionately. And why the denigration for the title of her thesis? Finally—”kiddo”? Fuggedabout it. The undoubted truth that women’s credentials have historically been impugned also would lead one to see Epstein’s piece as falling into that tradition.

I sure as hell wouldn’t have written that article, and, as somebody suggested in the pile-on, would Epstein have written it about a man? Where’s his critique of “Dr. Phil”?

The fracas is described in a piece by Matt Cannon in Newsweek and the piece below in the New York Times. I haven’t been able to find a single article about Epstein’s op-ed piece that doesn’t damn it to hell for sexism, and, in fact, although he was a long-term honorary emeritus lecturer at Northwestern, that university criticized his piece (official statement: “Northwestern is firmly committed to equity, diversity and inclusion, and strongly disagrees with Mr. Epstein’s misogynistic views”). His picture has also been removed from Northwestern’s website, showing that he’s toast.  Were Epstein at the University of Chicago, my school wouldn’t have made any official statement, as it’s not 100% clear that his piece was motivated by misogyny, much as the articles about it suggest.

But that leaves the question “should anyone with a Ph.D. call themselves ‘doctor'”? My answer would be “it’s up to them.”

But I have to say that I have never been able to call myself “Doctor Coyne” except as a humorous remark or in very rare situations that I can’t even remember. I will allow other people to call me “Doctor Coyne,” but as soon as I have a relationship with them, the “Doctor” gets dropped for “Jerry.” My undergraduates would usually call me “Professor Coyne”, or sometimes “Doctor Coyne,” and that was okay, for being on a first-name basis with them effaces the mentor/student relationship that is useful when teaching. But to my grad students I was always “Jerry.”

It is true that I worked as hard as, or even harder than, medical students do to earn the right to be called “Doctor”, taking five years of seven-days-a-week labor to get it, but somehow I don’t feel that I should get a lifetime honorific for that. I got a Ph.D. so I could become a professional evolutionist, not to command respect from people, many of whom might mistakenly think I was a medical doctor.  The New York Times quotes Miss Manners here:

Judith Martin, better known as the columnist Miss Manners, said her father, who had a Ph.D. in economics, insisted on not being called Dr. and implored his fiancée, Ms. Martin’s mother, to print new wedding invitations after the first version included the title.

“As my father used to say, ‘I’m not the kind of doctor who does anybody any good,’” Ms. Martin said in an interview on Saturday. “He didn’t feel it was dignified. I am well aware that this is a form of reverse snobbery.”

Still, Ms. Martin said, “I don’t tell people what to call themselves and I’m aware that women often have trouble with people who don’t respect their credentials.”

I’m pretty much on board with both her and her father here, though I’d take issue with saying my refusal to call myself “Doctor Coyne” is reverse snobbery. Rather, it’s part of my lifelong desire not to be seen as better than other people just because I got a fancy education. I remember that when I got my first job at the University of Maryland, I was given an empty lab on the second floor of the Zoology Building. But in it was a box containing all the application folders for everyone who had applied for the job I got. After a few days of resisting, I peeked into my own folder to see my letters of recommendation. And I’ll always remember Dick Lewontin’s letter, which, though highly positive, added something like this: “If Jerry has any faults, it’s that he is too self-denigrating, always underselling himself.”  Well, that may be true, but it’s better to undersell yourself than oversell yourself! I’ve always detested the pomposity of accomplished academics. Other academics think that using “Dr.” in the title lends cachet to their books (even “trade books”). More power to them, but I could never bring myself to do that.

One other interesting point: the AP Style Manual agrees with Epstein about the use of “Dr.”  According to the Newsweek piece:

The AP stylebook, a writing guide used by major U.S. publications including Newsweek, also suggests that the term doctor should not be used by those with academic doctoral degrees.

Its latest edition reads: “Use Dr. in first reference as a formal title before the name of an individual who holds a doctor of dental surgery, doctor of medicine, doctor of optometry, doctor of osteopathic medicine, doctor of podiatric medicine, or doctor of veterinary medicine.”

It adds: “Do not use Dr. before the names of individuals who hold other types of doctoral degrees.”

So you could say Epstein was adhering to that rule, but the tone of his piece is snarky and condescending. The opprobrium he’s earned for it is largely deserved.

I suppose I adhere to the AP dictum on this website, too, as it seems weird to call my colleagues “Dr.”, but less weird to call medical doctors “Dr. X”.

(Epstein also denigrates honorary doctorates, for they’re not markers of scholarly achievement—except at the University of Chicago, which may be the only school in the U.S. that confers honorary degrees only on scholars—never to actors, cartoonists, sports figures, and so on. But I don’t know anybody who calls themselves “Dr.” with only an honorary doctorate.)

So if Jill Biden wants to be called “Dr. Biden,” it’s churlish to refuse—after all, she did earn the right to use it. And it’s a matter of simple civility to address people how they want to be addressed.

I have only one caveat here: nobody—be they medical doctors or Ph.D.s—should ever put “Dr.” before their names on their bank checks. That’s where I draw the line. It looks like a move of pompous one-upmanship—like you’re trying to lord it over salespeople, cashiers, and bank tellers.

Andrew Sullivan: The genetic underpinnings of IQ mean that we shouldn’t value it so much, that we should ditch the meritocracy, and that we should become more of a communist society

September 12, 2020 • 11:30 am

Andrew Sullivan has devoted a lot of the last two editions of The Weekly Dish to the genetics of intelligence, perhaps because he’s taken a lot of flak for supposedly touting The Bell Curve and the genetic underpinnings of IQ. Now I haven’t read The Bell Curve, nor the many posts Sullivan’s devoted to the genetics of intelligence (see the long list here), but he’s clearly been on the defensive about his record, which, as far as I can see, does emphasize the genetic component of intelligence. But there’s nothing all that wrong with that: a big genetic component of IQ is something that all geneticists save Very Woke Ones accept. But as I haven’t read his posts, I can neither defend nor attack him on his specific conclusions.

I can, however, briefly discuss this week’s post, which is an explication and defense of a new book by Freddie DeBoer, The Cult of Smart. (Note: I haven’t read the book, either, as it’s just out.) You can read Sullivan’s piece by clicking on the screenshot below (I think it’s still free for the time being):

The Amazon summary of the book pretty much mirrors what Sullivan says about it:

. . . no one acknowledges a scientifically-proven fact that we all understand intuitively: academic potential varies between individuals, and cannot be dramatically improved. In The Cult of Smart, educator and outspoken leftist Fredrik deBoer exposes this omission as the central flaw of our entire society, which has created and perpetuated an unjust class structure based on intellectual ability.

Since cognitive talent varies from person to person, our education system can never create equal opportunity for all. Instead, it teaches our children that hierarchy and competition are natural, and that human value should be based on intelligence. These ideas are counter to everything that the left believes, but until they acknowledge the existence of individual cognitive differences, progressives remain complicit in keeping the status quo in place.

There are several points to “unpack” here, as the PoMos say. Here is what Sullivan takes from the book, and appears to agree with:

1.) Intelligence is largely genetic.

2.) Because of that, intellectual abilities “cannot be dramatically improved”.

3.) Because high intelligence is rewarded in American society, people who are smarter are better off, yet they don’t deserve to be because, after all, they are simply the winners in a random Mendelian lottery of genes fostering high IQ (I will take IQ as the relevant measure of intelligence, which it seems to be for most people, including Sullivan).

4.) The meritocracy is thus unfair, and we need to fix it.

5.) We can do that by adopting a version of communism, whereby those who benefit from the genetic lottery get taxed at a very high rate, redistributing the wealth that accrues to them from their smarts. According to DeBoer via Sullivan,

For DeBoer, that means ending meritocracy — for “what could be crueler than an actual meritocracy, a meritocracy fulfilled?” It means a revolutionary transformation in which there are no social or cultural rewards for higher intelligence, no higher after-tax income for the brainy, and in which education, with looser standards, is provided for everyone on demand — for the sake of nothing but itself. DeBoer believes the smart will do fine under any system, and don’t need to be incentivized — and their disproportionate gains in our increasingly knowledge-based economy can simply be redistributed to everyone else. In fact, the transformation in the economic rewards of intelligence — they keep increasing at an alarming rate as we leave physical labor behind — is not just not a problem, it is, in fact, what will make human happiness finally possible.

If early 20th Century Russia was insufficiently developed for communism, in other words, America today is ideal. . .

Sullivan adds that the moral worth of smart people is no higher than that of people like supermarket cashiers, trash collectors, or nurses. (I agree, but I’m not sure that smart people are really seen as being more morally worthy. They are seen as being more deserving of financial rewards.)

6.) Sullivan says that his own admitted high intelligence hasn’t been that good for him, and he doesn’t see it as a virtue:

For me, intelligence is a curse as well as a blessing — and it has as much salience to my own sense of moral worth as my blood-type. In many ways, I revere those with less of it, whose different skills — practical, human, imaginative — make the world every day a tangibly better place for others, where mine do not. Being smart doesn’t make you happy; it can inhibit your sociability; it can cut you off from others; it can generate a lifetime of insecurity; it is correlated with mood disorders and anxiety. And yet the system we live in was almost designed for someone like me.

This smacks a bit of humblebragging, but I’ll take it at face value. It’s still quite odd, though, to see a centrist like Sullivan, once a conservative, come out in favor of communism and radical redistribution of wealth. So be it. But do his arguments make sense?

Now Sullivan’s emphasis on the genetic basis of intelligence is clearly part of his attack on the extreme Left, which dismisses hereditarianism because it’s said (falsely) to imply that differences between groups, like blacks and whites, are based on genetic differences, and (also falsely) to imply that traits like intellectual achievement cannot be affected by environmental intervention (like learning). Here Andrew is right: Blank-Slateism is the philosophy of the extreme left, and it’s misguided in several ways. Read Pinker’s book The Blank Slate if you want a long and cogent argument about the importance of genetics.

But there are some flaws, or potential flaws, in Sullivan’s argument, which I take to be points 1-5 above.

First, intelligence is largely genetic, but not completely genetic. There is no way for a given person to determine what proportion of their IQ is attributable to genes and how much to environment or to the interaction between the two: that question doesn’t even make sense. But what we can estimate is the proportion of variation in IQ among people in a population that is due to variation in their genes. This figure is known as the heritability of IQ, and it can be calculated (if you have the right data) for any trait. Heritability ranges from 0 (all the variation we see in the trait is environmental, with no component due to genetics) to 1, or 100% (all the observed variation in the trait is due to variation in genes; eye color is largely at this end of the scale).
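In symbols, this is just the standard quantitative-genetics bookkeeping, not anything specific to Sullivan or DeBoer:

\[
h^2 \;=\; \frac{V_G}{V_P} \;=\; \frac{V_G}{V_G + V_E},
\]

where \(V_G\) is the part of the variance in IQ scores that traces to genetic differences among people, \(V_E\) is the part that traces to environmental differences (with gene-by-environment interaction folded in, if you like), and \(V_P\) is the total variance we actually observe. A heritability of 0 means all the observed variation is environmental; a heritability of 1 means all of it tracks genetic differences.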

A reasonable value for the heritability of IQ in a white population is around 0.6, so about 60% of the variation we see in that population is due to variation in genes, and the other 40% to the different environments experienced by different people as well as to the interaction between their genes and their environments. That means, first of all, that an appreciable proportion of the variation in intelligence is due to variation in people’s environments. And that means that while a person’s IQ doesn’t change much over time, if you let people develop in different environments you can change their IQs in different ways—up or down. IQ is not something that is unaffected by the environment.
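To put rough numbers on that (IQ is conventionally scaled to a standard deviation of 15 points, so this is only an illustration of what a heritability of 0.6 means, not a new piece of data):

\[
V_P = 15^2 = 225, \qquad V_G \approx 0.6 \times 225 = 135, \qquad V_E \approx 0.4 \times 225 = 90.
\]

In other words, even with a heritability of 0.6, about 90 of the 225 units of variance on the IQ scale are environmental, which is exactly why changing environments has room to push individual scores around.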

Related to that is the idea that a person’s IQ is not fixed at birth by their genes, but can be changed by rearing them in different environments, so it’s not really valid to conclude (at least from the summary above) that “academic potential cannot be dramatically improved”. Indeed, Sullivan’s summary of DeBoer’s thesis is that the difference in IQ between blacks and whites (an average of 15 points, or one standard deviation) is not due to genes, but to different environments faced by blacks and whites:

DeBoer doesn’t explain it as a factor of class — he notes the IQ racial gap persists even when removing socio-economic status from the equation. Nor does he ascribe it to differences in family structure — because parenting is not that important. He cites rather exposure to lead, greater disciplinary punishment for black kids, the higher likelihood of being arrested, the stress of living in a crime-dominated environment, the deep and deadening psychological toll of pervasive racism, and so on: “white supremacy touches on so many aspects of American life that it’s irresponsible to believe we have adequately controlled for it in our investigations of the racial achievement gap.”

Every factor cited here is an environmental factor, not a genetic one. And if those factors can add up to lowering your IQ by 15 points, on what basis does DeBoer conclude (with Sullivan, I think) that you cannot improve IQ or academic performance by environmental intervention? Fifteen points is indeed a “dramatic improvement”, which, according to DeBoer, we’d get simply by letting black kids grow up in the same environments as white kids. (I note here that I don’t know how much, if any, of that 15-point difference reflects genetic versus environmental differences; what I’m doing is simply pointing out that even DeBoer acknowledges that you can change IQ a lot by changing environments.)

Further, what you do with your intelligence can itself be affected by the environment. If you’re lazy and don’t want to apply yourself, a big IQ isn’t necessarily going to make you successful in society. So there is room to improve people’s performance through proper education and by instilling motivation. This doesn’t mean that IQ isn’t important as a correlate of “success” (however it’s measured) in American society—just that environmental factors, including education and upbringing, are also quite important.

What about genetic determinism and the meritocracy? It’s likely that many other factors that lead to success in the U.S. have a high heritability as well. Musical ability may be one of these, and therefore those who get rich not because they have high IQs but because they make good music that sells also have an “unfair advantage”. What about good looks? Facial characteristics are highly heritable, and insofar as good looks can give you a leg up as a model or an actor, that too is an unfair genetic win. (I think there are data showing that better-looking people are, on average, more successful.) In fact, since nobody is “responsible” for either their genes or their environments, as a determinist I think that nobody really “deserves” what they get, since nobody chooses to be successful or a failure. Society simply rewards people who have certain traits and punishes those who have other traits. With that I don’t have much quarrel, except about the traits that are deemed reward-worthy (viz., the Kardashians).

This means, if you take Sullivan and DeBoer seriously, that we must eliminate not just the meritocracy for intelligence but the meritocracy for everything else: musical ability, good looks, athletic ability, and so on. In other words, everybody who is successful should be taxed to the extent that, after redistribution, everyone in society gets the same amount of money and the same goods. (It’s not clear from Sullivan’s piece to what extent things should be equalized, but if you’re a determinist and buy his argument, everyone should end up on a level playing field.)

After all, if “the smart don’t need to be incentivized”, why does anybody? The answer, of course, is that the smart do need to be incentivized, as does everyone else. The failure of purely communist societies to achieve parity with capitalistic ones already shows that. (I’m not pushing here for pure capitalism: I like a capitalistic/socialistic hybrid, as in Scandinavia.)  And I wonder how much of Sullivan’s $500,000 income he’d be willing to redistribute.

If you think I’m exaggerating Sullivan’s approbation of communism, at least in theory, here’s how he ends his piece, referring to his uneducated grandmother who cleaned houses for a living.

My big brain, I realized, was as much an impediment to living well as it was an advantage. It was a bane and a blessing. It simply never occurred to me that higher intelligence was in any way connected to moral worth or happiness.

In fact, I saw the opposite. I still do. I don’t believe that a communist revolution will bring forward the day when someone like my grandmother could be valued in society and rewarded as deeply as she should have been. But I believe a moral revolution in this materialist, competitive, emptying rat-race of smarts is long overdue. It could come from the left or the right. Or it could come from a spiritual and religious revival. Either way, Freddie DeBoer and this little book are part of the solution to the unfairness and cruelty of it all. If, of course, there is one.

Let’s forget about the “spiritual and religious revival” (I wrote about that before), and realize that what we have here is a call for material equality, even if people aren’t morally valued as the same. And why should we empty the rat-race just of smarts? Why not empty it of everything that brings differential rewards, like writing a well-remunerated blog? In the end, Sullivan’s dislike of extreme leftism and its blank-slate ideology has, ironically, driven him to propose a society very like communism.

Are people becoming more talkative during the pandemic?

July 28, 2020 • 8:15 am

I’ve noticed in the last couple of months that people I talk to, either over the phone or in person, seem to have become much more loquacious, to the point where it seems that 90% or more of the conversational airtime is taken up by one person’s words. (To be sure, I’m often laconic.) Now I haven’t quantified this, though I could do so, at least over the phone with a stopwatch. But subjectively, it seems to me a real change over time.

The first thing to determine is whether the subjective change is an objective one. To determine that, I would have to have timed participation in conversations over the last year or so and compared the conversational “pie” before and after lockdown. And I don’t have those data.

In the absence of hard data, it’s possible that I’ve simply become more peevish and impatient, so that it only seems that people are monopolizing conversations more. And indeed, I think I have become more peevish, though I think many people have changed in this way as well.

But let’s assume it’s real: that the proportion of conversational time in a two-person chat has become more unequal since March.  If that’s the case, why?

The only explanation I can imagine is that people who are more socially isolated have become more eager to talk, and that’s manifested in a higher degree of conversational dominance. Of course if two such chatty people meet, it could be a festival of interruptions and “talking over,” but I tend to become monosyllabic, and this is exacerbated when I am peevish.  My philosophy has always been that in a conversation, you learn nothing by talking but only by listening.

At any rate, am I imagining this or have others noticed it?