Well, a paper criticizing the “woke” aspects of science has finally appeared in a prominent peer-reviewed scientific journal. Peer-reviewed critiques of scientific censorship or ideological pressure have appeared before: in the Journal of Controversial Ideas (a push for judging science on merit rather than ideology) and in the Skeptical Inquirer (an explication of how evolutionary biology is being distorted by ideology). I was an author of both of those papers (the second was reviewed, but not by a group of scientists in the field), but I’m not on the present one (I wish I were!).
The article below, which just came out in the Proceedings of the National Academy of Sciences (PNAS), a prestigious journal, has a panoply of authors, many of whom you will recognize. It was certainly peer-reviewed, and its topic is the censorship of scientific papers, defined as “actions aimed at obstructing particular scientific ideas from reaching an audience for reasons other than low scientific quality.” It presents the problem, shows who the censors are, gives examples of censorship and studies of the problem as a whole, analyzes the motives of censors, explains why censorship is bad for both science and society, and suggests some fixes that might reduce censorship.
Click below to see the paper, and then below that to see an article about the paper, written by two of its authors, in The Chronicle of Higher Education. If you want just a quick take, read the Chronicle article, though the PNAS paper itself is accessible to the nonscientific reader.
The two main conclusions of the PNAS paper are these:
a. Censorship of papers is increasing rapidly, and often takes the form of “soft” censorship, which is censorship based on social opprobrium, rather than outright banning by authorities (“hard censorship”).
b. The censors are usually fellow scientists, and usually act not out of malicious motives, but out of “prosocial ones”; that is, they try to keep stuff out of the literature because they think it’s harmful for society.
The diagram below, from the paper, is really a summary of its points—except for fixes of the problem.
As I said, most censorship is soft; as the paper notes:
Contemporary scientific censorship is typically the soft variety, which can be difficult to distinguish from legitimate scientific rejection. Science advances through robust criticism and rejection of ideas that have been scrutinized and contradicted by evidence. Papers rejected for failing to meet conventional standards have not been censored. However, many criteria that influence scientific decision-making, including novelty, interest, “fit”, and even quality are often ambiguous and subjective, which enables scholars to exaggerate flaws or make unreasonable demands to justify rejection of unpalatable findings.
And it’s also prosocial: meant to prevent the “harm” that we so often see claimed to occur when one’s own ideology is violated:
But censorship can be prosocially motivated. Censorious scholars often worry that research may be appropriated by malevolent actors to support harmful policies and attitudes. Both scholars and laypersons report that some scholarship is too dangerous to pursue, and much contemporary scientific censorship aims to protect vulnerable groups. Perceived harmfulness of information increases censoriousness among the public, harm concerns are a central focus of content moderation on social media, and the more people overestimate harmful reactions to science, the more they support scientific censorship. People are especially censorious when they view others as susceptible to potentially harmful information. In some contemporary Western societies, many people object to information that portrays historically disadvantaged groups unfavorably, and academia is increasingly concerned about historically disadvantaged groups. Harm concerns may even cause perceptions of errors where none exist.
Prosocial motives for censorship may explain four observations: 1) widespread public availability of scholarship coupled with expanding definitions of harm has coincided with growing academic censorship; 2) women, who are more harm-averse and more protective of the vulnerable than men, are more censorious; 3) although progressives are often less censorious than conservatives, egalitarian progressives are more censorious of information perceived to threaten historically marginalized groups; and 4) academics in the social sciences and humanities (disciplines especially relevant to humans and social policy) are more censorious and more censored than those in STEM.
Now the data adduced in the paper largely involve not censorship of papers, but censorship of academics, especially that compiled by the Foundation for Individual Rights and Expression (FIRE). These cases are not censorship in the strict sense used by the authors (censorship of scientific papers), but are still attempts to keep academics’ ideas in all areas from reaching the public. The caption for the three plots given below (the paper has three more) is “Characteristics of higher education scholars targeted for their pedagogy and/or critical inquiry between 2000 and June, 2023 (n = 486) and characteristics of their targeters.”
The figures below are FIRE’s data on not just science, but all forms of scholarship:
First, the rise in censorship; the figures for this year are incomplete, and there was a drop between 2021 and 2022. But look at the increase since 2000:
Below: which disciplines are targeted (blue means the targeted scholar was attacked by someone from his/her left, and red denotes attacks from his/her right). Overall, and as I’ve noted often, most attacks came from the left. Note too that the humanities experience more targeting incidents than does science.
Finally, the topics targeted for censorship. As you might expect, race and gender are the top two, though institutional policy is a close third. As race and gender are closely connected with claims of oppression, it’s not surprising that prosocially-motivated attacks on scholarship involve trying to prevent harm to minorities.
The diagram below, which takes into account all attempts at censorship, shows that most come from the left of the attacker (blue) rather than the right (gray is either unknown or “neither”). This again is no surprise; the right is not only less often represented in colleges, but is also less likely to engage in prosocially motivated censorship:
The PNAS article is copiously documented (there are 130 references), and I like it. But there are two problems that I think slightly reduce its effectiveness. The first is that the article lacks tangible examples of how odious this kind of censorship can be. Examples really hit home, especially when you see how hypocritical and sneaky authors and journals can be, even when acting prosocially. In fact, only one case is described in both the paper and the Chronicle article below, but it’s a doozy, well known among many of us. This was an article retracted not because it had scientific problems, but because its conclusions violated what gender ideologues want to see. It also led to a shameful call for general censorship of articles that might be “harmful”.
The Chronicle summary (click to read):
Both the paper and the Chronicle article have nearly word-for-word identical descriptions of the incident (this, by the way, is self-plagiarism), but the Chronicle piece has links, so I’ll excerpt that one:
Moral motives have long influenced scientific decision-making. What’s new is that journals are now explicitly endorsing moral concerns as legitimate reasons to suppress science. Following the publication (and retraction) of an article reporting that the mentees of male mentors, on average, had more scholarly success than did the mentees of female mentors, Nature Communications released an editorial promising increased attention to potential harms. A subsequent Nature editorial stated that authors, reviewers, and editors must consider the potentially harmful implications of research, and a Nature Human Behaviour editorial declared the publication might reject or retract articles that have the potential to undermine the dignity of particular groups of people. In effect, editors are granting themselves vast leeway to censor high-quality research that offends their own moral sensibilities, or those of their most sensitive readers.
The paper, found at the first link (and now retracted), reported that in mentor/mentee relationships in science, the quality of the mentor had a positive effect on the career of the mentee. But it also reported this:
We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors. While current diversity policies encourage same-gender mentorships to retain women in academia, our findings raise the possibility that opposite-gender mentorship may actually increase the impact of women who pursue a scientific career. These findings add a new perspective to the policy debate on how to best elevate the status of women in science.
That is, same-sex mentorship of women seemed to be less helpful for their careers than being mentored by a male. Now this is, of course, ideologically unacceptable, and, though as far as I know the data were sound, it raised a ruckus that led the Nature Communications editors to retract the paper.
They retracted the paper simply because of criticisms that the results weren’t ideologically comfortable, and before the criticisms were considered. Also, have a look at the two editorials, especially the Nature Human Behaviour one, which became the subject of considerable pushback, including this tweet by Steve Pinker (an author of the present PNAS paper); see also my post about the fracas, which contains another long tweet by Michael Shermer.
Journalists & psychologists take note: Nature Human Behavior is no longer a peer-reviewed scientific journal but an enforcer of a political creed. I won't referee, publish, or cite (how do we know articles have been vetted for truth rather than political correctness)? https://t.co/3qXFGizt6h pic.twitter.com/G5BgB2hpqD
— Steven Pinker (@sapinker) August 26, 2022
At any rate, I’d like to have seen more examples of censored papers that would drive home the repugnance of censorship and the urgency of fixing it. One that came immediately to mind was James Damore’s firing at Google for suggesting that inequities in representation may be due to preferences rather than bias. Anna Krylov, one of the authors of the PNAS paper, tells me she’s writing a blog post for the Heterodox STEM site that will give several more examples of censorship, and I’ll highlight them when her piece appears.
Finally, what are the harms of censorship and how can we fix them? I won’t go into detail about this (the paper does more), except to say that the harms are obvious. First, censorship keeps the truth hidden, and the truth not only will out, but may be valuable. While it’s possible in principle that some solid science should be suppressed because it offends certain groups or leads to “harm”, I can’t think of any scientific result that really should be censored because of its implications. Readers may want to suggest some below.
Second, scientific censorship could erode the public’s trust in science as well as scientists’ trust in the scientific literature. As the PNAS paper notes,
Censorship may be particularly likely to erode trust in science in contemporary society because scientists now have other means (besides academic journals) to publicize their findings and claims of censorship. If the public routinely finds quality scholarship on blogs, social media, and online magazines by scientists who claim to have been censored, a redistribution of authority from established scientific outlets to newer, popular ones seems likely. Given the many modes of dissemination and public availability of data, proscribing certain research areas for credentialed scientists may give extremists a monopoly over sensitive research. Scientific censorship may also reduce trust in the scientific literature among scientists, exacerbating hostility and polarization. If particular groups of scholars feel censored by their discipline, they may leave altogether, creating a scientific monoculture that stifles progress.
So what’s to be done? The PNAS article gives a whole laundry list of fixes, nearly all of which are good. They include making reviews of papers, both accepted and rejected, public; third-party audits of scientific journals to measure the quality of their editorial practice, their independence from sociopolitical pressures, and so on; and making serious calls for the retraction of papers publicly available to concerned scholars. This all falls under the rubric of transparency, and the names involved could remain anonymous.
The second problem is the one “fix” that sounds hard to implement: testing the proposition that some science creates more harm than good. The authors suggest that there might be some way to measure this, but I’m not convinced:
Scholars should empirically test the costs and benefits of censorship against the costs and benefits of alternatives. They could compare the consequences of retracting an inflammatory paper to 1) publishing commentaries and replies, 2) publishing opinion pieces about the possible applications and implications of the findings, or 3) simply allowing it to remain published and letting science carry on. Which approach inspires more and better research? Which approach is more likely to undermine the reputation of science? Which approach minimizes harm and maximizes benefits? Given ongoing controversies surrounding retraction norms, an adversarial collaboration (including both proponents and opponents of harm-based retractions) might be the most productive and persuasive approach to these research questions.
Frankly, I don’t think this is feasible; such controlled tests can’t be done! When Luana Maroja and I wrote our paper on the ideological erosion of science, we discussed whether any solid scientific result should be censored because of its possible harms. After much discussion, we agreed on “no.”
Readers may dissent, and dissent is welcome in the comments. But the point of this post is that censorship is pervasive in science, that it’s generally harmful because it suppresses the dissemination of truth in order to preserve a favored ideology, and that scientists should stop it. That, of course, would mean keeping the tentacles of the ideological octopus off of science, but that doesn’t seem to be in the offing. I hope that the new PNAS paper will help keep those suckers out of our field.