Censorship in science: a new paper and analysis

November 25, 2023 • 12:00 pm

Well, a paper criticizing the “woke” aspects of science has finally appeared in a peer-reviewed scientific journal, though peer-reviewed critiques of scientific censorship or ideological pressure have appeared in the Journal of Controversial Ideas (a push for judging science on merit rather than ideology), and in the Skeptical Inquirer (an explication of how evolutionary biology is being distorted by ideology). I was an author of both of those papers (the second was reviewed, but not by a group of scientists in the field), but I’m not on the present one (I wish I were!).

The article below, which just came out in the Proceedings of the National Academy of Sciences (PNAS), a prestigious journal, has a panoply of authors, many of whom you will recognize.  It was certainly peer-reviewed, and its topic is the censorship of scientific papers, defined as “actions aimed at obstructing particular scientific ideas from reaching an audience for reasons other than low scientific quality.”  It presents the problem, shows who the censors are, gives examples of censorship and studies of the problem as a whole, analyzes the motives of censors, explains why censorship is bad for both science and society, and suggests some fixes that might reduce censorship.

Click below to see the paper, and then below that to see an article about the paper, written by two of its authors, in The Chronicle of Higher Education. If you want just a quick take, read the Chronicle article, though the PNAS paper is also accessible to the nonscientific reader.

The two main conclusions of the PNAS paper are these:

a. Censorship of papers is increasing rapidly, and often takes the form of “soft” censorship, which is censorship based on social opprobrium, rather than outright banning by authorities (“hard censorship”).

b. The censors are usually fellow scientists, and usually act not out of malicious motives, but out of “prosocial ones”; that is, they try to keep stuff out of the literature because they think it’s harmful for society.

The diagram below, from the paper, is really a summary of its points—except for fixes of the problem.

As I said, most censorship is soft; as the paper notes:

Contemporary scientific censorship is typically the soft variety, which can be difficult to distinguish from legitimate scientific rejection. Science advances through robust criticism and rejection of ideas that have been scrutinized and contradicted by evidence. Papers rejected for failing to meet conventional standards have not been censored. However, many criteria that influence scientific decision-making, including novelty, interest, “fit”, and even quality are often ambiguous and subjective, which enables scholars to exaggerate flaws or make unreasonable demands to justify rejection of unpalatable findings.

And it’s also prosocial: meant to prevent the “harm” that we so often see claimed to occur when one’s own ideology is violated:

But censorship can be prosocially motivated. Censorious scholars often worry that research may be appropriated by malevolent actors to support harmful policies and attitudes. Both scholars and laypersons report that some scholarship is too dangerous to pursue, and much contemporary scientific censorship aims to protect vulnerable groups. Perceived harmfulness of information increases censoriousness among the public, harm concerns are a central focus of content moderation on social media, and the more people overestimate harmful reactions to science, the more they support scientific censorship. People are especially censorious when they view others as susceptible to potentially harmful information. In some contemporary Western societies, many people object to information that portrays historically disadvantaged groups unfavorably, and academia is increasingly concerned about historically disadvantaged groups. Harm concerns may even cause perceptions of errors where none exist.

Prosocial motives for censorship may explain four observations: 1) widespread public availability of scholarship coupled with expanding definitions of harm has coincided with growing academic censorship; 2) women, who are more harm-averse and more protective of the vulnerable than men, are more censorious; 3) although progressives are often less censorious than conservatives, egalitarian progressives are more censorious of information perceived to threaten historically marginalized groups; and 4) academics in the social sciences and humanities (disciplines especially relevant to humans and social policy) are more censorious and more censored than those in STEM.

Now the data adduced in the paper largely involve not censorship of papers but censorship of academics, especially the cases compiled by the Foundation for Individual Rights and Expression (FIRE).  These cases are not censorship in the strict sense used by the authors (suppression of scientific papers), but they are still attempts to keep academics’ ideas in all areas from reaching the public. The caption for the three plots given below (the paper has three more) is “Characteristics of higher education scholars targeted for their pedagogy and/or critical inquiry between 2000 and June, 2023 (n = 486) and characteristics of their targeters.”

The figures below are FIRE’s data on not just science, but all forms of scholarship:

First, the rise in censorship; the figures for this year are incomplete, and there was a drop between 2021 and 2022.  But look at the increase since 2000:

Below: which disciplines are targeted (blue means the targeted scholar was attacked by someone from his/her left, and red denotes attacks from his/her right). Overall, and as I’ve noted often, most attacks came from the left. Note too that the humanities experience more targeting incidents than does science.

Finally, the topics targeted for censorship. As you might expect, race and gender are the top two, though institutional policy is a close third.  As race and gender are closely connected with claims of oppression, it’s not surprising that prosocially-motivated attacks on scholarship involve trying to prevent harm to minorities.

The diagram below, taking into account all attempts at censorship, shows that most come from the left of the attacker (blue) rather than the right (gray is either unknown or “neither”).  This again is no surprise; the right is not only less often represented in colleges, but is also less likely to engage in prosocially motivated censorship:

The PNAS article is copiously documented (there are 130 references), and I like it. But there are two problems that I think slightly reduce its effectiveness.  The first is that the article lacks tangible examples of how odious this kind of censorship can be. Examples really hit home, especially when you see how hypocritical and sneaky authors and journals can be, even when acting prosocially. In fact, only one case is described in both the paper and the Chronicle article below, but it’s a doozy, well known among many of us. This was an article that was retracted not because it had scientific problems, but because its conclusions violated what gender ideologues want to see. It also led to a shameful call for general censorship of articles that might be “harmful”.

The Chronicle summary (click to read):

Both the paper and the Chronicle article have nearly word-for-word identical descriptions of the incident (this, by the way, is self-plagiarism), but the Chronicle piece has links, so I’ll excerpt that one:

Moral motives have long influenced scientific decision-making. What’s new is that journals are now explicitly endorsing moral concerns as legitimate reasons to suppress science. Following the publication (and retraction) of an article reporting that the mentees of male mentors, on average, had more scholarly success than did the mentees of female mentors, Nature Communications released an editorial promising increased attention to potential harms. A subsequent Nature editorial stated that authors, reviewers, and editors must consider the potentially harmful implications of research, and a Nature Human Behaviour editorial declared the publication might reject or retract articles that have the potential to undermine the dignity of particular groups of people. In effect, editors are granting themselves vast leeway to censor high-quality research that offends their own moral sensibilities, or those of their most sensitive readers.

The paper, found at the first link (and now retracted), reported that in mentor/mentee relationships in science, the quality of the mentor had a positive effect on the career of the mentee. BUT, the paper also reported this:

We also find that increasing the proportion of female mentors is associated not only with a reduction in post-mentorship impact of female protégés, but also a reduction in the gain of female mentors. While current diversity policies encourage same-gender mentorships to retain women in academia, our findings raise the possibility that opposite-gender mentorship may actually increase the impact of women who pursue a scientific career. These findings add a new perspective to the policy debate on how to best elevate the status of women in science.

That is, same-sex mentorship of women seemed to be less helpful for their careers than being mentored by a male.  Now this is, of course, ideologically unacceptable, and, though as far as I know the data were sound, it raised a ruckus that led the Nature Communications editors to retract the paper.

They retracted the paper simply because of criticisms that the results weren’t ideologically comfortable, and before the criticisms were even considered. Also, have a look at the two editorials, especially the Nature Human Behaviour one, which became the subject of considerable pushback, including this tweet by Steve Pinker (an author of the present PNAS manuscript); see also my post about the fracas, which contains another long tweet by Michael Shermer.

At any rate, I’d like to have seen more examples of censored papers that would drive home the repugnance of censorship and the urgency of fixing it. One that came immediately to mind was James Damore’s firing at Google for suggesting that inequities in representation may be due to preferences rather than bias.  Anna Krylov, one of the authors of the PNAS paper, tells me she’s writing a blog post for the Heterodox STEM site that will give several more examples of censorship, and I’ll highlight them when her piece appears.

Finally, what are the harms of censorship and how can we fix them?  I won’t go into detail about this (the paper does more), except to say that the harms are obvious: censorship keeps the truth hidden, and the truth not only will out, but may be valuable. While it is possible that some solid science should be suppressed if it offends certain groups or leads to “harm”, I can’t think of any scientific result that really should be censored because of its implications. Readers may want to suggest some below.

Second, scientific censorship could erode the public’s trust in the field and scientists’ trust in the scientific literature. As the PNAS paper notes,

Censorship may be particularly likely to erode trust in science in contemporary society because scientists now have other means (besides academic journals) to publicize their findings and claims of censorship. If the public routinely finds quality scholarship on blogs, social media, and online magazines by scientists who claim to have been censored, a redistribution of authority from established scientific outlets to newer, popular ones seems likely. Given the many modes of dissemination and public availability of data, proscribing certain research areas for credentialed scientists may give extremists a monopoly over sensitive research. Scientific censorship may also reduce trust in the scientific literature among scientists, exacerbating hostility and polarization. If particular groups of scholars feel censored by their discipline, they may leave altogether, creating a scientific monoculture that stifles progress.

So what’s to be done? The PNAS article gives a whole laundry list of fixes, nearly all of which are good. They include making reviews of papers, both accepted and rejected, public; instituting third-party audits of scientific journals to measure the quality of their editorial practice, their independence from sociopolitical pressures, and so on; and making serious calls for the retraction of papers publicly available to concerned scholars. This all falls under the rubric of transparency, and names could be kept anonymous.

The only “fix” that sounds hard to implement is testing the proposition that some science creates more harm than good. The authors suggest that there might be some way to measure this, but I’m not convinced:

Scholars should empirically test the costs and benefits of censorship against the costs and benefits of alternatives. They could compare the consequences of retracting an inflammatory paper to 1) publishing commentaries and replies, 2) publishing opinion pieces about the possible applications and implications of the findings, or 3) simply allowing it to remain published and letting science carry on. Which approach inspires more and better research? Which approach is more likely to undermine the reputation of science? Which approach minimizes harm and maximizes benefits? Given ongoing controversies surrounding retraction norms, an adversarial collaboration (including both proponents and opponents of harm-based retractions) might be the most productive and persuasive approach to these research questions.

Frankly, I don’t think this is feasible; such controlled tests can’t be done! When Luana Maroja and I wrote our paper on the ideological erosion of science, we discussed whether any solid scientific result should be censored because of its possible harms. After much discussion, we agreed on “no.”

Readers may dissent, and dissent is welcome in the comments.  But the point of this post is that censorship is pervasive in science; that it is generally harmful, since it prevents the dissemination of truth in the service of a favored ideology; and that scientists should stop it.  That, of course, would mean keeping the tentacles of the ideological octopus off of science, but that doesn’t seem to be in the offing. I hope that the new PNAS paper will help keep those suckers out of our field.

27 thoughts on “Censorship in science: a new paper and analysis”

    1. The Chronicle is accessible for free — you just need to register and log in to an account.

  1. An oft-cited example of “science doing harm” is the misuse of genetics. The historical misuse was the eugenics movement, and potential future misuses are described in science fiction like Brave New World or Gattaca.

    I’d say that is similar to the way that nuclear fission can be misused, or gunpowder chemistry. The science itself is neutral, and the censors really fear the humans who might turn it to (perhaps well-intentioned) evil purpose. I think that’s where we agree.

    1. What aspect of nuclear fission was ever “misused”? The whole point of the research from the get-go was to build a bomb before the Germans did, and then, when it became clear the Germans were out of the race, to build it to avoid having to invade Japan. Other than the basic science interest in nuclear physics, there was no other “use” of fission other than as a weapon. Someone eventually figured out that you could use fission heat to boil water into steam more cheaply than using coal —ha!— but this came after The Bomb, not before it. And do nuclear power reactors making steam in warships count as “misuses?”

      To the point of censorship, all this research was a censored military secret. The scientists and engineers recruited to the Manhattan Project were prohibited from publishing; some details of the plutonium fission primaries believed to be used in all nuclear weapons remain classified to this day. This didn’t stop the Soviets from discovering the secrets in real time. They had two spies working without each other’s knowledge at Los Alamos and probably one at University of Chicago.

      Additionally, so far as can be known, no country that has built a nuclear weapon has failed to succeed in detonating one on the first try. This even though the United States collaborated with Britain in the early years only and with no one else after that, and despite there being no open-source nuclear fission research publications providing scientific details of advancements.

      The cat was out of the bag once Niels Bohr made his famous blackboard calculation that the nuclear mass “lost” as fission energy from a single uranium nucleus would be enough to make an adjacent grain of sand jump. From there on it was just a question of whether the extra neutrons released would create a self-sustaining chain reaction and release enough aggregate energy to make whole cities jump. When all the top physicists in the United States, including many émigrés from persecuted Europe, suddenly stopped publishing to the detriment of their academic careers, the scientific community must have known that something was up. The censorship didn’t stop the research. If anything it accelerated it, given the focus on getting the job done without having to write grant applications and submit manuscripts for peer review.

      I request indulgence for this over-comment. The idea that censorship of fission research would have prevented the arms race can’t go unrebutted.

      (Source for this is Richard Rhodes’s two books on the making of the atomic and hydrogen bombs.)

  2. Check the colours assigned to direction of attack in the histograms you reproduce – they’re the opposite of the assignments in the pie graph, but all are part of the same figure in the original publication.

  3. While I lean heavily on the side of free speech, I can see instances where I think significant safeguards should be put in place regarding publication. For an extreme example, publishing mutations necessary for Ebola to significantly increase non-symptomatic transmission from human to human or allow insect to human transmission.

    1. You raise a good point, Trew. But how would the government censors — the only censoring bodies who could be given statutory power to censor — know about the findings in time to prevent their publication? It’s one thing for the government, through the classification process, to prohibit the publication of information it itself generated, or to punish it after the fact (as with plutonium implosion.) Quite another for the government to be informed of all scientific research findings and apply prior restraint against those it wanted to suppress for pro-social reasons.

      Which brings us to the point that gain-of-function research would not likely be published in the open literature anyway but would be kept secret by the government that funded it. Publishing the relevant sequences (absent censorship) would actually help the other side develop counter-measures such as mosquito repellants, vaccines, or targeted antivirals, or to build a virus of its own as a deterrent. (And of course elucidation and publication of the virulence properties of spike protein in SARS-CoV-2 actually did lead directly to the vaccines, whether or not the virus itself grew out of gain-of-function research. I know you are distinguishing between gain of function and wild-type pathogenicity, but the distinction might not always be easy to make in practice.)

      A civilian lab would not be very likely to work out gain-of-function sequences and publish them just to spite the world. Where would the lab get the money, since research is funded by granting agencies that demand financial accountability, not by secret dark money from nefarious NGOs?

      I agree that certain research, such as gain-of-function, should be prohibited or tightly regulated in the public interest (as human and animal research is.) That way, people doing unethical or illegal research can be punished for merely doing it, whether they try to publish the results or not. That’s different from censorship.
      Society gets to decide what research it wants to fund even if it can’t stop publication of a discovery.

      1. We addressed this point in the paper. P. 5:
        “It may be reasonable to consider potential harms before disseminating science that poses a clear and present danger (6), when harms are extreme, tangible, and scientifically demonstrable, such as scholarship that increases risks of nuclear war, pandemics, or other existential catastrophes.”

        1. Leslie – Unfortunately, I do not have as much faith in finances and ethics preventing the adoption of powerful technologies and data for nefarious purposes. For example, CRISPR gene-edited embryos were created and articles published despite significant concerns in the scientific community regarding ethics. That said, I do not know what can be done to stop such research and publishing.

          Lee – The scenario about Ebola given above was directed at the language from the post.

          “When Luana Maroja and I wrote our paper on the ideological erosion of science, we discussed whether any solid scientific result should be censored because of its possible harms. After much discussion, we agreed on “no.””

  4. “Overall, and as I’ve noted often, most attacks came from the left.”

    That is not what the above plots show. Those are stacked barplots. Only in history and education does a higher proportion of attacks come from the left (red).

      1. Sorry for my mistake. I am not American. To me, and to the rest of the world, red signifies left, so I mixed up the sides.

        1. Attacks from the left are actually way worse than the graphs make it seem. The numbers are correct, but: 1. Nearly all attacks from the right are from outside of academia; academics can easily ignore most of these and wear them as badges of courage; 2. Most attacks from the left are from within academia; these are much harder to ignore because academia is a social reputational system where everything from jobs to grants to publications to promotions hinges, in substantial part, on what other people think of you. Nearly all retractions for reasons other than data fraud of which I am aware are from the left. If anyone knows of ANY from the right, I’d love to hear about it.

          1. Indeed. In their minds, people seem to want to harken decades back to when the right concerned itself with things like pornography and tried to suppress it. But you have to go very far back, I think. In more recent times (but still long ago), was it not Tipper Gore who was concerned with explicit lyrics in music? In the present day, I’d be very interested to hear examples of censorship from the right, since I find no advocates for it in the circles I pay attention to. People sometimes conflate not wanting one’s own children, in the public schools, to be taught something with censorship, but that’s quite a different thing.

      2. Isn’t it possible that you mixed up the description of colors in the text? You wrote that red means left-wing attacks, but the legend of the pie chart says the opposite.

  5. After a very quick scan of the PNAS article (so I might have missed something), I’m quite puzzled by the interpretation of the blue/red portions of the graph, having been initially confused by the reversed colour conventions in my country (red = left, blue = right). What allowance, if any, in interpreting the origin of targeting has been/can be/should be made for the substantial differences in distributions of left-right political orientation in various academic disciplines? Or is this an irrelevant concern?

  6. There’s a lot of reading here, but I read the PNAS article carefully in its entirety, as well as Jerry’s commentary.

    The PNAS paper is excellent in that it describes many of the issues, has a reasonable taxonomy of “censorship,” and provides some avenues for further work. Its greatest value, I think, is in bringing the problem to the forefront in one of the country’s most prestigious journals, and in framing the problem of “censorship” as one that can be studied empirically and then addressed.

    I’m not convinced that the remedies and recommendations are good enough, nor are the examples good enough to raise sufficient awareness of the many injustices that (in my view) have resulted *in* censorship (researchers whose work has been suppressed or worse) and *from* censorship (knowledge that has been suppressed, preventing it from entering the corpus of knowledge available for human advancement).

    Overall, the PNAS paper is a start. I had hoped for an unequivocal statement that ideological considerations should be kept out of science. I wish it were more strident in stating the potential risks of censorship and its prevalence. But then again, part of their point is that the empirical evidence is not yet strong enough to be sure.

    1. One more thing (sorry). In thinking about the paper further, I realized that it is far too broad, covering “censorship” of all sorts: stemming from the blind review process, from people favoring “prestigious” journals, from those giving more credence to research done at more prominent schools, etc. That’s all true, and it is probably worth paying attention to. But the PNAS paper does not address head-on the censorship problem that is staring us in the face today and that is actually the source of all the fuss—the censorship that takes place in the service of “equity.” That’s what this paper should have been about. In casting their net so broadly, they missed the mark.

      I’ll shut up now.

  7. The paper’s authors have done a good and important job.

    However, I have some sympathy with those who were against the publication of some sections of the Kinsey Report. The notorious Tables 31 and 34 clearly document sexual experimentation conducted on babies (the youngest was just two months old!) and young children by paedophiles. Should the data be suppressed? I don’t know – but, as with some of the Nazi experiments, it should never have been collected in the first place.

    1. There is a norm against publishing unethical research today, on the grounds that unethical (or illegal) conduct ought not to be rewarded with publications. I wouldn’t call that practice censorship, as it is specific punishment or deterrence against present-day conduct according to present-day norms, not for speech/writing itself. I think you are asking if journals or universities should adopt a policy of prohibiting present-day authors of manuscripts from citing Kinsey’s work. I would say No, because the work has already been done and Kinsey cannot be deterred from recidivism, being dead. The practical objection is that much of this tainted research would never be known about, as a warning to us moderns, if mention of it were suppressed.

      The issue of publishing the results of “experiments” done on prisoners in Nazi Germany has been discussed and is controversial.
      https://www.nejm.org/doi/full/10.1056/nejm199005173222006
      Many letter-writers were furious with the author and the Journal for publishing the article.

  8. Hi Jerry! Thanks for the excellent coverage of our paper. I actually agree with your criticism, that we did not get concrete/specific enough to hit people between the eyes.

    I did do that here:
    https://unsafescience.substack.com/p/the-new-book-burners
    It’s a post of a book chapter. If you want to get right to examples, and skip the psychology on demonization and tribalism, do a ctrl-F search for:
    “Book Burning Peer Reviewed Articles”

    This is also excellent and in a similar spirit:
    https://link.springer.com/article/10.1007/s12144-023-04739-2

  9. Consider this sentence from the paper’s Abstract:

    Our analysis suggests that scientific censorship is often driven by scientists, who are primarily motivated by self-protection, benevolence toward peer scholars, and prosocial concerns for the well-being of human social groups.

    It is all very well, and often unquestioned, to have prosocial concerns for the well-being of human social groups… but this is ‘racism’ or its shadow form ‘anti-racism’. Or sexism/anti-sexism etc., pick your own human social groups to taste. If you draw a conclusion based on a group characteristic you are ignoring those individuals within the ‘group’ who are not stereotypical. The politics of identity undermines good science, good medicine, good legal processes.

    At one time the unbearably progressive believed in the ‘blank slate’. Now it appears the assertion has broadened to a ‘blank slate’ for each ‘group’.

Comments are closed.