There’s never an end to science-dissing these days, and it comes largely from humanities scholars who are distressed by comparing the palpable progress in science with the stagnation and marginalization of their discipline—largely through its adoption of the methods of Postmodernism. (Curiously, the decline in humanities, which I believe coincides with university programs that promote a given ideology rather than encourage independent thought, is in opposition to the PoMo doctrine that there are different “truths” that emanate from different viewpoints.)
At any rate, much of the criticism of science comes in the form of accusations of “scientism”, defined, according to the article below in the Washington Post, as “the untenable extension of scientific authority into realms of knowledge that lie outside what science can justifiably determine.”
We’ve heard these assertions about scientism for years, and yes, there are times when scientists have made unsupported claims with social import. The eugenics movement and racism of early twentieth-century biologists is one example, and some of the excesses of evolutionary psychology are another. One form of scientism I’ve criticized is the claim (Sam Harris is one exponent) that science and objective reason can give us moral values; that is, that we can determine what is right and wrong simply by using a calculus based on “well being” or a similar currency. I won’t get into why I think that’s wrong, but there are few scientists or philosophers who espouse this moral form of scientism.
But these days, claims of “scientism” are more often used the way dogs urinate on fire hydrants: to mark territories in the humanities. And that, it seems, is what Aaron Hanlon, an assistant professor of English at Colby College, is doing. In fact, he could have used science to buttress his main claim—that numbers make fake papers more readily accepted in journals—but didn’t. When you do, as I did, his main claim collapses.
The photo of Alexandria Ocasio-Cortez is there because she said (correctly) that algorithms themselves aren’t pure science, but reflect the intentions and perhaps the prejudices of people who construct them. From that Hanlon goes on to indict science for having a deceptive authority because it relies on numbers. But his example doesn’t have much to do with what Ocasio-Cortez said.
First, though, I note that Hanlon makes one correct point: that moral judgments, while they may rely on science (he uses claims that AI might replace human judges), aren’t scientific judgments that can be adjudicated empirically. I agree. But so do most people.
With few exceptions, most scientists and philosophers think that morality is at bottom based on human preferences. And though we may agree on many of those preferences (e.g., we should do what maximizes “well being”), you can’t show using data that one set of preferences is objectively better than another. (You can show, though, that the empirical consequences of one set of preferences differ from those of another set.) The examples I use involve abortion and animal rights. If people are religious and see babies as having souls, how can you convince them that permitting elective abortion is better than banning it? Likewise, how do you weigh human well being against animal well being? I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls.”
But that’s not Hanlon’s main point. His point rests on the “grievance studies” hoax perpetrated by Peter Boghossian, Helen Pluckrose, and James Lindsay (BP&L), in which they submitted phony papers, some having fabricated data, to different humanities journals. Some got accepted. From this Hanlon draws two false conclusions: that having numbers (faked data) increased the chance of a bad paper being accepted by a humanities journal, and that “we’re far too deferential to the mere idea of science.” Hanlon says this:
In actual fact, “social justice” jargon wasn’t enough — as the hoaxers initially thought — to deceive, but sprinkling in fake data did the trick better than jargon or political pieties ever could. Like Ocasio-Cortez’s critics, who trust too easily in the appearance of scientific objectivity, the hoaxed journals were more likely to buy outrageous claims if they were backed by something that looked like scientific data. It’s not that the hoax was an utter failure, nor that we shouldn’t worry about the vulnerabilities it exposed. It’s that, ironically, scientism and misplaced scientific authority actually contribute to those vulnerabilities and undermine science in the process.
But if you put the numbers of accepted vs. rejected papers, divided by whether or not they included faked data, into a Fisher’s exact test (papers with data: 3 accepted, 2 rejected; papers without data: 4 accepted, 11 rejected), you find no significant difference (p = 0.2898, far from significance). So using numbers in the “hoax papers” didn’t make a significant difference. Ergo, we have no evidence that using fake data improved a paper’s chance of acceptance. That’s what science can tell you.
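For readers who want to check the arithmetic, here is a minimal sketch of that test in Python, computing the two-sided Fisher’s exact p-value directly from the hypergeometric distribution. The function name is mine, not from any library; `scipy.stats.fisher_exact` on the same 2×2 table gives the same result.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]].

    Rows: papers with / without faked data; columns: accepted / rejected.
    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table.
    """
    n = a + b + c + d          # total papers submitted
    row1 = a + b               # papers with faked data
    col1 = a + c               # papers accepted
    denom = comb(n, row1)

    def prob(k):               # P(k data-papers accepted | fixed margins)
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))   # smallest feasible count
    hi = min(row1, col1)             # largest feasible count
    # small tolerance guards against floating-point ties
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# BP&L's tallies: with data, 3 accepted / 2 rejected; without, 4 / 11
print(round(fisher_exact_two_sided(3, 2, 4, 11), 4))  # 0.2898
```

With only 20 papers in total, an exact test is the right choice here; a chi-squared approximation would be unreliable at these sample sizes.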
But it hardly matters, as the point of the hoax wasn’t to show that using data helped mislead reviewers. Even if there were a difference, it wouldn’t affect BP&L’s point: that palpably ridiculous papers, with or without numbers, were accepted by humanities journals because they conformed to the journals’ ideology. In fact, consider another famous hoax—Alan Sokal’s Social Text hoax of 1996—which involved a paper that used verbal arguments rather than data. So it’s not numbers that matter. Nevertheless, Hanlon wants to claim that scientism is still at play:
So what does the latest hoax tell us about the extension of scientism into academic fields that aren’t reducible to purely scientific explanations?
Part of the answer lies in a prior hoax, perpetrated by New York University physicist Alan Sokal in 1996. Sokal got an article laden with nonsensical jargon and specious arguments accepted at Social Text, a leading (though not peer-reviewed) cultural theory journal. The infamous “Sokal Hoax” was instructive, too, because, as Social Text editors Bruce Robbins and Andrew Ross explained after Sokal went public about his actions, they didn’t accept his article out of fealty to its politics or its jargon, but rather out of trust in — perhaps even reverence for — an eminent scientist’s engagement with cultural theory.
Remember that the more recent hoaxers didn’t just content themselves with verbal nonsense (as Sokal did); they also faked data, and not in a way that reviewers should necessarily dismiss without a good reason to do so. Columbia University sociologist Musa al-Gharbi found that the hoaxers’ “purported empirical studies (with faked data) were more than twice as likely to be accepted for publication as their nonempirical papers,” which lends support to this possibility. It’s entirely possible that reviewers took these submissions seriously out of respect for scientific conclusions, not out of anti-science bias. This would also align with broader research showing that political ideology is not actually what causes people to distrust science.
So if you use numbers, you’re damned for scientism, and if you don’t use numbers, you’re damned for scientism because you’re a scientist. You can’t win!
But were there any dangers in promulgating false data the way that BP&L did? No, because their papers never entered the literature. The trio of hoaxers promptly informed the journals of the hoax after the papers were accepted, and, as far as I know, none of those papers stand as published contributions.
There are other wonky statements in Hanlon’s article as well, but I’ll give just two:
But the question of whether AI judges should replace human judges is a complex civic and moral question, one that is by definition informed but not conclusively answerable by scientific facts. It’s here that observations like Ocasio-Cortez’s become so important: If racist assumptions are baked into our supposedly objective tools, there’s nothing anti-scientific about pointing that out. But scientism threatens to blind us to such realizations — and critics such as Lindsay, Pluckrose and Boghossian suggest that keeping our eyes open is some sort of intellectual failing.
First of all, scientism doesn’t blind us to realizing that bias might occur. Scientists in love with their own theories may tend to hang onto them in the face of countervailing data, but eventually the truth will out. We no longer think that races form a hierarchy of intelligence, with whites on top; we no longer think that the Piltdown man was a forerunner of modern humans, and so on. It is scientists, by and large, who dispel these biases. More important, BP&L did not suggest that keeping our eyes open was “some sort of intellectual failing.” It was in fact the opposite: they suggest that keeping our eyes open makes us see how ridiculous are papers written to conform to an ideology, papers that make crazy assertions that would startle anybody not already in the asylum.
Finally, Hanlon tries to exculpate the hoaxed journals because they are “interdisciplinary”:
Indeed, one of the liabilities of interdisciplinary gender studies journals like those that fell for the hoax is that, as I’ve argued, they’re actually not humanities journals, nor are they strictly social science journals. As such, they conceivably receive submissions that make any combination of interpretive claims, claims of cultural observation, and empirical or data-based claims. For all of their potential benefits, these interdisciplinary efforts — which have analogues in the humanities as well — also run into methodological and epistemological challenges precisely because of their reverence for science and scientific methods, not because of anti-science attitudes.
No, these journals fell for the hoaxes not because of their reverence for “science and scientific methods” (we have no data supporting that claim), but because of reverence for the submitted papers’ ideology: Authoritarian Leftist “grievance” work, in line with what these journals like.
This attitude—that we should go easier on work that conforms to what we believe, or what we’d like to think—is the real danger here. And there’s a name for it: it’s called confirmation bias. And it’s more of a danger in the humanities than in the sciences, simply because in science you can check somebody else’s work with empirical methods.