More science-dissing: WaPo’s misguided criticism of “scientism”

January 29, 2019 • 10:45 am

There’s never an end to science-dissing these days, and it comes largely from humanities scholars who are distressed by comparing the palpable progress in science with the stagnation and marginalization of their own discipline—due in no small part to its adoption of the methods of Postmodernism. (Curiously, the decline in the humanities, which I believe coincides with university programs that promote a given ideology rather than encourage independent thought, is in opposition to the PoMo doctrine that there are different “truths” that emanate from different viewpoints.)

At any rate, much of the criticism of science comes in the form of accusations of “scientism”, defined, according to the article below in the Washington Post, as “the untenable extension of scientific authority into realms of knowledge that lie outside what science can justifiably determine.”

We’ve heard these assertions about scientism for years, and yes, there are times when scientists have made unsupported claims with social import. The eugenics movement and racism of early twentieth-century biologists are one example, and some of the excesses of evolutionary psychology comprise another. One form of scientism I’ve criticized has been the claim (Sam Harris is one exponent) that science and objective reason can give us moral values; that is, that we can determine what is right and wrong simply by using a calculus based on “well-being” or a similar currency. I won’t get into why I think that’s wrong, but there are few scientists or philosophers who espouse this moral form of scientism.

But these days, claims of “scientism” are more often used the way dogs urinate on fire hydrants: to mark territories in the humanities. And that, it seems, is what Aaron Hanlon, an assistant professor of English at Colby College, is doing. In fact, he could have used science to buttress his main claim—that numbers make fake papers more readily accepted in journals—but didn’t. When you do, as I did, his main claim collapses.


The photo of Alexandria Ocasio-Cortez is there because she said (correctly) that algorithms themselves aren’t pure science, but reflect the intentions and perhaps the prejudices of people who construct them. From that Hanlon goes on to indict science for having a deceptive authority because it relies on numbers. But his example doesn’t have much to do with what Ocasio-Cortez said.

First, though, I note that Hanlon makes one correct point: moral judgments, while they may be informed by science (his example is the question of whether AI should replace human judges), aren’t scientific judgments that can be adjudicated empirically. I agree. But so do most people.

With few exceptions, most scientists and philosophers think that morality is at bottom based on human preferences. And though we may agree on many of those preferences (e.g., we should do what maximizes “well-being”), you can’t show using data that one set of preferences is objectively better than another. (You can show, though, that the empirical consequences of one set of preferences differ from those of another set.) The examples I use involve abortion and animal rights. If people are religious and see babies as having souls, how can you convince them that permitting elective abortion is better than banning abortion? Likewise, how do you weigh human well-being against animal well-being? I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls.”

But that’s not Hanlon’s main point. His point rests on the “grievance studies” hoax perpetrated by Peter Boghossian, Helen Pluckrose, and James Lindsay (BP&L), in which they submitted phony papers, some having fabricated data, to different humanities journals. Some got accepted. From this Hanlon draws two false conclusions: that having numbers (faked data) increases the chance of a bad paper being accepted by a humanities journal, and that “we’re far too deferential to the mere idea of science.” Hanlon says this:

In actual fact, “social justice” jargon wasn’t enough — as the hoaxers initially thought — to deceive, but sprinkling in fake data did the trick better than jargon or political pieties ever could. Like Ocasio-Cortez’s critics, who trust too easily in the appearance of scientific objectivity, the hoaxed journals were more likely to buy outrageous claims if they were backed by something that looked like scientific data. It’s not that the hoax was an utter failure, nor that we shouldn’t worry about the vulnerabilities it exposed. It’s that, ironically, scientism and misplaced scientific authority actually contribute to those vulnerabilities and undermine science in the process.

But put the acceptance numbers into a Fisher’s exact test, classifying papers by whether or not they included faked data (papers with data: 3 accepted, 2 rejected; papers without data: 4 accepted, 11 rejected), and there’s no significant difference (p = 0.2898, far from significance). So using numbers in the “hoax papers” didn’t make a significant difference, and we have no evidence that using fake data improved a paper’s chance of acceptance. That’s what science can tell you.
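For readers who want to check that figure themselves, the test is easy to reproduce. Here’s a minimal sketch in pure Python, using only the standard library (scipy’s `fisher_exact` would give the same answer); the counts are the ones given above:

```python
# Fisher's exact test on the hoax-paper outcomes.
# Contingency table from the acceptance figures:
#   papers with faked data:    3 accepted, 2 rejected
#   papers without faked data: 4 accepted, 11 rejected
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Probability of a table with x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # The tiny tolerance guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(3, 2, 4, 11)
print(round(p, 4))  # 0.2898 — not significant
```

Running this reproduces p = 0.2898: with samples this small, the difference in acceptance rates is entirely consistent with chance.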

But it hardly matters, as the point of the hoax wasn’t to show that using data helped mislead reviewers. Even if there were a difference, it wouldn’t affect BP&L’s point: that palpably ridiculous papers, with or without numbers, were accepted by humanities journals because they conformed to the journals’ ideology. In fact, if you think about another famous hoax—Alan Sokal’s 1996 Social Text hoax—it involved a paper that used verbal arguments rather than data. So it’s not numbers that matter. Nevertheless, Hanlon wants to claim that scientism is still at play:

So what does the latest hoax tell us about the extension of scientism into academic fields that aren’t reducible to purely scientific explanations?

Part of the answer lies in a prior hoax, perpetrated by New York University physicist Alan Sokal in 1996. Sokal got an article laden with nonsensical jargon and specious arguments accepted at Social Text, a leading (though not peer-reviewed) cultural theory journal. The infamous “Sokal Hoax” was instructive, too, because, as Social Text editors Bruce Robbins and Andrew Ross explained after Sokal went public about his actions, they didn’t accept his article out of fealty to its politics or its jargon, but rather out of trust in — perhaps even reverence for — an eminent scientist’s engagement with cultural theory.

Remember that the more recent hoaxers didn’t just content themselves with verbal nonsense (as Sokal did); they also faked data, and not in a way that reviewers should necessarily dismiss without a good reason to do so. Columbia University sociologist Musa al-Gharbi found that the hoaxers’ “purported empirical studies (with faked data) were more than twice as likely to be accepted for publication as their nonempirical papers,” which lends support to this possibility. It’s entirely possible that reviewers took these submissions seriously out of respect for scientific conclusions, not out of anti-science bias. This would also align with broader research showing that political ideology is not actually what causes people to distrust science.
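Al-Gharbi’s “more than twice as likely” figure, by the way, is just the ratio of the two raw acceptance rates, and a couple of lines of arithmetic recover it (counts taken from the acceptance figures I used for the Fisher’s test above). A ratio that sounds dramatic can still come from samples far too small to establish a real difference:

```python
# Raw acceptance rates behind the "more than twice as likely" claim.
with_data_accepted, with_data_total = 3, 5         # 3 of 5 data papers accepted
without_data_accepted, without_data_total = 4, 15  # 4 of 15 non-data papers accepted

rate_with = with_data_accepted / with_data_total          # 0.60
rate_without = without_data_accepted / without_data_total # ~0.267

ratio = rate_with / rate_without
print(f"{rate_with:.0%} vs {rate_without:.1%}: ratio = {ratio:.2f}")
# 60% vs 26.7%: ratio = 2.25
```

So yes, the rate ratio exceeds 2—but with only 5 papers in one group and 15 in the other, that tells us nothing by itself.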

So if you use numbers, you’re damned for scientism, and if you don’t use numbers, you’re damned for scientism because you’re a scientist. You can’t win!

But were there any dangers in promulgating false data the way that BP&L did? No, because their papers never entered the literature. The trio of hoaxers promptly informed the journals of the hoax after the papers were accepted, and, as far as I know, none of those papers stand as published contributions.

There are other wonky statements in Hanlon’s piece as well, but I’ll give just two:

But the question of whether AI judges should replace human judges is a complex civic and moral question, one that is by definition informed but not conclusively answerable by scientific facts. It’s here that observations like Ocasio-Cortez’s become so important: If racist assumptions are baked into our supposedly objective tools, there’s nothing anti-scientific about pointing that out. But scientism threatens to blind us to such realizations — and critics such as Lindsay, Pluckrose and Boghossian suggest that keeping our eyes open is some sort of intellectual failing.

First of all, scientism doesn’t blind us to realizing that bias might occur. Scientists in love with their own theories may tend to hang onto them in the face of countervailing data, but eventually the truth will out. We no longer think that races form a hierarchy of intelligence, with whites on top; we no longer think that the Piltdown man was a forerunner of modern humans, and so on. It is scientists, by and large, who dispel these biases. More important, BP&L did not suggest that keeping our eyes open was “some sort of intellectual failing.” It was in fact the opposite: they suggest that keeping our eyes open makes us see how ridiculous are papers written to conform to an ideology, papers that make crazy assertions that would startle anybody not already in the asylum.

Finally, Hanlon tries to exculpate the hoaxed journals because they are “interdisciplinary”:

Indeed, one of the liabilities of interdisciplinary gender studies journals like those that fell for the hoax is that, as I’ve argued, they’re actually not humanities journals, nor are they strictly social science journals. As such, they conceivably receive submissions that make any combination of interpretive claims, claims of cultural observation, and empirical or data-based claims. For all of their potential benefits, these interdisciplinary efforts — which have analogues in the humanities as well — also run into methodological and epistemological challenges precisely because of their reverence for science and scientific methods, not because of anti-science attitudes.

No, these journals fell for the hoaxes not because of their reverence for “science and scientific methods” (we have no data supporting that claim), but because of reverence for the papers’ ideology: Authoritarian Leftist “grievance” work, in line with what these journals like to publish.

This attitude—that we should go easier on work that conforms to what we believe, or what we’d like to think—is the real danger here. And there’s a name for it: it’s called confirmation bias. And it’s more of a danger in the humanities than in the sciences, simply because in science you can check somebody else’s work with empirical methods.

Medusa Magazine says it’s not a hoax

June 27, 2017 • 1:30 pm

. . . but of course that means nothing, for if it were a hoax—and the evidence is strong on this one—they would resolutely deny it. If they admitted it was a hoax, that hoax would be over for good, and there would be no point in continuing to add to the site.

A person at the New Zealand site Whale Oil wrote to Medusa, asking whether it was genuine, as if the answer would settle the issue. Part of Whale Oil‘s post:

I came across an online Feminist magazine called Medusa Magazine with the byline Feminist Revolution now. As I scanned the headlines I wondered if it was a satirical site as once before I fell for a poorly written piece of satire thinking that it was a genuine piece. I didn’t want to make the same mistake with this site so I e-mailed them to check.

They were kind enough to reply.

_____________________

Hello,
Medusa Magazine is a blog that espouses feminist ideology. We make no apologies for this, and we stand by everything we publish.
That being said, the views expressed in each article belong to the author(s) alone. We would however never have published any of the articles if we didn’t think they had any value to add to intellectual discourse. Even the articles that you describe as “over the top” have started a discussion and debate online about important issues that need to be discussed.
Say Hi to your readers from us.

Cheers.
______________________

The thing is*, is that they continue to publish articles, and they’re close enough to the real thing to fool some people. But I’ve decided that they’re too over-the-top to be real. And, as a reader pointed out (see link in first line), the domain is registered to someone who would be expected to satirize feminism.

*deliberate infelicity

A mini-Sokal hoax: abstract of physics paper written by computer, and in complete gibberish, accepted for conference on physics

October 22, 2016 • 11:15 am

In the famous Alan Sokal hoax, now twenty years old, a physicist got a bogus, post-modern paper accepted by the pomo journal Social Text. Now the tables are turned—sort of. This time, as the Guardian reported yesterday, a non-physicist hoaxed a physics conference by submitting an abstract, immediately accepted, that was written almost completely by computer. It was complete gibberish, proving that nobody looked at the paper, and that the conference was probably just a garbage meeting designed to make money.

I didn’t know what iOS autocomplete was, but apparently it’s an Apple feature that suggests words to finish the text you’re typing (correct me if I’m wrong). And a professor used it to write an entire paper. From the Guardian:

Christoph Bartneck, an associate professor at the Human Interface Technology laboratory at the University of Canterbury in New Zealand, received an email inviting him to submit a paper to the International Conference on Atomic and Nuclear Physics in the US in November.

“Since I have practically no knowledge of nuclear physics I resorted to iOS autocomplete function to help me writing the paper,” he wrote in a blog post on Thursday. “I started a sentence with ‘atomic’ or ‘nuclear’ and then randomly hit the autocomplete suggestions.

“The atoms of a better universe will have the right for the same as you are the way we shall have to be a great place for a great time to enjoy the day you are a wonderful person to your great time to take the fun and take a great time and enjoy the great day you will be a wonderful time for your parents and kids,” is a sample sentence from the abstract.

It concludes: “Power is not a great place for a good time.”

Bartneck made a video, posted on his website, showing how he did it:

But wait! There’s more!

Bartneck illustrated the paper – titled, again through autocorrect, “Atomic Energy will have been made available to a single source” – with the first graphic on the Wikipedia entry for nuclear physics.

He submitted it under a fake identity: associate professor Iris Pear of the US, whose experience in atomic and nuclear physics was outlined in a biography using contradictory gender pronouns.

The nonsensical paper was accepted only three hours later, in an email asking Bartneck to confirm his slot for the “oral presentation” at the international conference.

“I know that iOS is a pretty good software, but reaching tenure has never been this close,” Bartneck commented in the blog post.

The conference itself, to be held in Georgia in mid-November (see link above), looks pretty dicey. For one thing, read its call for abstracts:

And, as the Guardian notes:

The International Conference on Atomic and Nuclear Physics. . . is organised by ConferenceSeries: “an amalgamation of Open Access Publications and worldwide international science conferences and events”, established in 2007.

It also has a $1099 speaker registration fee.

The Guardian describes what Bartneck wrote as a paper, but it’s actually an abstract of a paper, complete with a bogus diagram and a phony photo of the author. You can see it here, and I’ve put a screenshot below:

[Screenshot: the accepted conference abstract]

I get invitations all the time from bogus organizations that invite me to submit papers or give talks on forestry, molecular biology, immunology, and all sorts of things for which I have no credentials at all. There are a lot of organizations out there preying on scientists who, I guess, think that going to such meetings gives them professional credibility. And it must work, or why would these meetings and journals continue to exist?

h/t: Barry