More science-dissing: WaPo’s misguided criticism of “scientism”

January 29, 2019 • 10:45 am

There’s never an end to science-dissing these days, and it comes largely from humanities scholars who are distressed by comparing the palpable progress in science with the stagnation and marginalization of their discipline—largely through its adoption of the methods of Postmodernism. (Curiously, the decline in humanities, which I believe coincides with university programs that promote a given ideology rather than encourage independent thought, is in opposition to the PoMo doctrine that there are different “truths” that emanate from different viewpoints.)

At any rate, much of the criticism of science comes in the form of accusations of “scientism”, defined, according to the article below in the Washington Post, as “the untenable extension of scientific authority into realms of knowledge that lie outside what science can justifiably determine.”

We’ve heard these assertions about scientism for years, and yes, there are times when scientists have made unsupported claims with social import. The eugenics movement and racism of early twentieth-century biologists is one, and some of the excesses of evolutionary psychology comprise another. One form of scientism I’ve criticized has been the claim (Sam Harris is one exponent) that science and objective reason can give us moral values; that is, we can determine what is right and wrong by simply using a calculus based on “well being” or a similar currency. I won’t get into why I think that’s wrong, but there are few scientists or philosophers who espouse this moral form of scientism.

But these days, claims of “scientism” are more often used the way dogs urinate on fire hydrants: to mark territories in the humanities. And that, it seems, is what Aaron Hanlon, an assistant professor of English at Colby College, is doing. In fact, he could have used science to buttress his main claim—that numbers make fake papers more readily accepted in journals—but didn’t. When you do, as I did, his main claim collapses.


The photo of Alexandria Ocasio-Cortez is there because she said (correctly) that algorithms themselves aren’t pure science, but reflect the intentions and perhaps the prejudices of people who construct them. From that Hanlon goes on to indict science for having a deceptive authority because it relies on numbers. But his example doesn’t have much to do with what Ocasio-Cortez said.

First, though, I note that Hanlon makes one correct point: that moral judgments, while they may rely on science (he uses claims that AI might replace human judges), aren’t scientific judgments that can be adjudicated empirically. I agree. But so do most people.

With few exceptions, most scientists and philosophers think that morality is at bottom based on human preferences. And though we may agree on many of those preferences (e.g., we should do what maximizes “well being”), you can’t show using data that one set of preferences is objectively better than another. (You can show, though, that the empirical consequences of one set of preferences differ from those of another set.) The examples I use involve abortion and animal rights. If you’re religious and see babies as having souls, how can you convince those folks that elective abortion is better than banning abortion? Likewise, how do you weigh human well being versus animal well being? I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls.”

But that’s not Hanlon’s main point. His point rests on the “grievance studies” hoax perpetrated by Peter Boghossian, Helen Pluckrose, and James Lindsay (BP&L), in which they submitted phony papers, some having fabricated data, to different humanities journals. Some got accepted. From this Hanlon draws two false conclusions: that having numbers (faked data) increases the chance of a bad paper being accepted by a humanities journal, and that “we’re far too deferential to the mere idea of science.” Hanlon says this:

In actual fact, “social justice” jargon wasn’t enough — as the hoaxers initially thought — to deceive, but sprinkling in fake data did the trick better than jargon or political pieties ever could. Like Ocasio-Cortez’s critics, who trust too easily in the appearance of scientific objectivity, the hoaxed journals were more likely to buy outrageous claims if they were backed by something that looked like scientific data. It’s not that the hoax was an utter failure, nor that we shouldn’t worry about the vulnerabilities it exposed. It’s that, ironically, scientism and misplaced scientific authority actually contribute to those vulnerabilities and undermine science in the process.

But put the numbers of accepted vs. rejected papers, divided by whether or not they included faked data, into a Fisher’s exact test (papers with data: 3 accepted, 2 rejected; papers without data: 4 accepted, 11 rejected), and there’s no significant difference (p = 0.2898, far from significance). So using numbers in the “hoax papers” didn’t make a significant difference. Ergo, we have no evidence that using fake data improved a paper’s acceptance. That’s what science can tell you.
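For anyone who wants to check that figure, the test is easy to reproduce. Here is a minimal sketch using only the Python standard library (the function name is mine; scipy.stats.fisher_exact on the same table gives the same answer):

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of every table (with the same
    margins) that is no more probable than the observed one."""
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # probability of x "accepted" in row 1, with all margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# with data: 3 accepted, 2 rejected; without data: 4 accepted, 11 rejected
p = fisher_exact_two_sided([[3, 2], [4, 11]])
print(round(p, 4))  # 0.2898
```

With only twenty papers, an exact test like this is the right tool; a chi-square approximation would be unreliable at these sample sizes.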

But it hardly matters, as the point of the hoax wasn’t to show that using data helped mislead reviewers. Even if there were a difference, it wouldn’t affect BP&L’s point: that palpably ridiculous papers, with or without numbers, were accepted by humanities journals because they conformed to the journals’ ideology. In fact, if you think about another famous hoax—Alan Sokal’s Social Text hoax of 1996—it involved a paper that used verbal arguments rather than data. So it’s not numbers that matter. Nevertheless, Hanlon wants to claim that scientism is still at play:

So what does the latest hoax tell us about the extension of scientism into academic fields that aren’t reducible to purely scientific explanations?

Part of the answer lies in a prior hoax, perpetrated by New York University physicist Alan Sokal in 1996. Sokal got an article laden with nonsensical jargon and specious arguments accepted at Social Text, a leading (though not peer-reviewed) cultural theory journal. The infamous “Sokal Hoax” was instructive, too, because, as Social Text editors Bruce Robbins and Andrew Ross explained after Sokal went public about his actions, they didn’t accept his article out of fealty to its politics or its jargon, but rather out of trust in — perhaps even reverence for — an eminent scientist’s engagement with cultural theory.

Remember that the more recent hoaxers didn’t just content themselves with verbal nonsense (as Sokal did); they also faked data, and not in a way that reviewers should necessarily dismiss without a good reason to do so. Columbia University sociologist Musa al-Gharbi found that the hoaxers’ “purported empirical studies (with faked data) were more than twice as likely to be accepted for publication as their nonempirical papers,” which lends support to this possibility. It’s entirely possible that reviewers took these submissions seriously out of respect for scientific conclusions, not out of anti-science bias. This would also align with broader research showing that political ideology is not actually what causes people to distrust science.

So if you use numbers, you’re damned for scientism, and if you don’t use numbers, you’re damned for scientism because you’re a scientist. You can’t win!

But were there any dangers in promulgating false data the way that BP&L did? No, because their papers never entered the literature. The trio of hoaxers promptly informed the journals of the hoax after the papers were accepted, and, as far as I know, none of those papers stand as published contributions.

There are other wonky statements in Hanlon’s paper as well, but I’ll give just two:

But the question of whether AI judges should replace human judges is a complex civic and moral question, one that is by definition informed but not conclusively answerable by scientific facts. It’s here that observations like Ocasio-Cortez’s become so important: If racist assumptions are baked into our supposedly objective tools, there’s nothing anti-scientific about pointing that out. But scientism threatens to blind us to such realizations — and critics such as Lindsay, Pluckrose and Boghossian suggest that keeping our eyes open is some sort of intellectual failing.

First of all, scientism doesn’t blind us to realizing that bias might occur. Scientists in love with their own theories may tend to hang onto them in the face of countervailing data, but eventually the truth will out. We no longer think that races form a hierarchy of intelligence, with whites on top; we no longer think that the Piltdown man was a forerunner of modern humans, and so on. It is scientists, by and large, who dispel these biases. More important, BP&L did not suggest that keeping our eyes open was “some sort of intellectual failing.” It was in fact the opposite: they suggest that keeping our eyes open makes us see how ridiculous are papers written to conform to an ideology, papers that make crazy assertions that would startle anybody not already in the asylum.

Finally, Hanlon tries to exculpate the hoaxed journals because they are “interdisciplinary”:

Indeed, one of the liabilities of interdisciplinary gender studies journals like those that fell for the hoax is that, as I’ve argued, they’re actually not humanities journals, nor are they strictly social science journals. As such, they conceivably receive submissions that make any combination of interpretive claims, claims of cultural observation, and empirical or data-based claims. For all of their potential benefits, these interdisciplinary efforts — which have analogues in the humanities as well — also run into methodological and epistemological challenges precisely because of their reverence for science and scientific methods, not because of anti-science attitudes.

No, these journals fell for the hoaxes not because of their reverence for “science and scientific methods” (we have no data supporting that claim), but out of reverence for the ideology of the papers BP&L submitted: Authoritarian Leftist “grievance” work, in line with what these journals like.

This attitude—that we should go easier on work that conforms to what we believe, or what we’d like to think—is the real danger here. And there’s a name for it: it’s called confirmation bias. And it’s more of a danger in the humanities than in the sciences, simply because in science you can check somebody else’s work with empirical methods.

74 thoughts on “More science-dissing: WaPo’s misguided criticism of “scientism””

  1. three points:

    [1.] when would “blind reverence” for anything be good? We do not learn from the article. But that’s headline writing, I guess.

    [2.] “reverence” : isn’t science fundamentally irreverent? If something doesn’t work, even if it is “revered”, so what – it doesn’t work, and is either shelved or discarded.

    [3.] I see a failure to distinguish “scientism” from “scientific literacy” in this article. For instance, if the reviewers of BP&L’s paper had been scientifically literate, the article would not have been published. Likewise, “[AH]: Ocasio-Cortez’s critics, who trust too easily in the appearance of scientific objectivity”, simply lack scientific literacy – a literacy which would help to know when to say “I don’t know”.

    1. the quote that I think was stimulating the idea for [2] was what follows, and apologies if this is too long:

      “In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.”

      -Richard Feynman
      The Character of Physical Law
      chapter 7, “Seeking New Laws,” p. 156 [as presented in edited book]

      source:

      https://en.wikiquote.org/wiki/Richard_Feynman

      1. Feynman is a great physicist, but his philosophical analysis of his own thought process misses something we’ve talked about elsewhere: the relativity (not subjectivity) of wrong. Namely comparing the results may tell you that you are on to something but not quite there. That’s the most interesting case of all.

        1. Forgive this if it comes across belligerent, I’m trying some examples:

          Phlogiston – wrong.

          Evolution – not wrong.

          God – wrong.

          I’m not seeing it…

          BUT I only was filling out where my notion for my [2] came from…

          1. Newton’s laws of gravitation – correct to (whatever amount), relative to GR.

            More crucially, these aren’t specific hypotheses, which is what law statements are.

  2. The problem with the AI “biases” is that you and AOC are not speaking the same language. When you say that the AI reflects the “biases of the creators”, you mean it in the mathematical sense. That is, if the creators set criteria x, y, and z, the AI will reflect those criteria.

    When people like AOC say that AI reflects the “biases of the creators”, she means that the AI is racist, because the creators are racist. (because, in her mind, of course they’re racist)

    These are two VERY different things.

    I saw an article the other day about Amazon’s Facial Recognition being less able to distinguish dark skinned faces compared to lighter skinned faces. The writers implied that this was proof that Amazon was racist, rather than the simpler explanation that darker surfaces reflect less light. It’s a valid criticism when talking about the effectiveness of facial recognition, but to leap from that to accusations of racism is absurd.

    1. I saw an article the other day about Amazon’s Facial Recognition being less able to distinguish dark skinned faces compared to lighter skinned faces. The writers implied that this was proof that Amazon was racist, rather than the simpler explanation that darker surfaces reflect less light.

      Ummm, so use images in IR illumination? When I got security cameras (2 houses ago, or 3?), the norm was for IR cameras because they work without calling attention to themselves and screaming “Rob me, I’ve got something worth stealing!”.
      Or … maybe too many people who are constructing / marketing “security systems” are selling “security theatre”. Probably couldn’t get a job as “an assistant professor of English” and weren’t much good at flipping burgers.

      1. But these algorithms are intended to work with normal phone/tablet/computer cameras or with existing photos (like Apple’s Photos app can find all of your pictures with a particular person in them, once you’ve given it enough training data).

        IR cameras aren’t really an option for these use cases.

        /@

  3. Aaron Hanlon: “If racist assumptions are baked into our supposedly objective tools, there’s nothing anti-scientific about pointing that out.”

    Yet another strawman. Nobody knowledgeable in AI would claim there’s any supposed objectivity in “the tools” (in fact they know that human subjectivity being infused into the system is one of the major problems), so of course they wouldn’t claim it’s anti-scientific to point out what is common knowledge in the field.

    1. I think you’re missing the point here, or deliberately deflecting it. Hanlon, like AOC, is basically making the same point as you are – but Hanlon is speaking back against the US Senator (or was it a Congressman?) who specifically denied what you and he and AOC are all saying.

      So it’s not such a strawman, given where the public debate is at right now.

      1. I might have missed the point, but your accusation that I deliberately deflected it is an accusation that I was writing in bad faith, and I was not. That’s a Roolz violation, and so you must hie elsewhere.

        And, by the way, Hanlon makes other points, like putting numbers in papers makes them likelier to be accepted. Scientific analysis shows he’s wrong. He’s making more than one point and, for an English professor, has written a curiously disjointed essay.

        So many people cannot argue civilly, or they accuse the proprietor of malfeasance. . . You are among these.

  4. one of the liabilities of interdisciplinary gender studies journals like those that fell for the hoax is that, as I’ve argued, they’re actually not humanities journals, nor are they strictly social science journals. As such, they conceivably receive submissions that make any combination of interpretive claims, claims of cultural observation, and empirical or data-based claims.

    So (they should) employ editors and reviewers from all relevant subfields. If a publication needs a physicist to review a paper, get a physicist. If they also need an anthropologist for the same paper, get an anthropologist. This “it’s interdisciplinary!” argument is not a legitimate defense of having a lower acceptance standard, it’s just defending academic laziness on the part of the journal.

      1. As someone who has done interdisciplinary work in the past, I have always striven for meeting the standards of *all* the fields touched, not the intersection of them (which is null in any new area – almost by definition).

  5. “Ergo, we have no evidence that using fake data improved a paper’s acceptance.”

    I think it would be more correct to say that the data do provide evidence that using fake data improved a paper’s acceptance rate, but that the evidence is not statistically significant (due to the very small sample size). The rate of acceptance is much higher for papers with data than for papers without data. It’s weak evidence, but it is evidence, and the magnitude of the effect is large. I’d bet 4:1 that a larger experiment would confirm the effect.

    1. You could say that the difference would occur nearly 30% of the time by chance even if there was no difference. That’s not very impressive! And, as I said, even if there were a significant difference, it wouldn’t really matter.

      1. Yes, I completely agree. My point is just that even non-significant results are evidence. And in this case, with such a large effect size, it’s a good bet that the conclusion is right.

        1. The estimate of the effect size is based on the same small sample as is the hypothesis test. Consequently, we should not, like Trump, be confident of its size.

          1. IIRC I saw it mentioned somewhere that one of the reviewers raised concerns that one of the papers was too reliant on data.

            I’m sorry I can’t easily confirm the details – I am on mobile phone on limited data in emergency accommodation (since November) after a house fire (lost all my computer gear, etc).

      2. Just to split hairs, if a one-tailed test is appropriate, and I think it is, the p-value for Fisher’s exact test drops to 0.207. The one-tailed test is justified because the question is not whether numerical data AFFECT acceptance rates, but rather whether they INCREASE rates. 21% is still not significant, but it is suggestive.
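        The one-tailed figure checks out: it is just the upper tail of the same hypergeometric distribution, a quick stdlib sketch of which (variable names mine) is:

```python
from math import comb

# papers with data: 3 of 5 accepted; without data: 4 of 15 accepted
a, row1, row2, col1 = 3, 5, 15, 7
n = row1 + row2

# one-tailed (upper) Fisher's exact test: the probability that 3 or more
# of the 7 acceptances fall on the 5 data-papers, with margins fixed
p_one = sum(comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
            for x in range(a, min(row1, col1) + 1))
print(round(p_one, 3))  # 0.207
```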

        1. Ah, but is that a post hoc decision that it should be 1-tailed? I’m sure if the effect had been in the other direction then they could have a put a different anti-science spin on it.

    2. Statistical significance is rather arbitrary; “the 5% threshold used in the life sciences is about 2 standard deviations. The physics standard for claiming detection of a new particle is 5 standard deviations, or a probability of around 0.0000006 that the result occurred by chance.”

      IIRC, the guy who came up with the idea used .05 because it made the arithmetic easy.
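      Those thresholds are just two-sided tail areas of the normal distribution, easy to verify with the standard library (function name mine):

```python
from math import erfc, sqrt

def two_sided_p(sigmas):
    """Two-sided tail area of a standard normal beyond +/- sigmas."""
    return erfc(sigmas / sqrt(2))

print(two_sided_p(2))  # ~0.0455, roughly the life-sciences 5% convention
print(two_sided_p(5))  # ~5.7e-07, the physics "discovery" bar
```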

      1. In most of the courses where I teach this stuff I emphasise confidence intervals over hypothesis tests, and when teaching the latter I always get the students to think about whether the semi-arbitrary choice of 5% (2 standard errors) really makes sense.

        1. Yes, exactly. The null hypothesis testing model is rarely the correct one in biology; nearly always the right model is the estimation of a meaningful parameter, with confidence intervals expressing our uncertainty in the estimate. But it is very difficult to get biologists to realize that.

  6. The photo of Alexandria Ocasio-Cortez is there because she said (correctly) that algorithms themselves aren’t pure science, but reflect the intentions and perhaps the prejudices of people who construct them.

    Pointing to “algorithms” is a widespread and inaccurate description of the problem. In most AI technology, algorithms are only part of the system, the other major part being the data. (Algorithms + Data Structures = Programs is still fundamental.)

    In most AI systems the data is in the form of trained neural networks, simulated evolved populations, etc. – some form of encoded knowledge. These knowledge bases are where the human prejudices are inevitably captured, not in the algorithms, for the most part.

    The unwanted prejudices encoded in these systems don’t come from those that build these systems (although the system builders have a responsibility to weed it out), it comes from the data they are using. And the data is skewed (more available images of white westerners than other cultures, more available writing in English, etc.).

    Using the term “algorithm” to describe AI systems is inaccurate and obfuscates the problem.

  7. Even were Hanlon’s acceptance-rate data significant, the culprit here wouldn’t be “scientism”; the culprit here is the scientific illiteracy (and, perhaps, the illiteracy tout court) of the academics making the decisions for the hoaxed journals.

  8. The BP&L paper accepted by the journal “Gender, Place, and Culture”, dealing with canine “queer performativity” in a Portland dog park, included “data” on dog behaviors. How scientistic! In the better world Professor Hanlon prefers, there would perhaps have been no such “data”, just the writers’ feelings about the dogs’ feelings.

    On the other hand, one point related to
    Hanlon’s could be made. Some Humanities departments openly boast of competition with STEM for gravy. Some years ago, I noticed a breathless account of a text analysis unit at the University of Rummidge in the UK which paraded the number of £ its wonderful computer facilities cost. David Lodge couldn’t have invented anything better.

    1. Is this tongue in cheek? If so for the benefit of people like me who have been caught out … City of Rummidge is fictional, but it does bear close resemblance to Brum.

      🙂 🙂

  9. The key term in this story is Confirmation Bias and I think it has taken over in many areas besides good science. If I may provide an example in law versus history. The far right is expert in working confirmation bias and did so with gun rights leading up to the Heller opinion we live under today.

    One area of focus to gain public opinion was the use of law schools. In the 1990s the NRA paid three lawyers approximately a million dollars to write thirty articles for law reviews, all of them designed to generate an alternative body of legal scholarship. Unlike historical journals where submissions must undergo peer review, law school publications are run by students requiring no editorial screening by academic professionals.

    Fast forward to 2008 where Scalia used law office history to build his case for Heller. Scalia’s methods are accepted and even admired by many legal scholars while professional historians regard it as flagrant violation of the requirements of their craft.

  10. If Hanlon’s beef is that making something look ‘sciency’ leads to some people letting their critical faculties drop then he has a point. The editors of the journals that accepted BP&L’s submissions may have been fooled by the numerical ‘data’ included, into thinking that the papers had some academic merit in the same way that an advert with a man in a white coat pronouncing that a toilet cleaner ‘kills all germs dead’ persuades some shoppers to buy the brand in question.

    But if this is ‘blind reverence for science’ it is not the fault of science itself which, as pointed out above, does not trade in blind reverence.

  11. Scientism is a word coined in the 19th century. It was often used by religious writers to characterize Darwinian evolution. The word does not have a pretty history.

  12. Whether it was intended this way or not, making the journal article fiasco a main focus here reads as trying to make excuses and reverse blame (it wasn’t too much post modernism that caused this, it was too little). I think this is a bad idea – doubling down and not admitting mistakes in the face of valid criticisms just makes the group doing it look insecure and unwilling to listen. There are few things that feel worse than having to admit a mistake to a group of people who are going to gleefully gloat some version of “I told you so!” over it, I get that – but it makes the group that does it look like the bigger person in the long run. Besides that, when accusing people of scientism, there are age old tried and true arguments that are almost impossible to refute, so why reinvent the wheel, lol! (Said with humor, as I think Russell intended below, as that doesn’t always convey in writing – but also an acknowledgment that it’s better for subjective realms to continue to address the subjective, where I would argue that, outside of the inevitable missteps, they have made quite a bit of progress. In the experiential realm, the last few decades have seen beautiful art, architecture, films, literature, and so on; culture has become overall far more inclusive and less focused on retributive justice, and so on. No one can really argue rationally that science should dictate what kind of art we want – or if they do, that is dystopian scientism – and so the humanities and related subjects always have a relatively undisputed realm there.)

    Every one knows that “mind” is what an idealist thinks there is nothing else but, and “matter” is what a materialist thinks the same about. The reader knows also, I hope, that idealists are virtuous and materialists are wicked. – Bertrand Russell, The History of Western Philosophy

  13. “One form of scientism I’ve criticized has been the claim (Sam Harris is one exponent) that science and objective reason can give us moral values; that is, we can determine what is right and wrong by simply using a calculus based on “well being” or a similar currency.”

    While my hunch is that you are correct, I don’t think the argument against it is very strong right now, and certainly don’t see this as an obviously illegitimate area of scientific inquiry.

    Why not let science see what it can do? Certainly science can uncover what are universal moral precepts, and can probe the conditions about how they are applied and interact, and can formulate testable hypotheses about how they may have come about. It may turn out that all science can do is provide a description of a universal moral sense, but even that would be useful in, say, tailoring laws and policies so that they better comport with that moral sense.

  14. A few criticisms (constructive, I intend and hope):

    1. The title is misleading; the subject is an opinion piece published in an opinion section of the Washington Post. The opinions expressed are those of the author, and do not necessarily reflect those of the editors of the Washington Post. Therefore, referring to “WaPo’s misguided criticism” implies that the editors agree with the author, when that has not been established.

    2. Accepting the made-up term “scientism” (w/o the quotation marks, as in the 3rd paragraph and a few subsequent uses), falls into the trap used e.g. for indoctrination by evangelical religious “schools” which start (so I’m told) by “explaining” that “isms” like Nazism, communism, fascism, etc. are “evil” and then proceed to refer to made-up terms “Darwinism”, “evolutionism”, and “scientism” as a form of implied guilt by association. I suspect that authors in the humanities who use the term have a similar agenda: the implication that science (as “scientism”) is an ideology rather than a process.

    3. The paragraphs about morality don’t seem to me to add much to the discussion of criticism of science (as “scientism”); the one mentioning Harris mentions only a black-and-white binary view of morality as “right and wrong” as distinct from a spectrum which includes zero-sum situations and the distinctions between the [morally] obligatory, recommended, permissible, deprecated, and forbidden. The paragraph mentioning abortion introduces a can of worms (define “soul”, demonstrate that such a thing as a “soul” exists, demonstrate that babies (i.e. the results of live human births) have such a thing, explain precisely how and why that applies to a fetus, and how a spontaneous abortion (the result of the overwhelming majority of human embryos) is or is not a moral issue for a supposed deity, etc.).

  15. Scientism is, sadly, rife and rampant, going by the technobabble in TV ads. “If you can’t blind them with brilliance, then baffle them with bullshit” applies. Notice how everyone touting some overpriced toothpaste with some miracle chemical ingredient wears a white lab coat? (Also, woo of all sorts, from magic Gwyneth Paltrow vagina eggs to $6000 speaker cables**, err, interconnects).

    It seems the editors of some humanities journals are just as dumb as the average TV viewer when it comes to swallowing nonsense they don’t understand. (I wonder if scientific editors are as bad? Probably some are).

    So Hanlon has a point about scientism.

    But the answer isn’t to mistrust all scientific claims automatically, it’s to consider their credibility and mistrust implicit appeals to authority (e.g. white lab coats worn in adverts). Just as one mistrusts Nigerian bank officers with $35 million to launder.

    I think more education in basic scientific principles and critical thinking would help.

    cr
    ** I just Googled. $40,000.

    1. I imagine that people who spend $40,000 on speaker cables somehow manage to convince themselves that they can hear a commensurate improvement in the sound coming out of the speakers!

      1. I think one of the phenomena at work there is “motivated reasoning”.

        Convoluted with “sunk cost” as well.

        I think a similar thing came up on the “overpriced stuff”(?) Post – $40 movie date- but in that case, the participants knew it was overpriced, but I think the movie execs know that and still get you coming back for more, not immediately but much later on.

      2. “I imagine that people who spend $40,000 on speaker cables somehow manage to convince themselves that they can hear a commensurate improvement in the sound coming out of the speakers!”

        They do exactly that.

        And to anyone who contemplates the possible sources of distortion in the recording process, the amplifier, and particularly the acoustics of loudspeaker construction and the listening room, the idea that speaker cables could have any significant influence is ludicrous. (Provided only that they have sufficiently low resistance, capacitance and inductance – which, frankly, any bit of wire of sufficient thickness automatically has. To put it another way, you’d have to try very hard indeed to manufacture a speaker cable that was defective in that respect).

        cr

        1. The question is: what is it about speaker cables that makes a perceived upgrade there seem more important than upgrading, say, any of the other items you mentioned?

          I guess it’s that cables look cheap – and they are important to produce sound – so it must be easy to upgrade them. So the consumer simply looks for cables that look better – if they look good, they must be good.

          1. Well, cables are cheap – that is, perfectly technically satisfactory cables are cheap; the dedicated ‘audiophile’ ones certainly are not.

            I don’t know why they’re the subject of so much attention. There must be some psychological explanation for it but I can’t figure it out.

            cr

  16. ‘I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls.”’

    Easy, on that point. Use the reductio ad absurdum argument. If foetuses (NOT babies!) have souls, when do they acquire it? When the sperm hits the egg? How does _one cell_ have a soul when it doesn’t even have a nervous system?

    It seems to me trivially easy to demonstrate that elective abortion is a net good, since the only people who suffer are women who are denied it, and the adverse consequences for them are enormous. And, in a free society, why should other people be allowed to impose their own religious beliefs on anyone?

    Obviously this argument won’t be accepted by the religiously blinkered, but then no rational argument will ever convince them.

    cr

    1. “How does _one cell_ have a soul when it doesn’t even have a nervous system?”

      I don’t believe that we have souls, but this surely does not constitute a logical proof of their non-existence. Why is a nervous system presumed to be necessary for possession of a soul? It seems to me that theologians, sophisticated or otherwise, would simply deflect this argument by suggesting that science cannot penetrate ‘the mystery of the soul’ – where it resides, or how and when it comes to be associated with a body. Personally I prefer to base my lack of belief in the soul on the fact that there is simply no evidence for its existence, and leave it at that.

      1. Well, that too.

        The point is, in answer to PCC’s statement ‘I am a consequentialist who happens to agree with the well-being criterion, but I can’t demonstrate that it’s better than other criteria, like “always prohibit abortion because babies have souls”’, I think he’s incorrect. The well-being criterion is easily supported with masses of actual evidence, primarily the effect on a woman of an unwanted pregnancy. The ‘soul’ argument has, as you say, no evidence at all.

        The preponderance of the evidence is all one way, for any rational viewpoint.

        Now that may not be sufficient to convince a religiously blinkered person that he’s wrong, but then no rational argument is capable of persuading someone who doesn’t want to listen.

        cr

        1. Even if you reject the soul argument, the animal welfare argument remains. How do you weigh the suffering of animals raised for food against the human pleasure of eating them? It is simply impossible to perform a calculus of well-being when you can’t fathom the consciousness of an animal.

          Besides, “well being of society” is itself a preference.

          1. I really can’t see how animal welfare has any bearing on elective abortion.

            But I suspect we’re talking about different things.

            Morality based on human preferences? Well, with respect to elective abortion, humans are the only species whose preferences are relevant. Human wellbeing is the only consideration. So the grounds for debate are simple.

            With respect to animal rights, obviously animals could claim to have an interest too. And the question of relative weighting then arises.

            So I think the two fields of debate are qualitatively different.

            cr

          2. Oh, and I agree that “It is simply impossible to perform a calculus of well being when you can’t fathom the consciousness of an animal.”

            cr

          3. On maximizing well-being as a criterion for morality… when you think about animal rights, you get into topics like the fact that there are (according to Google) about a quintillion insects alive in the world today, compared with 7 billion humans; and that’s not even factoring in other forms of life. Assuming those insects feel even rudimentary pain and pleasure, and given how vastly they outnumber us, is it immoral that we more or less never think about insects? Aside from not exterminating them, should we factor them into urban planning, and reduce housing for humans if it would harm insects? If we could increase the total amount of well-being in the world exponentially by building giant buildings full of nothing but happy, well-cared-for insects, why aren’t we doing that?

            I think another example that’s important to think about is human rights. The idea of total or collective well-being is already present at relatively higher levels in deeply collectivist cultures (I say relatively because I don’t think any human culture has taken the concept to its logical extreme; every culture I know of acknowledges individual preference to at least some extent). If you look at a deeply collectivist approach to solving problems and find yourself largely in agreement, then your intuitions may well skew this way; but my guess is that most Westerners, when seeing such principles actually in action, are not fans of the idea.

            Last but not least, words like ‘well-being’ are hard to define. Drug addicts presumably feel great well-being while high. Would it be moral to open centers wherein thousands of people could live in a near vegetative state on IV drips of drugs, attended to by robots, because everyone there felt blissed out? I think the vast majority of people would say no, meaning our intuitions about morality go further than just well-being.

      2. No, but physics does. (See above. And here’s the link – you do have to sit through nearly an hour on quantum field theory* to get to the punchline, but it’s time well spent.)

        /@

        * Not to be confused with quantum felid theory, which resolves the paradox of Schrödinger’s cat.

  17. Scientism works quite well if you exclude “Interpretive Science” and embrace science that comes up with testable causal explanations.

    Scientism doesn’t align well with human psychology, but this just shows how non-intuitive scientific theories are.

    Reductive science has given us by far the best picture of the nature of reality. And for me it’s obvious that science is the only way to improve on this picture.

  18. Aaron Hanlon opens with the Alexandria Ocasio-Cortez A.I. story and the keyword “racism” to tap into current click habits. Then he sets her against a supposedly racist or Trumpian opposition, which is at the same time the side of “scientism”. Once that context is established, he drops Boghossian, Lindsay and Pluckrose into it, suggesting they are on the right-wing side, maybe somehow racists – at least making it seem as if they are on the “bad side”. But what exactly have they got to do with the AI or AOC? The obvious purpose is to make the bad context rub off on them, and to establish the one-to-one correspondence between politics and epistemology that Alan Sokal decried a generation ago. Never mind the bizarre “AI judges” example.

    And so we continue with Alan Sokal’s hoax, where the author takes the Social Text editors at their word. Interesting! Aaron Hanlon not only believes them, he also argues as they did. As I’ve pointed out a few times, the Woke Faction generally does: it’s the very same dualistic “culture war” in which allegedly everything from epistemology to politics neatly aligns with two sides, the natural sciences located on the “right wing” while the postmodernists fancy themselves “the left” – which I maintain is arrant nonsense, even if I appear to be almost alone in rejecting this narrative (Sokal and Chomsky rejected it too).

    The technique employed here is also stereotypical. I first encountered it with the Woke Atheists and their habit of conjuring up distant cases and contexts where some bad people – racists, misogynists, sexists – did something questionable; then, on the flimsiest grounds, Richard Dawkins or some other Witch of the Week is brought onto the scene by the second half of the blog post, as if they had anything to do with the previous context. The purpose of this genre was most obviously to smear. I sometimes felt I was perhaps expecting too much of American atheism and skepticism. This should be a serious mark against such a writer, but apparently it was acceptable and mainstream, and as we see here it is publishable by a once reputable outlet.

    The nature of the new hoax and its use of data are also mischaracterised. As with Sokal’s hoax before it, Aaron Hanlon appears to have not the faintest idea what the paper says. Even if the data had somehow been collected correctly, would it support the thesis and the conclusion? I know I run long, but let me quote what it says.

     “Dog Park”
    Title: Human Reactions to Rape Culture and Queer Performativity in Urban Dog Parks in Portland, Oregon
    By: Helen Wilson, Ph.D., Portland Ungendering Research (PUR) Initiative (fictional)
    Appears in: Gender, Place, and Culture
    Status: Accepted & Published. Recognized for excellence. Expression of concern raised on it following journalistic interest, leading us to have to conclude the project early.

    Thesis: That dog parks are rape-condoning spaces and a place of rampant canine rape culture and systemic oppression against “the oppressed dog” through which human attitudes to both problems can be measured. This provides insight into training men out of the sexual violence and bigotry to which they are prone.
    Purpose: To see if journals will accept arguments which should be clearly ludicrous and unethical if they provide (an unfalsifiable) way to perpetuate notions of toxic masculinity, heteronormativity, and implicit bias.

    Maybe I am expecting too much from the Washington Post and Aaron Hanlon. But I am charitable and think that Hanlon’s Razor applies.

  19. ” Like Ocasio-Cortez’s critics, who trust too easily in the appearance of scientific objectivity…”

    Stop criticizing her for getting facts wrong or not knowing things she’s talking about! You’re just being too scientific to understand.

  20. Science can’t respond to religious dogma with any hope of negotiation, because religious faith and beliefs are departures from logic, and logic can’t engage with non-logic. There’s no complexity there; thus logical discussion with anti-abortionists can’t occur. Decisions about well-being are split between those making logical decisions and those using religion. (Well-being needs to be defined; perhaps: reasonably shared happiness.) The logical choices can be labeled scientific, but there will always be some decisions which result in less than absolute well-being for all. All logical people know that they will finally die – the ultimate absence of well-being. Degrees of this less-than-total well-being are inevitable. As the Dalai Lama says, suffering is inevitable.

  21. As I’ve commented on reddit just now, it’s suspicious that accusations of scientism tend to come when the belief under threat is dearly held by the accuser, and when that belief has no evidence going for it.

    -Ryan

  22. The humanities are many disciplines, and those “grievance studies” journals do not represent the humanities. As for Hanlon’s article, whatever its faults, he is trying to strike a conciliatory tone, not diss science. There’s nothing “postmodern” about it.

    1. Did you read what I wrote? I didn’t accuse Hanlon of postmodernism; I accused the humanities of being infected with postmodernism. And, frankly, I don’t see the conciliatory tone. He may have been trying for that, but he didn’t succeed. The piece is a dog’s breakfast that doesn’t cohere (odd for an English professor), and he makes claims about numbers that are not supported statistically. Yes, I know the humanities are many disciplines, but many of them are being infected by authoritarian Leftist ideology, and it’s sad. The grievance-studies journals do represent at least some of the humanities, for I’ve highlighted equally ludicrous but serious published papers in some of those fields – papers indistinguishable from the hoax ones. There is a cancer in the body of the humanities. You may disagree, but that’s one reason why they’re waning on college campuses.

  23. At any rate, much of the criticism of science comes in the form of accusations of “scientism”, defined, according to the article below in the Washington Post, as “the untenable extension of scientific authority into realms of knowledge that lie outside what science can justifiably determine.”

    Here’s the thing that always bugged me with that – just because science isn’t necessarily the final word on something, it doesn’t mean that science doesn’t in fact have authority beyond its original realm.

    If a book claiming to deliver ultimate divine moral truths says the Earth is flat, then we’re going to go with the scientific evidence that shows that the Earth is not, in fact, flat.

    All the claims of the holy book can still be debated, but we can safely bin the idea that the Earth is flat.

    Mathematics doesn’t have the final word on everything, yet I don’t think any of us would argue there is some great philosophical, spiritual or artistic truth in 2+2=5.

    And that is what scientism generally boils down to, so far as I can see – the fact that a lot of philosophers and people in the humanities are lazy asses who don’t like doing the work to get stuff right, and really don’t like it being pointed out when they’re wrong.

    When they complain about scientism what they’re really trying to do is shuffle their claims out of the realms of the verifiable, because their claims are garbage and so are they.

  24. The WP is using a limited and misleading definition of “scientism.”

    A better one would be: presenting the language and trappings of science, and reverence for those things, without actually using science. Consider it a form of credentialism. Science is a method, not a style, and someone in a lab coat who uses profound words and has the right letters from the right schools after his name may have training in science, but he also has the same flaws as any other human and is not necessarily using science. Anyone should be free to call him on that, and a scientist who supports his work not with data and logical conclusions from it but with “Because I’m a scientist” should be universally recognized as a fraud.

    Early in my studies I learned that “If you say so” is the worst thing you can say to a scientist. No way. If I have to hear something from his mouth to believe it, rather than his work speaking for itself, he hasn’t done his job.

    1. Your suggested definition is better but, sadly, not how most people use the word. They use it in the exact sense used here… as a sort of slur against those of us who prefer clear thinking and evidence to the ambiguities of theology and other similar forms of woo.
