Steve Pinker talks with Helen Pluckrose for Counterweight

July 11, 2021 • 8:45 am

You all know Steve Pinker, and surely nearly all of you have heard of Helen Pluckrose, who not only participated in the “Grievance Studies Affair” but also coauthored, with James Lindsay, the book Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity, and has now founded the humanist but anti-woke organization Counterweight.

Here Helen has an eight-minute interview with Steve Pinker. (Note that there’s a photo of Cape Cod in the background, where Steve and Rebecca repair to their second home.) It’s mostly about wokeness and how to combat it.


h/t: Paul

The absence of objective morality

June 21, 2021 • 9:25 am

What does it mean to say that there’s an “objective morality”? The Stanford Encyclopedia of Philosophy calls this view “moral realism” and characterizes it like this:

Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

This is the stand taken by Sam Harris in his book The Moral Landscape, and it’s a view with which I disagree. Although some philosophers agree with Sam that morality is “factual” in this way—and by that I don’t mean that the existence of a moral code is a fact about society but that you can find objective ways to determine if a view is right or wrong—I can’t for the life of me see how one can determine objectively whether statements like “abortions of normal fetuses are wrong” are true or false. In the end, like many others, I see morality as a matter of preference. What is moral is what you would like to see considered good behavior, but as different people differ on what is right and wrong, I see no way to adjudicate statements like the one about abortion.

I’ve said all this before, but it came to mind last night when I was reading Anthony Grayling’s comprehensive book The History of Philosophy. (By the way, that book has convinced me that there is virtually no issue in philosophy that ever gets widespread agreement from nearly all respectable philosophers, so in that way philosophy differs from science. That is not to say that philosophy is without value, but that its value lies in teaching us how to think rigorously and parse arguments, not in unearthing truths about the cosmos.)

It’s clear that empirical observation can inform moral statements. If you think that it’s okay to kick a dog because it doesn’t mind it, well, just try kicking a dog. But in the end, saying whether it’s right or wrong to do things depends on one’s preferences. True, most people agree on their preferences, and their concept of morality by and large agrees with Sam’s consequentialist view that the “right” thing to do is whatever maximizes “well being”. But that is only one criterion for “rightness”, and others, like deontologists such as Kant, reject that utilitarian concept. And of course people disagree violently about things like abortion—and many other moral issues.

One problem with Sam’s theory, or any utilitarian theory of morality, is how to judge “well being”. There are different forms of well being, even in a given moral situation, and how do you weigh them against one another? There is no common currency of well being, though we know that some things, like torturing or killing someone without reason, clearly do not increase the well being of either that person or of society. Yet there is no objective way to weigh one form of well being against another. Abortion is one such situation: one weighs the well being of the fetus, which will develop into a sentient human, against that of the mother, who presumably doesn’t want to have the baby.

But to me, the real killer of objective morality is the issue of animal rights—an issue that I don’t see as resolvable, at least in a utilitarian way. Is it moral to do experiments on primates to test human vaccines and drugs? If so, how many monkeys can you put in captivity and torture before it becomes wrong? Is it wrong to keep lab animals captive just to answer a scientific question with no conceivable bearing on human welfare, one that is simply a matter of curiosity? Is it moral to eat meat? Answering questions about animal rights involves, if you’re a Harris-ian utilitarian, being able to assess the well being of animals, something that seems impossible. We do not know what it is like to be a bat. We have no idea whether any creatures value their own lives, or which creatures feel pain (some surely do).

But in the end, trying to find a truly factual answer to questions like “Is it immoral for humans to eat meat?”, “Is abortion wrong?”, or “Is capital punishment wrong?” seems a futile effort. You can say that eating meat contributes to deforestation and global warming, and that’s true, but that doesn’t answer the question, for you then have to decide whether those effects are “immoral”. Even deciding whether to be a “well being” utilitarian is a choice. You might instead be a deontologist, adhering to a rule-based rather than a consequence-based morality.

You can make a rule that “anybody eating meat is acting immorally,” but on what do you base that statement? If you respond that “animals feel pain and it’s wrong to kill them,” someone might respond that “yes, but I get a lot of pleasure from eating meat.” How can you objectively weigh these positions? You can say that culinary enjoyment is a lower goal than animal welfare, but again, that’s a subjective judgment.

By saying I don’t accept the idea of moral claims representing “facts”, I’m not trying to promote nihilism. We need a moral code, if for nothing else than to act as a form of social glue and a social contract. Without it, society would degenerate into a lawless and criminal enterprise—indeed, the idea of crime and punishment would vanish. All I’m arguing is that such claims rest at bottom on preference alone. It’s generally a good thing that evolution has bequeathed most of us a similar set of moral preferences. I hasten to add, though, that the feelings evolution has instilled in us aren’t necessarily ones we should incorporate into morality, as some of them (widespread xenophobia, for instance) are outmoded in modern society. Others, like caring for one’s children, are good things to do.

In the end, I agree with Hume that there’s no way to derive an “ought” from an “is”. “Oughts” have their own sources, while “is”s may represent in part evolved behaviors derived from living in small groups of hunter-gatherers. But that doesn’t make them “oughts.”

I’m not a philosopher—and I’m sure it shows!—and I know there are famous philosophers, like Derek Parfit, who are moral realists, but my attempt to read the late Parfit’s dense, two-volume treatise On What Matters, said to contain his defense of moral realism, was defeated.

Kenan Malik on judging yesterday’s figures by today’s morality

May 10, 2021 • 12:00 pm

Over at the Guardian, Kenan Malik writes with his usual good sense about judging historical figures by today’s morality—something we just read about this morning vis-à-vis Darwin and other evolutionists. Malik’s particular subjects are Napoleon and Churchill, both in the process of being found “problematic”. Unlike many of the “decolonizers”, Malik is willing to tolerate some ambiguity. And why shouldn’t we, given that morality evolves and was never in the past identical to what it is now?


Malik on Napoleon, with a soupçon of Churchill:

Those who laud [Napoleon’s] legacy claim that he projected France on to the world stage and laid the foundations of a strong state. The Napoleonic Code, a sweeping array of laws covering citizenship, individual rights, property, the family and colonial affairs, established a new legal framework for France and influenced law-making from Europe to South America.

Stacked against this are shockingly reactionary policies. Napoleon reintroduced slavery into French colonies in 1802, eight years after the national assembly had abolished it, imposed new burdens on Jews, reversing rights they had gained after the revolution, strengthened the authority of men over their families, depriving women of individual rights and crushed republican government.

To the far right in France, Napoleon is an unalloyed hero. To many others, his is a troubling legacy. To be wrapped in a complex legacy is not, however, unique to Napoleon. It is the fate of most historical figures, whether Churchill or Wilberforce, Jefferson or Roosevelt, Atatürk or Nkrumah, all of whose actions and beliefs remain contested. Biographies rarely cleave neatly into “good” or “bad”.

Many, though, feel the need to see history in such moral terms, to paint heroes and villains in black and white, to simplify the past as a means of feeding the needs of the present. National and imperial histories have long been whitewashed, the darker aspects obscured. How many people in Britain know of the Morant Bay Rebellion in Jamaica or of the “Black War” in Tasmania? We want to preserve our national heroes untainted, none more so than Churchill, the man who saved “the nation… and the entire world”. Attempts to reassess his legacy can be dismissed as “profoundly offensive” or as “rewriting history”.

At the same time, those who seek to address many of these questions often themselves impose a cartoonish view of the past and its relationship to the present, from the call to take down Churchill’s statues to the mania for renaming buildings. The complexities of history fall foul of the political and moral needs of the present.

There’s more, but you get the gist of it.

The more I think about it, the more opposed I am to spending a lot of time denigrating people whose ideas we teach in class: people like Ronald Fisher, Charles Darwin, Thomas Jefferson, and so on. Yes, a mention or two might be sufficient, but today’s “decolonized curricula” seem to spend more time on the odious history of famous people who advanced good ideas than on the ideas themselves. And yes, Charles Darwin confected (along with A. R. Wallace) the theory of evolution, but he was also a racist, somewhat of a misogynist, and one who believed that white races would supplant others. (I can hear the Discovery Institute licking its lips now: “Coyne admits Darwin was a racist!”).

But these are issues not for science classes but for ethics classes, where nuances can be discussed and developed. And remember, as Steve Pinker reminds us endlessly, morality has changed substantially, and improved, in the past few centuries. What that means is that things we do now (is meat-eating one?) will be regarded as odious in 200 years. Whom can we celebrate today, knowing that in the future they can (and probably will) be demonized? Will Joe Biden be seen as a barbarian because he enjoyed an occasional pork chop? What this all means is that there is nobody we can admire today except insofar as they conform to a quotidian morality that we know will be supplanted.

Further, I can’t help but feel that a lot of those engaged in denigrating people like Darwin, R. A. Fisher, and George Washington are doing so for performative reasons: to tell us, “Look how much better we (and I) are today than our forebears.” Now clearly, we need not celebrate someone like Hitler or the slaveholders of the South at all, for there’s no good that they did to be celebrated. But to deny Darwin some encomiums for what he did, or even Jefferson? Those people surely did do some good things, and their statues are not erected to celebrate the bad things they did.

As I’ve said repeatedly, here are my criteria for celebrating, via statues, plaques, and so on, somebody of the past:

My criteria so far have been twofold. First, is the statue celebrating something good a person did rather than something bad? So, for example, Hitler statues fail this test, though I hear that some Nazi statues are simply left to molder and degenerate rather than having been pulled down. Second, were the person’s contributions on balance good or bad? So, though Gandhi was a bit of a racist towards blacks as a barrister in South Africa, the net amount of good he did in bringing India into existence through nonviolent protest seems to me to heavily outweigh his earlier missteps. Gandhi statues should stay.

What if someone was bad on balance but did a very good thing—should that person be celebrated? That would be a judgment call.

In general, I err on the side of preserving history, for statues and buildings are a mark of history—of times when our morality was different from what it is today. And it’s useful to be reminded of that rather than simply having our past, especially the bad bits, erased. History, after all, isn’t all beer and skittles. We don’t want a lot of Winston Smiths operating in our culture.

So no, Sheffield, don’t spend a lot of time reminding me what a racist and misogynist Darwin was, for that will come at the expense of telling us what Darwin accomplished. All you’re doing, in effect, is showing that, over time, morality has improved.

A Guardian “long read” on free will

April 27, 2021 • 9:15 am

Several readers sent me a link to a new Guardian piece on free will by journalist Oliver Burkeman (some added that I’m quoted a couple of times, which is true). It’s a “long read”, so not one for those with a short attention span, but I have to say that it’s a very good piece, covering all the bases: the definitions, the consequences of contracausal free will, the “solution” of compatibilism, the implications for moral responsibility and for judicial punishment; yes, it’s all there. And although Burkeman’s personal take, given at the end, is a bit puzzling, the piece is a fair introduction to the controversies about free will.


As I said, I have mostly praise for Burkeman’s piece, as he’s clearly done his homework and manages to condense a messy controversy into a readable essay. So take my few quibbles in light of this general approbation.

First, though, I must note Burkeman’s opening, which, surprisingly, shows the hate mail philosophers have received for promulgating determinism. (Burkeman notes, correctly, that even compatibilists who broach a new kind of free will are still determinists.) Although I was once verbally attacked by a jazz musician who said I’d taken away from him the idea that he had complete freedom to extemporize his solos, I’ve never received the kind of mail that Galen Strawson has:

. . . . the philosopher Galen Strawson paused, then asked me: “Have you spoken to anyone else yet who’s received weird email?” He navigated to a file on his computer and began reading from the alarming messages he and several other scholars had received over the past few years. Some were plaintive, others abusive, but all were fiercely accusatory. “Last year you all played a part in destroying my life,” one person wrote. “I lost everything because of you – my son, my partner, my job, my home, my mental health. All because of you, you told me I had no control, how I was not responsible for anything I do, how my beautiful six-year-old son was not responsible for what he did … Goodbye, and good luck with the rest of your cancerous, evil, pathetic existence.” “Rot in your own shit Galen,” read another note, sent in early 2015. “Your wife, your kids your friends, you have smeared all there [sic] achievements you utter fucking prick,” wrote the same person, who subsequently warned: “I’m going to fuck you up.” And then, days later, under the subject line “Hello”: “I’m coming for you.” “This was one where we had to involve the police,” Strawson said. Thereafter, the violent threats ceased.

Good lord! Such is the resistance that people have to hearing that they don’t have “contracausal” (you-could-have-chosen-otherwise) free will. Regardless of what compatibilists say, belief in contracausal free will is the majority view in many places (see below).

There are only a few places where Burkeman says things I disagree with. One is how he treats the issue of “responsibility”. My own view, as someone Burkeman calls “one of the most strident of the free will skeptics,” is that while we’re not morally responsible for our misdeeds (a notion that implies we could have chosen a different path), we are what Gregg Caruso calls “answerably responsible”. That is, as the agents of good or bad deeds, whatever actions society deems appropriate in response to our acts must devolve upon our own bodies. Therefore, if we break the law, we can receive punishment: sequestration to keep us out of society, where we might transgress again, until we are deemed “cured”, and punishment to deter others. (Caruso, also a free-will skeptic, disagrees that deterrence should be an aim of punishment, since it uses a person as an instrument to affect the behavior of others.) Caruso holds a “quarantine” model of punishment, in which a transgressor is quarantined just as Typhoid Mary should be quarantined: to effect possible cures and to protect society from infection. Burkeman describes Caruso’s model very well.

What is not justified under punishment (and most compatibilists, including Dan Dennett, agree) is retributive punishment: punishment meted out by assuming that you could have chosen to behave other than how you did. That assumption is simply wrong, and so is retributivism, which is largely the basis of how courts in the West view punishment.

As for praise or blame, or responsibility itself, Burkeman somehow thinks they would disappear even under a hard-core deterministic view of society:

Were free will to be shown to be nonexistent – and were we truly to absorb the fact – it would “precipitate a culture war far more belligerent than the one that has been waged on the subject of evolution”, Harris has written. Arguably, we would be forced to conclude that it was unreasonable ever to praise or blame anyone for their actions, since they weren’t truly responsible for deciding to do them; or to feel guilt for one’s misdeeds, pride in one’s accomplishments, or gratitude for others’ kindness. And we might come to feel that it was morally unjustifiable to mete out retributive punishment to criminals, since they had no ultimate choice about their wrongdoing. Some worry that it might fatally corrode all human relations, since romantic love, friendship and neighbourly civility alike all depend on the assumption of choice: any loving or respectful gesture has to be voluntary for it to count.

But no, praise and blame are still warranted, for they are environmental influences that can affect someone’s behavior. It is okay to praise someone for doing good and to censure them for doing bad, because this might change their brains in ways that make them apt to do less bad and more good in the future. (Granted, we have no free choice about whether to praise or blame someone.) The only thing on Burkeman’s list that’s not warranted is retributive punishment. Gratitude, pride, guilt, and so on are useful emotions, for even if we had no choice in what we did, these emotions drive society in positive directions, reinforcing good acts and discouraging bad ones.

Burkeman goes on, emphasizing the danger to society of promulgating determinism—a determinism that happens to be true. As the wife of the Bishop of Worcester supposedly said about Darwin’s view that we’re descended from apes,

“My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray that it will not become generally known.”

This appears to be the view not only of Burkeman but also of Dan Dennett. As Burkeman notes, “Dennett, although he thinks we do have [compatibilist] free will, takes a similar position, arguing that it’s morally irresponsible to promote free-will denial.”

Morally irresponsible to promulgate denial of contracausal free will? Morally irresponsible to promulgate the truth? Or does he mean morally irresponsible to deny compatibilist notions of free will like Dennett’s? Either way, I reject the idea that we must hide the truth, or quash philosophical discussion, because it could hurt society.

Burkeman goes on about morality:

By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with his fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.”

The operative word here is “deserves”—the idea of “desert” that’s the topic of the debate between Caruso and Dennett that I recently reviewed. If by “deserve” you mean that you’re deemed “answerably responsible,” and thus can undergo punishment for something bad you did, or can justifiably be praised for something good, then yes, there is good justification for holding people answerably responsible for their deeds and taking action accordingly.

There is much to argue with in the piece, not with Burkeman, but with some of the compatibilists he quotes. One of them is Eddy Nahmias:

“Harris, Pinker, Coyne – all these scientists, they all make the same two-step move,” said Eddy Nahmias, a compatibilist philosopher at Georgia State University in the US. “Their first move is always to say, ‘well, here’s what free will means’” – and it’s always something nobody could ever actually have, in the reality in which we live. “And then, sure enough, they deflate it. But once you have that sort of balloon in front of you, it’s very easy to deflate it, because any naturalistic account of the world will show that it’s false.”

Here Nahmias admits that determinism reigns, and implicitly that contracausal free will is nonexistent. But what I don’t think he grasps is that the naturalistic view of will, determinism, while accepted by him and his fellow compatibilists, is flatly rejected by a large majority of people—and in several countries (see the study of Sarkissian et al., though I note that when presented with concrete moral dilemmas, people tend to become more compatibilistic). Contracausal free will is the bedrock of Abrahamic religions, which of course have many adherents. Those who proclaim that everybody accepts pure naturalism and the deterministic behavior it entails—that denying that is “an easily deflatable balloon”—probably don’t get out often enough.

Likewise, those who say a society grounded on determinism will be a dreadful society full of criminals, rapists, and murderers are wrong, I think. This is for two reasons. First of all, I know quite a few free-will skeptics, including Caruso, Alex Rosenberg, Sam Harris, and others, as well as myself, and if free-will skepticism had a palpable effect on anyone’s behavior, I can’t see it. It’s an unfounded fear.

The other reason is that there’s an upside in being a determinist. We still have our illusions of free will, so we can act as if our choices are contracausal even if, intellectually, we know they’re not. Hard determinists like myself are not fatalists who go around moaning, “What’s the use of telling the waiter what I want? It’s all determined, anyway.”

And there’s the improvement in the penal system that comes with accepting determinism: there’s a lot to be said for Caruso’s “quarantine” model, which is more or less in effect in places like Norway, though I still adhere to the value of deterrence. And, as Burkeman says eloquently, a rejection of free will paradoxically makes us “free” in the sense that we can be persuaded to give up unproductive retributive attitudes and overly judgmental behavior:

In any case, were free will really to be shown to be nonexistent, the implications might not be entirely negative. It’s true that there’s something repellent about an idea that seems to require us to treat a cold-blooded murderer as not responsible for his actions, while at the same time characterising the love of a parent for a child as nothing more than what Smilansky calls “the unfolding of the given” – mere blind causation, devoid of any human spark. But there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more. And for those of us prone to raging at others for their minor misdeeds, it’s calming to consider how easily their faults might have been yours. (Sure enough, some research has linked disbelief in free will to increased kindness.)

. . . . Yet even if only entertained as a hypothetical possibility, free will scepticism is an antidote to that bleak individualist philosophy which holds that a person’s accomplishments truly belong to them alone – and that you’ve therefore only yourself to blame if you fail. It’s a reminder that accidents of birth might affect the trajectories of our lives far more comprehensively than we realise, dictating not only the socioeconomic position into which we’re born, but also our personalities and experiences as a whole: our talents and our weaknesses, our capacity for joy, and our ability to overcome tendencies toward violence, laziness or despair, and the paths we end up travelling. There is a deep sense of human fellowship in this picture of reality – in the idea that, in our utter exposure to forces beyond our control, we might all be in the same boat, clinging on for our lives, adrift on the storm-tossed ocean of luck.

I agree with this. And there’s one more benefit: if you are a free-will skeptic, you won’t always be blaming yourself for choices you made in the past on the grounds that you made the “wrong choice.” You didn’t have an alternative! This should mitigate a lot of people’s guilt and recrimination, and you can always learn from your past mistakes, which might alter your behavior in a permanent way. (This is an environmental influence on your neural program: seeing what worked and what didn’t.)

In light of Burkeman’s paean to free-will skepticism, then, it’s very odd that he says the following at the end:

Those early-morning moments aside, I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life.

The deterministic case against contracausal free will is completely persuasive, and I think Burkeman agrees with that. So exactly what “case against free will” is he talking about? Is he adhering to compatibilism here? He doesn’t tell us. What, exactly, is at odds with what seems “obviously true about life”? After all, much of what “seems obviously true” is wrong, like the view that there’s an “agent”, a little person sitting in our head, that directs our actions. I would have appreciated a bit more about what, after doing a lot of research on the free-will controversy, Burkeman has really come to believe.

h/t: Pyers, David

Ross Douthat laments the “elite’s” loss of faith

April 11, 2021 • 9:45 am

The answer to Ross Douthat’s title question is, of course, “no”: the meritocracy, which I suppose one can define as either the rich or the educated, is increasingly giving up religion. And, if history be any guide, it’s unlikely to go back. His column is an elegy for the loss of religion among America’s elite, giving his reasons why it’s happening and his straw-grasping about how the meritocracy might come back to God. (Douthat is, of course, a staunch Catholic.)

Last year, even by Douthat’s admission, only 47% of Americans belonged to a church, mosque, or synagogue. Two years ago, in an article called “In U.S., decline of Christianity continues at rapid pace,” the Pew Research Center presented graphs showing that, as American Christianity has declined quickly, the proportion of “nones”—those who see themselves as agnostics, atheists, or holding “no religion in particular”—has grown apace. (Remember, this is over only a dozen years.)

The fall in religiosity has been faster among the younger than the older, among Democrats than among Republicans, and among those with more education rather than less.

Douthat calls these data “grim.” Here’s his worry:

A key piece of this weakness is religion’s extreme marginalization with the American intelligentsia — meaning not just would-be intellectuals but the wider elite-university-educated population, the meritocrats or “knowledge workers,” the “professional-managerial class.”

Most of these people — my people, by tribe and education — would be unlikely models of holiness in any dispensation, given their ambitions and their worldliness. But Jesus endorsed the wisdom of serpents as well as the innocence of doves, and religious communities no less than secular ones rely on talent and ambition. So the deep secularization of the meritocracy means that people who would once have become priests and ministers and rabbis become psychologists or social workers or professors, people who might once have run missions go to work for NGOs instead, and guilt-ridden moguls who might once have funded religious charities salve their consciences by starting secular foundations.

But this all sounds good to me! Isn’t it better to have more psychologists, social workers, and professors instead of more clerics? At least the secular workers are trained to do their jobs, and don’t have a brief to proselytize or to inculcate fairy tales in children.

But no, not to Douthat. Implicit in his column is the worry that without religion, America would be less moral. (He doesn’t state this outright, but absent that belief his column makes no sense. Unless, that is, he’s interested in saving souls for Jesus.)

As a Christian inhabitant of this world, I often try to imagine what it would take for the meritocracy to get religion. There are certain ways in which its conversion doesn’t seem unimaginable. A lot of progressive ideas about social justice still make more sense as part of a biblical framework, which among other things might temper the movement’s prosecutorial style with forgiveness and with hope. Meanwhile on the meritocracy’s rightward wing — meaning not-so-woke liberals and Silicon Valley libertarians — you can see people who might have been new atheists 15 years ago taking a somewhat more sympathetic look at the older religions, out of fear of the vacuum their decline has left.

You can also see the concern with morality in the two reasons Douthat proffers for why, he thinks, the elite are prevented from hurrying back to Jesus, Moses, or Muhammad:

One problem is that whatever its internal divisions, the American educated class is deeply committed to a moral vision that regards emancipated, self-directed choice as essential to human freedom and the good life. The tension between this worldview and the thou-shalt-not, death-of-self commandments of biblical religion can be bridged only with difficulty — especially because the American emphasis on authenticity makes it hard for people to simply live with certain hypocrisies and self-contradictions, or embrace a church that judges their self-affirming choices on any level, however distant or abstract.

Again, I’m baffled as to why Douthat sees religiously based morality, particularly of the Catholic variety, as superior to humanistic morality. After all, only religious “morality” prescribes how and with whom you can have sex, the supposed “role” of women as breeders and subservient partners, the demonization of gays, the criminality of abortion, the desirability of the death penalty, and the immorality of assisted dying. What kind of morality do you expect to get by following the dictates of a bunch of superstitious people from two millennia ago, people who had to posit an angry god to explain what they didn’t understand about the cosmos? You get the brand of religion that Douthat wants us all to have! For he sees religiously deontological morality as better than think-for-yourself morality: the “thou-shalt-not, death-of-self commandments of biblical religion.”

And it’s clear, as Douthat continues his risible lament for the loss of faith, that he sees no contradiction between rationality and superstition, though the conflict between them, and the increasing hegemony of science in explaining stuff previously within God’s bailiwick, is what is driving the educated to give up their faith:

A second obstacle [to the elite regaining faith] is the meritocracy’s anti-supernaturalism: The average Ivy League professor, management consultant or Google engineer is not necessarily a strict materialist, but they have all been trained in a kind of scientism, which regards strong religious belief as fundamentally anti-rational, miracles as superstition, the idea of a personal God as so much wishful thinking.

Thus when spiritual ideas creep back into elite culture, it’s often in the form of “wellness” or self-help disciplines, or in enthusiasms like astrology, where there’s always a certain deniability about whether you’re really invoking a spiritual reality, really committing to metaphysical belief.

There are two misconceptions in these two paragraphs. The first is that professors indoctrinate students with the belief that there is no God—that we are training them in atheism, materialism, and scientism. But we don’t do that: students give up God because, as they learn more, they grasp that, as Laplace supposedly replied to Napoleon, we “have no need of that hypothesis.” If there were actual evidence for miracles and a theistic god, people wouldn’t abandon their faith.

Further, although some of the “nones” are spiritual in the sense of embracing stuff like astrology or crystal therapy, I see no evidence that the embrace of woo has risen as sharply as religiosity has declined. The example of Scandinavia, which converted from religiosity to atheism in about 250 years, shows not only that religion isn’t needed to create a moral, caring society (indeed, it suggests that religion is inimical to one), but also that religion needn’t be replaced by other forms of woo. As far as I know, the Danes and Swedes aren’t fondling their crystals with alacrity.

Nothing will shake Douthat’s faith in God, nor his faith in faith as an essential part of society—in this he resembles his co-religionist Andrew Sullivan—but he does adhere to a form of intelligent design held by those sentient people who are still religious:

Yes, science has undercut some religious ideas once held with certainty. But our supposedly “disenchanted” world remains the kind of world that inspired religious belief in the first place: a miraculously ordered and lawbound system that generates conscious beings who can mysteriously unlock its secrets, who display godlike powers in miniature and also a strong demonic streak, and whose lives are constantly buffeted by hard-to-explain encounters and intimations of transcendence. To be dropped into such a world and not be persistently open to religious possibilities seems much more like prejudice than rationality.

I don’t seem to have had those hard-to-explain encounters or intimations of transcendence. I must be missing my sensus divinitatis! What Douthat takes as evidence for God, like the tendency of humans to be clever but sometimes nasty, can be understood by a combination of our evolutionary heritage and our cultural overlay. The same holds for “a system that generates conscious beings.” It’s evolution, Jake!

In the end, Douthat is as baffled by us secularists’ rejection of God as I am by his credulous acceptance of the supernatural as the only plausible explanation for the Universe:

And my anthropological understanding of my secular neighbors particularly fails when it comes to the indifference with which some of them respond to religious possibilities, or for that matter to mystical experiences they themselves have had.

Like Pascal contemplating his wager, it always seems to me that if you concede that religious questions are plausible you should concede that they are urgent, or that if you feel the supernatural brush you, your spiritual curiosity should be radically enhanced.

Well, as a scientist one must always give a degree of plausibility to any hypothesis, but when that degree is close to zero on the confidence scale, we need consider it no further. Based on the evidence, the notion of a god is as implausible as notions of fairies, leprechauns, or other such creatures.  And if the plausibility is close to zero, then so is the urgency.  And even if the questions are urgent, which I don’t believe since the world’s well being doesn’t depend on them, they are also unanswerable, making them even less urgent. Would Douthat care to tell me why he thinks the Catholic god is the right one rather than the pantheon of Hindu gods, including the elephant-headed Ganesha? Isn’t it urgent to decide which set of beliefs is right?

But maybe it’s because I never felt the supernatural brush me.

Amen.


h/t: Bruce

Another criterion for judging whether to “cancel” someone

September 15, 2020 • 11:00 am

Although I don’t spend a lot of time calling for people to be unpersoned, canceled, or to have their statues toppled or namesakes changed, I do try to discern whether calls for “cancellation” are justified or unwarranted. Clearly there’s no single criterion that will work all the time, so it usually comes down to a judgment call. In general, I tend to side with those who want history left as it is, though sometimes qualified, as with statues of Confederates famous for defending the South. (I favor “counterstatues” or explanatory plaques.) But in many cases, such as the Teddy Roosevelt statue at the American Museum of Natural History, I see no need for revision (see Greg’s post on that here).

My criteria so far have been twofold. First, is the statue celebrating something good a person did rather than something bad? So, for example, Hitler statues fail this test, though I hear that some Nazi statues are simply left to molder and degenerate rather than having been pulled down. Second, were the person’s contributions on balance good or bad? So, though Gandhi was a bit of a racist towards blacks as a barrister in South Africa, the net amount of good he did in bringing India into existence through nonviolent protest seems to me to heavily outweigh his earlier missteps. Gandhi statues should stay.

What if someone was bad on balance but did a good thing—should that person be celebrated? That would be a judgment call.

In general, I err on the side of preserving history, for statues and buildings are a mark of history—of times when our morality was different from what it is today. And it’s useful to be reminded of that rather than simply having our past, especially the bad bits, erased. History, after all, isn’t all beer and skittles. We don’t want a lot of Winston Smiths operating in our culture.

Now, in an article in Quillette, Steven Hales, described as “Professor and Chair of Philosophy of Bloomsburg University of Pennsylvania” and author of The Myth of Luck: Philosophy, Fate, and Fortune, has added another criterion, one that seems sensible to me. Well, it’s not really a criterion for determining who should be canceled, but a way to look at the supposed missteps of figures from the past. I have tweaked it to make it a criterion.


Hales analyzes morality as analogous to science. Science has improved over time in helping us understand nature, but we don’t denigrate scientists who made honest errors in the past. (Miscreant scientists, like Lysenko, are a different case.) Similarly, morality improves over time. (I don’t think there’s an objective morality, but surely the way we run society has allowed more people to flourish over the past few centuries.) To Hales, it makes as little sense to denigrate those who went along with the morality of their time as to denigrate scientists who accepted the “received wisdom” of theirs. As he says:

All of which to say, there is a vital difference between being wrong and being blameworthy. Einstein struggled to admit the fact of quantum entanglement, but that does not entail his blameworthiness as a scientist. In one clear sense, he was on the “wrong side” of quantum history, but that doesn’t necessarily merit demotion from the pantheon. Scientific praiseworthiness or blameworthiness is determined not by the standards of our times, but of theirs. While you can hardly blame Darwin for not knowing the unit of natural selection, you would certainly blame a modern biology undergraduate if she did not know about DNA. Nonetheless, it is Darwin who deserves our admiration and praise, even if today’s undergrad knows more than he did.

And so we should judge people by the “average” moral standard of the time, which I interpret as meaning that if someone wasn’t considered immoral in their own society, but had values and beliefs that were fairly standard, then we can’t fault them too much today, for people are products of their genes and environments.

Hales:

Anyone who thinks that right moral thinking is obvious, and is incredulous at the horrible beliefs of the past, is the unwitting heir to a philosophical fortune hard-earned by their forebears. The arc of the universe may bend towards justice, but it is a long arc. As with scientists, moral actors of the past also fall into the great, the average, and the bad. Our judgment of them shouldn’t be by the standards of our own times, but the standards of theirs. By the moral understanding of his day, Vlad the Impaler was still a monster. But should we say the same of St. Paul, who in his Letter to Philemon, returns Onesimus, a runaway slave, to his owner instead of providing the slave with safe harbor? While Paul’s letter includes a request for Christian mercy, he omits condemnation for the horror of slavery. Paul was no slave trader, but the moral views displayed here were typical for his time.

By these lights, Hume, who gave his approbation to a slaveholder, wasn’t the monster he seems to be today, for acceptance of slavery wasn’t seen as immoral back then the way it is now. Morality has evolved for the better. I think it’s misguided, then, to “cancel” Hume, as they’re trying to do at Edinburgh with a building name, because of one “misstep” in a life that was otherwise very useful and salubrious.

Now of course this criterion has its own problems, the most obvious being this: what was the “received” moral wisdom of the time? For example, Darwin, as an abolitionist, was not in the majority of Brits of his era. Should we expect people of Darwin’s era, then, to adhere to the “best” morality, or simply to an “average” morality—one that wouldn’t get its adherent labeled as immoral in his own society? Since there are always some angels in society, however rare, I’d go with the latter criterion.

This doesn’t solve all the issues, for of course the Nazis adhered to the average anti-Semitic morality of their times, and we don’t want people to put up statues to Nazis or label buildings “Goebbels Hall.” How do we judge an “average” morality? Morality among all humans on the planet in a time when people can read, learn and think, or the morality obtaining in one’s immediate surroundings? I have no answer.

Nor do I know how to combine Hales’s criterion with the ones I’ve held previously. All I know is that I have a mental algorithm about who should be canceled, and few people fall on the “yes” side, mainly those with no redeeming lives, acts, or thoughts. Nor should we laud people today for things that were once considered okay but are now seen as bad. Hume deserves to stay because he was not only a great man and a great philosopher but also nothing like the equivalent of a Nazi. Finally, I have no problem getting rid of art that shows things that really are universally offensive, like a mural showing a lynching in the South. Clearly, we will never get everyone to agree on these issues.

But as for Darwin, Gandhi, Jefferson, George Washington, and yes, with qualification, Robert E. Lee—let them stay. As they say, those who don’t remember history are doomed to repeat it.

Baptist leader tells us that God doesn’t want us to sacrifice the old

March 26, 2020 • 12:30 pm

Here we have the New York Times once again pandering to religion, publishing an article that says we should help save lives, including the lives of the elderly, not because of humanistic values, but because God says so.  The author, Russell Moore, is described as “the president of the Ethics and Religious Liberty Commission of the Southern Baptist Convention.”

Read and scowl:


Moore’s point, which many people have discussed without invoking religion or God, concerns whether we’re going to let people go back to work prematurely because the preservation of the economy (and other social values) is deemed more important than the lives that would be lost by an early end to the quarantine. Well, some tradeoff is inevitable: surely we’ll have to resume normal life before the world is entirely cleansed of Covid-19. A more important issue at the moment is how we give care to the young versus the old, or to people who are immunologically compromised, when care is limited.

We have only a certain number of ventilators, and if two people are competing for one, one 25 and the other 80, whom do you choose? Reason would suggest that you’ll create the most well being, on average, by saving the greatest number of years to come. And that would favor the younger over the older, and those likely to survive over those likely to die. That is the only humane decision, and you don’t need religion to make it (simple utilitarianism will do). Already, Italy is prioritizing Covid-19 care for those under 60, giving older people palliative care. When resources are limited, priorities must be set.
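To make that utilitarian arithmetic concrete, here is a minimal sketch of the “greatest number of years to come” rule. This is my own illustration, not anything from Moore’s article or a real triage protocol, and the survival probabilities and life expectancies below are invented assumptions:

```python
# A toy illustration of the utilitarian "save the most years to come" rule.
# All numbers are made up for the example; real triage protocols weigh far
# more factors than this.

def expected_years_saved(age, life_expectancy, p_survive_with_ventilator):
    """Expected life-years gained if this patient gets the ventilator."""
    return p_survive_with_ventilator * max(life_expectancy - age, 0)

patients = [
    {"label": "25-year-old", "age": 25, "life_expectancy": 80, "p_survive": 0.8},
    {"label": "80-year-old", "age": 80, "life_expectancy": 89, "p_survive": 0.3},
]

# Allocate the single ventilator to whoever gains the most expected life-years.
winner = max(
    patients,
    key=lambda p: expected_years_saved(p["age"], p["life_expectancy"], p["p_survive"]),
)
print(winner["label"])
# 25-year-old: 0.8 * (80 - 25) = 44.0 expected years saved
# 80-year-old: 0.3 * (89 - 80) =  2.7 expected years saved
```

On these made-up numbers the younger patient gains about 44 expected life-years and the older one fewer than three, which is all that the appeal to “reason” above amounts to: favor the young and the likely-to-survive.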

Of course Moore is correct that we shouldn’t—as Trump appears to want—blithely allow older people to die in the service of the Dow Jones Industrial Average, but such advice doesn’t require invoking God. So why does Moore stick the divine in?

For example:

A pandemic is no time to turn our eyes away from the sanctity of human life.

As opposed to other kinds of life?

We already are hearing talk about weighing the value of human life against the health of the nation’s economy and the strength of the stock market. It’s true that a depression would cause untold suffering for people around the world, hitting the poor the hardest. Still, each human life is more significant than a trillion-dollar gross national product. Stocks and bonds are important, yes, but human beings are created in the image of God.

There Moore is using the Bible as his source of ethics. Because humans (but not gorillas or ducks) are created in the image of God, we cannot automatically prioritize the economy and the fabric of society over people’s lives. But you don’t need the Bible for that. Try John Rawls, or Peter Singer (both atheists). And don’t forget that giving human life the highest priority over everything, including suffering, leads to spending millions of dollars to keep people in vegetative states alive, or to disallowing assisted suicide.

It goes on:

We must also reject suggestions that it makes sense to prioritize the care of those who are young and healthy over those who are elderly or have disabilities. Such considerations turn human lives into checkmarks on a page rather than the sacred mystery they are. When we entertain these ideas, something of our very humanity is lost.

Nope. Who gets the ventilator, the 25-year-old or the 80-year-old? Do we lose our humanity when we have to make such a choice? I don’t think so: we exercise our humanity.

But wait! There’s more!

. . .Vulnerability is not a diminishment of the human experience, but is part of that experience. Those of us in the Christian tradition believe that God molded us from dust and breathed into us the breath of life. Moreover, we bear witness that every human life is fragile. We are, all of us, creatures and not gods. We are in need of air and water and one another.

A generation ago, the essayist and novelist Wendell Berry told us that the great challenge of our time would be whether we would see life as a machine or as a miracle. The same is true now. The value of a human life is not determined on a balance sheet. We cannot coldly make decisions as to how many people we are willing to lose since “we are all going to die of something.”

You don’t need to see life as a miracle to come to ethical decisions about triage or ending pandemics. You need only weigh well being against the other things we value. After all, there are thousands of deaths every year due to car accidents, falls in the bathtub, accidental discharges of firearms, and so on. In 2000, 17,000 people committed suicide with a firearm. Many people (though not I!) would say that the value of firearms outweighs that of the lives lost to them, and that the value of cars outweighs the loss of the 15,000 or so people killed in vehicular accidents every year. We make these decisions all the time, weighing known loss of life against social goods. I don’t happen to think that we need guns, but I do think we need vehicles, despite Moore’s claim that every life is a sacred miracle. And during this pandemic, as we’ve seen from Italy, you simply can’t treat everyone the same way. Does Moore think so? (He doesn’t say, but that’s the implication.)

It angers me that Moore claims God and the Bible as his arbiter of moral behavior when humanistic values lead to exactly the same conclusions he reaches:

That means we must listen to medical experts, and do everything possible to avoid the catastrophe we see right now in Italy and elsewhere. We must get back to work, get the economy back on its feet, but we can only do that when doing so will not kill the vulnerable and overwhelm our hospitals, our doctors, our nurses, and our communities.

Duhhh! (But I note that the Italian form of triage is in effect “killing the vulnerable”, though through inaction rather than direct action. The result is the same.)

Truly, I can see nothing in his article that a humanistic atheist like Peter Singer couldn’t have written, and without invoking the false idea that we’re made in the image of God. (How does that matter, anyway? God, who made us in His image, saw fit to commit repeated genocides in the Old Testament, and that selfsame God allowed coronavirus to spread over the globe and kill tens of thousands.) The “image of God” idea grates on anyone who thinks we evolved, and on those who believe we can derive our ethics (better ethics, actually) without consulting a nonexistent being in the sky. So I could have written this last paragraph—except for the final seven words:

And along the way we must guard our consciences. We cannot pass by on the side of the road when the elderly, the disabled, the poor, and the vulnerable are in peril before our eyes. We want to hear the sound of cash registers again, but we cannot afford to hear them over the cries of those made in the image of God.

Why was this published?

More mishigas about free will, this time in the TLS

August 12, 2019 • 9:45 am

The Times Literary Supplement, which I used to write for, doesn’t often make its articles free online, but this one is. It’s about free will: a review of three books on the topic (The Limits of Free Will: Selected essays by Paul Russell; Aspects of Agency: Decisions, abilities, explanations, and free will by Alfred R. Mele; and Self-Determination: The ethics of action – Volume One by Thomas Pink). The reviewer, Jenann Ismael, is a professor at Columbia University specializing in, as her website reports, “Philosophy of Physics; Philosophy of Science; Philosophy of Mind; Epistemology; Metaphysics, with interests (and some expertise) in Cognitive Science, Philosophy of Literature, and Existentialism.” That’s a lot of expertise!

As is common in many book reviews, and in most of the good ones, the books themselves play a secondary role to the author’s ideas about the subject. The thing is, I’m not sure what the author’s ideas are, as she goes back and forth between hard determinism and “freedom”, trying, I guess, to forge some compatibilist view that gives us free will. It has something to do with “moral responsibility”, too; but the rather flabby article would have benefited from tighter writing and better editing.

Ismael starts out admitting that the laws of physics make it certain that we could not have acted other than we did. She even goes so far as to claim that quantum indeterminacy would not affect her claim that everything we do was determined from the moment of the Big Bang. I don’t accept that, for I’m pretty sure (though I can’t prove it) that quantum indeterminacy has made today’s actions fundamentally unpredictable, even if we knew the position of every particle in the Universe just after the Big Bang. But since Ismael asserts that quantum mechanics and quantum field theory are not truly deterministic, I’m not sure how she can claim that a rerun of the Big Bang would produce exactly the same results, right down to our choice of food the last time we went to a restaurant (or even whether there would be restaurants!).

So be it. I’ll buy it since it’s irrelevant to her argument. For as Ismael admits, even quantum mechanics gives us no agency. In one of her better paragraphs, she says this:

Considering quantum mechanics helps us focus on the kind of control that seems essential to human freedom. We don’t want our actions to be controlled by the initial conditions of the universe, and we don’t want them to be controlled by random sub-microscopic events in the brain either. We want to control our own actions ourselves, and we think we do. We want to get ourselves into the causal chain. And we want our decisions to come from us.

But for her the important issue is that although determinism is true, and we couldn’t have chosen otherwise, it doesn’t square with our experience of agency:

This problem [of free will] has been around for millennia, but physics gives it a precise formulation and a concrete setting. It’s a beautiful problem because it brings physics into contact with issues of central human concern and forces us to think hard, in concrete detail, about what a scientific view of the world really entails about ourselves. The problem confronts us with a vision of human action that appears to be irreconcilable with the way we experience the world.

Well, lots of our experience is at odds with what science tells us. We experience a chair as a solid surface, yet most of it is empty space. And physics tells us that our experience of solidity is illusory, but also why we have that experience. In the case of free will, the so-called disconnect between our experience of agency and the reality of determinism may rest on evolution’s having instilled into our ancestors a sense of you-can-do-otherwise agency. It may have been illusory, but it may also have been adaptive. I can think of several reasons why selection would favor that cognitive illusion, but I won’t go into them here.

And there the issue should rest, but Ismael still can’t seem to reconcile our experience of agency with the reality of determinism. This, she says, tells us something important:

To most people, however, it seems literally unbelievable that the scales of fate don’t hang in the balance when making a difficult decision. And it is not just those dark nights of the soul where this matters. You think that you could cross the street here or there, pick these socks or those, go to bed at a reasonable hour or stay up, howl at the moon and eat donuts till dawn. Every choice is a juncture in history and it is up to you to determine which way to go.

Yet, if there is one foundational scientific fact, it is that things can’t happen that the laws of physics don’t allow. And the clash between these two things shows that there is something centrally important about ourselves and our position in the cosmos that we don’t understand.

Apparently—though in a way that she doesn’t make clear—the “centrally important” thing is our sense of moral responsibility—a sense that Ismael thinks is important to preserve. Again, I’d punt to evolution here, and simply say that “morality” is the word we use to describe the dos and don’ts of behavior instilled in us by both evolution and culture. Some animals have it, though not to the degree that we do, but a sense of “right and wrong” is not absolutely unique to humans. Still, the issue appears to keep Dr. Ismael awake at night.

She then describes in detail the murder of the Clutter family in Kansas in 1959—a story well known to those who have read Capote’s In Cold Blood. Surely Perry Smith and Richard Hickock, the murderers, were morally responsible for that horrific crime, no?

As I’ve said many times, I don’t think adding the word “morally” to the word “responsible” adds anything. In fact, it’s misleading, for to most people, if moral responsibility means anything it means that you could have done other than what you chose to do. I prefer to simply use the word “responsible”. Or, if you insist, “responsible for violating the social norms considered part of ‘morality’.” To me, the term “moral responsibility” is heavily freighted with libertarian free will, and should be, if not abandoned, heavily qualified, as I’ve just done.  It is this feeling of moral responsibility that Ismael appears to find problematic in light of determinism:

It is the question of moral responsibility that transforms the problem from the relatively shallow one of reconciling the rigid necessity of physics with the felt spontaneity of action into one that engages with deep human questions about what we are, both as individuals and as a species. It also moves the question outside of the simple setting of physics. The question “what am I? And how do I fit into the universe?” is one of the oldest in philosophy. Linking the question to moral responsibility gives us more traction because it forces us to think about what makes another human being an appropriate target for moral emotions like praise and blame, not to mention love, admiration, anger and contempt. Science won’t answer these questions, but it provides us with the right setting in which to address them, if we do not want to rely on magical thinking.

Well, I think science could at least give us a purchase on these questions. Why do we even have notions of morality?  Do most people really think that being morally responsible means that, at the moment of your decision, you could have chosen to do something other than what you did? I don’t think philosophy has much to add to this; in fact, I think philosophy has actually muddled thinking about free will by dragging in the inevitable compromise of compatibilism: the “little people” notion that we must have some notion of free will, despite physics, because without it society will fall apart. (They used to say the same thing about ideas of God.) Philosophers can’t even agree on what compatibilistic free will is!

And so, at the end, Ismael proposes (though not explicitly) her own idea of compatibilist free will:

We are shaped by our native dispositions and endowments, but we do make choices, and our choices come from us to the extent that they are expressions of our hopes and dreams, values and priorities. These are things actively distilled out of a history of personal experience, and they make us who we are. Freedom is not a grandiose metaphysical ability to subvene the laws of physics. It is the day-to-day business of making choices: choosing the country over the city, children over career, jazz over opera, choosing an occasional lie over a hurtful truth, hard work over leisure. It is choosing that friend, this hairstyle, maybe tiramisu over a tight physique, and pleasure over achievement. It is all of the little formative decisions that when all is said and done, make our lives our own creations.

This is freedom? Where is the freedom? I’ve scrutinized this paragraph over and over, and I can find no “freedom” in it. What I see instead is (as is common for compatibilists) a redefinition of “freedom” in which there are no degrees of freedom, no scope to do otherwise. For Ismael, your predetermined choices are called “free” because they are your choices, stemming from your personal experience (which is determined) and your genetic endowment (which is also determined).

It takes a special kind of slippery philosophy to engage in such rhetoric. And, in fact, virtually every sentient organism has this kind of free will, including microbes, whose lives are also their own creations.

Truly, the idea that we have free will because our choices are the result of our unique combination of genes and environments mystifies me. After all, that same combination is what makes our choices predetermined. What we see here is a kind of Orwellian doublespeak: “DETERMINISM IS FREEDOM.”


h/t: Michael

My newest piece in Quillette: Another response to John Staddon

May 11, 2019 • 10:30 am

My contretemps in the pages of Quillette continues with the psychobiologist John Staddon. I hope this is the end of it, as it’s no fun to repeat what I’ve written many times before in order to criticize a man who keeps recycling old and tedious arguments that have long since been rebutted. But so great is Staddon’s animus against atheism that he simply can’t learn.

As you may recall, Staddon originally wrote a piece in Quillette called “Is secular humanism a religion?” His answer was “yes,” even though his own concept of religion didn’t fit secular humanism in two of its three defining characteristics. But his main point was that secular humanism is religious because it has a morality—a morality that, as a conservative, he considered odious. (One of the supposedly repugnant aspects of secular morality was gay marriage.) He also argued that, like religion, secular humanism has “blasphemy rules,” like the criticism of those who wear blackface. That’s what’s known as “straining to support your argument”, and it causes mental hernias.

Well, I couldn’t let his piece stand, and so wrote a substantial reply, “Secular humanism is not a religion.” I won’t reiterate it here, as you can read it at the link or read about it on my website (here and here).

Staddon was apparently peeved that I didn’t swallow his half-digested pabulum, and so wrote a response to me called “Values, even secular ones, depend on faith: A reply to Jerry Coyne” (you can read my note about it here, which didn’t give a rebuttal because I knew I’d write one for Quillette). In this response, without admitting it, he retracts his original claim that secular humanism is a religion. He first argues that he didn’t choose the title (and that may have been true), but neglects to add that the very first sentence of his first piece, a sentence that he surely wrote himself, was this:

It is now a rather old story: secular humanism is a religion.

Oh well, let the readers be deceived. But he went on to claim that well, maybe secular humanism and its morality really isn’t religious, but they do have religious aspects: they’re based on faith. As Staddon said,

My argument is simple: religions have three characteristics: spiritual, mythical/historical, and moral. Secular humanism lacks the first two and is often quite critical of these aspects of religion. But they are largely irrelevant to politics. Hence the truth or falsity of religious myths is also irrelevant, as are Coyne’s disproofs of the existence of God. The fact that religious morals are derived from religious stories—myths in Mr. Coyne’s book—does not make them any more dismissible than Mr. Coyne’s morals, which are connected to nothing at all. In his own agnostic terms, all are matters of faith.

I couldn’t let that stand, either, as “faith” means something very different in secular humanist ethics than in religious ethics. And the claim that secular morality is based on “nothing at all” is completely stupid.

I explain the difference between these construals of “faith” in my article, while noting that, at bottom, any ethical system is based on “preferences.” In religion the preference is for following the dictates of your particular sect, while in humanism it’s usually for the kind of world you’d like to see and inhabit. There can be no claim that this or that morality is “objective and scientific,” as all moralities are grounded in preferences. (Some differ from me: Sam Harris and Derek Parfit, for instance, think that we can construct a perfectly objective morality.)

Nevertheless, secular morality can be based on a rational and coherent set of principles (I give one example in my piece), can be informed by science, and can change along with changing mores. (When religious morality changes, the change comes not from theology but from shifts in secular morality that then force changes in theology. The Euthyphro Dilemma applies here.)

But I’m getting ahead of myself. You can read my response by clicking on the screenshot below.  And thanks to Rebecca Goldstein for discussing the issues with me; one can have no better critic.

As I found before, the commenters on my piece, already active, are disappointingly unthoughtful.

Once again, John Staddon maintains that religious morality is superior to secular morality

May 5, 2019 • 9:00 am

John Staddon and I have been having “words” in Quillette. It began with Staddon’s piece “Is Secular Humanism a Religion?”, a question he answered in the affirmative, even though secular humanism violated two of Staddon’s three defining traits of religion. I responded first here and then in a rebuttal in Quillette, “Secular Humanism is not a religion.”

Now Staddon has written a short reply to my critique, also in Quillette. See below; you can access it by clicking on the screenshot.

First, Staddon denies that he ever claimed that secular humanism is a religion. That’s just not true, as you can see not just from his original title (which, he claims, was “misleading” and was chosen by Quillette), but also from the very first sentence of his article: “It is now a rather old story: secular humanism is a religion.”  Apparently the man cannot read his own piece! Or perhaps he reads it like he reads his Bible, picking and choosing the parts that support his argument while ignoring the rest.

But leave that aside, for in his new piece Staddon wants to emphasize the main point of his first piece: that, like religious ethics, secular ethics are based on faith and cannot be “proved”:

. . . in no case are secular commandments derivable from reason. Like religious “oughts” they are also matters of faith. Secular morals are as unprovable as the morals of religion.

In fact, Staddon sees religious morals as superior to secular ones because they rest on religious stories, stories that he admits are myths. But at least religious morals rest on something. Secular ethics, so he claims, are based on nothing:

My argument is simple: religions have three characteristics: spiritual, mythical/historical, and moral. Secular humanism lacks the first two and is often quite critical of these aspects of religion. But they are largely irrelevant to politics. Hence the truth or falsity of religious myths is also irrelevant, as are Coyne’s disproofs of the existence of God. The fact that religious morals are derived from religious stories—myths in Mr. Coyne’s book—does not make them any more dismissible than Mr. Coyne’s morals, which are connected to nothing at all. In his own agnostic terms, all are matters of faith.

. . . In other words, in all the ways that matter for action, secularists and religious believers do not differ.

I’m not going to give my counterarguments here, as I’m putting them in a short piece in Quillette, but I’ll let the readers have the pleasure of arguing whether secular ethics are indeed based on the same kind of faith as religion, and whether secular ethics “are connected to nothing at all.” I will show, as briefly as I can, that they are not.

Have at it.