Attack of the Lilliputians: Casey Luskin and Michael Egnor put misleading words and sentiments about free will in my mouth

July 14, 2022 • 1:30 pm

Why would two members of the ID creationist Discovery Institute keep attacking me for rejecting libertarian free will? After all, that issue has very little to do with evolution. But they keep trying to land blows, for the real object of the Discovery Institute goes way beyond the promotion of ID creationism in schools. Their goal is the elimination of materialism and naturalism as the basis of Enlightenment values. (Read about the Wedge Strategy.) They’re upset at me because I hold views that neither require nor involve a God—and determinism (I’d call it “naturalism”) is one of them. If we don’t have spooky free will, and, as I claim, all our behaviors and decisions occur according to the laws of physics, then you can’t “choose” whether to be good or evil, and choice of that sort is essential for the Abrahamic religions to function.

I’m not going to waste my time rebutting these clowns at length, as I’d simply have to reiterate what I’ve said many times on this site, and since they clearly either don’t know, don’t understand, or deliberately ignore what I’ve said, I just want to point out their article (one of several) to underscore a.) the mental thickness of the protagonists, b.) the religiosity of the protagonists, c.) the real reason the Discovery Institute operates, and d.) Egnor’s eternal desire to get attention by engaging in a dialogue with me. But they’re not going to get their wish on the last one, as I’m just going to show you what they say and let you, the reader here, figure out how I’ve already rebutted it.

Click to read—or hear, as there’s a link to a podcast:

Here are some of their assertions. Now imagine that you were Professor Ceiling Cat (Emeritus). How would you respond?

Quotes are indented; I may be forced by the laws of physics to make a few remarks:

Luskin:

. . . These arguments have, of course, popped up in the legal system where the famous Darwin-defending lawyer, Clarence Darrow, the famous case back in the 1920s of the two boys who killed somebody just for fun. He argued in court that, “Hey, you can’t blame these boys for this sport killing that they undertook. They were just acting upon what their genes, and maybe their environment, forced them to do.” And he really argued that there is no free will. … Does Jerry Coyne have the right to condemn the Nazis if he denies free will?

Michael Egnor: The fact that Coyne’s denial of free will leaves him incapable of coherently accusing the Nazis of moral evil is enough to discard his denial of free will. That is, it is such a bizarre viewpoint that the Holocaust was not a moral evil — because there are no moral evils — that it really puts the denial of free will almost into a category of delusion.

Darrow wasn’t trying to free Leopold and Loeb: they had already pleaded guilty. He was trying to spare them the death penalty. But yes, Darrow was a “determinist”. But there are a gazillion reasons why a determinist like me would condemn the Nazis. And of course I do.

Michael Egnor: The fact is, we all know that it was horrendously evil. We all know that evil things really happen and that they really are evil. And if there is real evil, just as if there’s real good, then free will must exist. Because if we’re all just determined chemical bags, meat robots, there is no good or evil — we’re simply acting out our chemistry.

And of course, Coyne’s response to this has been that, although he believes that things such as the Holocaust were not morally evil — because there is no such thing as moral evil — he certainly believes that they weren’t… salubrious, is the term he uses. Which means that they didn’t work for the common good and should be condemned on that basis.

If you consider “morality” to be a subjective set of guidelines about what things are good and bad for society or individuals, as I do, then yes, the Nazis were immoral. However, I prefer to avoid the term “moral responsibility”, which presumes, as Luskin and Egnor believe, that people always have a choice between acting morally or immorally at any given moment. They don’t. I prefer the word “responsibility,” which means “the person did it; caused it to happen.” And you can be responsible in ways that mandate punishment, including imprisonment. “Moral responsibility” adds nothing to “responsibility” construed in this way.

Casey Luskin: I think that the Nazis probably believed that what they were doing was for the “common good.” So how do you define common good? On what basis do you condemn something if somebody believes what they’re doing is for the common good?

Of course, Dr. Egnor, all of this flows out of Jerry Coyne’s scientism. If you can’t scientifically prove that something is good or evil, then scientism dictates he can’t condemn it as good or evil. Obviously we have ways of determining whether things are good or evil that go beyond science. Jerry Coyne has to reject those ways of knowing because of his scientism.

Well, morality is a rather subjective set of beliefs, but one can bring empirical evidence to bear on some questions of morality, depending on which version you adhere to. If you’re a consequentialist, as I am, then one might argue that the death penalty is “immoral” because it has net bad consequences compared to good ones. And I’m pretty sure that one could show that society (and its constituents) would be better off if murder remained illegal and was considered immoral (there are plenty of downsides and almost no upsides). But of course some people use other criteria besides utilitarian ones, like the Rawlsian “veil of ignorance” or even Divine Command theory. In most cases, it is preference that dictates what people see as moral or immoral, and preferences differ. And these preferences—your basis for morality—cannot be subjected to scientific test.

Egnor and Luskin, of course, think that good and evil are things that comport with what God wants or does not want. And if they cannot prove there’s a God, which they can’t, then they’re on even shakier ground than I!

One more before I grow ill:

Michael Egnor: Well, one of the points about Coyne’s denial of free will that I find in some ways the most frightening is that Coyne has suggested in several of his posts that, because he believes that there is no actual free will, we should change our approach to criminal justice — so that the approach to criminal justice does not entail retribution, but instead entails correction. That basically sort of like training animals. You’d want to train people to do better.
Of course, how one could define “better” in a world with no moral good or evil is a question Coyne doesn’t address.

But what is genuinely frightening about applying Coyne’s determinism and denial of free will to our society is that the most important consequence of the denial of free will is not that there therefore is no guilt. The most important consequence is that there is no innocence. It encourages an approach to law enforcement that deals with people based on predictions of what they might do.

We ARE animals, and can be influenced by environmental circumstances—like jail. Sadly, our criminal justice system is, by and large, not set up to reform people, but to punish them. It’s also set up to deter others and to keep bad people out of circulation. And yes, there is “guilt”: it means “you did something deemed a crime.”

What a pair of morons! It’s even worse, though, if they know how I’d respond to these things but have distorted my views anyway to convince people that a secular Jewish evolutionist is, yes, EVIL.

I see I’ve offered some rebuttal after all!  But I couldn’t help it: it’s those damn laws of physics!


Peter Singer’s contrarian view on the Dobbs decision

July 4, 2022 • 10:20 am

Peter Singer, my favorite ethical philosopher and somewhat of a role model, has published a provocative article at Project Syndicate that has made me rethink the Dobbs decision that overturned Roe v. Wade. While I absolutely supported Roe v. Wade, and in fact would extend the two-trimester guidelines for legal abortion, I didn’t really see the “right to abortion” enshrined in the Constitution. Sure, you could slot it into the “right to privacy”, but that’s stretching it. And that does differ from the supposed “right to own guns”, as the Second Amendment specifies under what conditions people can own guns: for a militia, not to carry them into a bar in Colorado.

The Supreme Court’s current brief is to rule on whether a law is constitutional, not to make new law. And if you take that view, then the Dobbs decision was correct, as it in effect affirmed that states could ban abortion, for the right to make such laws was not a subject of the Constitution. Ergo, Roe v. Wade, which affirmed such a right, wasn’t decided properly.

Of course the Court’s ruling was also tempered by the strong Catholic beliefs of most justices, so it was largely a religious decision as well. But given that I am strongly pro-choice, what do I do? After thinking about it, I’m pondering the solution offered by Singer in this piece: let the democratic process, whether it be on the federal or state level, decide issues that aren’t addressed by the Constitution.

Click to read:

Singer:

Every woman should have the legal right safely to terminate a pregnancy that she does not wish to continue, at least until the very late stage of pregnancy when the fetus may be sufficiently developed to feel pain. That has been my firm view since I began thinking about the topic as an undergraduate in the 1960s. None of the extensive reading, writing, and debating I have subsequently done on the topic has given me sufficient reason to change my mind.

Yet I find it hard to disagree with the central line of reasoning of the majority of the US Supreme Court in Dobbs v. Jackson Women’s Health Organization, the decision overturning Roe v. Wade, the landmark 1973 case that established a constitutional right to abortion. This reasoning begins with the indisputable fact that the US Constitution makes no reference to abortion, and the possibly disputable, but still very reasonable, claim that the right to abortion is also not implicit in any constitutional provision, including the due process clause of the Fourteenth Amendment.

The reasoning behind the decision in Roe to remove from state legislatures the power to prohibit abortion was clearly on shaky ground. Justice Byron White was right: The Roe majority’s ruling, he wrote in his dissenting opinion in the case, was the “exercise of raw judicial power.”

Singer continues:

The Supreme Court exercised that power in a way that gave US women a legal right that they should have. Roe spared millions of women the distress of carrying to term and giving birth to a child whom they did not want to carry to term or give birth to. It dramatically reduced the number of deaths and injuries occurring at that time, when there were no drugs that reliably and safely induced abortion. Desperate women who were unable to get a safe, legal abortion from properly trained medical professionals would try to do it themselves, or go to back-alley abortionists, all too often with serious, and sometimes fatal, consequences.

None of that, however, resolves the larger question: do we want courts or legislatures to make such decisions? Here I agree with Justice Samuel Alito, who, writing for the majority in Dobbs, approvingly quotes Justice Antonin Scalia’s view that: “The permissibility of abortion, and the limitations upon it, are to be resolved like most important questions in our democracy: by citizens trying to persuade one another and then voting.”

Now Singer points out the irony of the Court overturning Roe right after it affirmed, on Constitutional grounds, the right of citizens of New York to carry handguns, a right that isn’t really in the Constitution unless you stretch the Second Amendment like a Slinky.

I know what you’re thinking: “But if the states vote, I won’t get the laws I want: we’ll have a lot of states that ban abortion.” And that may be true, but if such things aren’t specified in the Constitution, then it’s either up to Congress or the states to decide the issue, not the Supreme Court. The Congress might just squeak through a national pro-choice law some day (not in the near future, sadly), but until then we should not let the Supreme Court strike down democratically enacted legislation. This is something Singer points out in his piece (my bolding):

There is an even more radical implication of the view that courts should not assume powers that are not specified in the Constitution: the Supreme Court’s power to strike down legislation is not in the Constitution. Not until 1803, fifteen years after the ratification of the Constitution, did Chief Justice John Marshall, in Marbury v. Madison, unilaterally assert that the Court can determine the constitutionality of legislation and of actions taken by the executive branch. If the exercise of raw judicial power is a sin, then Marshall’s arrogation to the court of the authority to strike down legislation is the Supreme Court’s original sin. Marbury utterly transformed the Bill of Rights. An aspirational statement of principles became a legal document, a role for which the vagueness of its language makes it plainly unsuited.

So whence does the Supreme Court derive its ability to overturn legislation not in the Constitution? It’s not in the Constitution itself, but is an assertion of one Justice in 1803. I’m taking Singer’s word for this, but I assume some readers will know this history.

Apparently, though Singer is not clear on this, laws that are clearly against what is specified in the Constitution can properly be struck down, for otherwise we’re left with conflicting legal assertions.

And now you’re probably asking yourself, as I did, “Well, if the court doesn’t rule on whether hazy laws are Constitutional, then what should it be doing?” That’s a good question, and Singer’s answer isn’t totally satisfying. For if the Supreme Court (or apparently any court) can’t rule on whether every law adheres to the federal Constitution, can state courts rule on whether hazy state laws are constitutional? I suppose that depends on whether state judges are elected or appointed. If the former, then their rulings are part of the democratic process; if the latter, then they have no business making such rulings (see below).

Singer’s Big Solution:

Supreme Court decisions cannot easily be reversed, even if it becomes clear that their consequences are overwhelmingly negative. Striking down the decisions of legislatures on controversial issues like abortion and gun control politicizes the courts, and leads presidents to focus on appointing judges who may not be the best legal minds, but who will support a particular stance on abortion, guns, or other hot-button issues.

The lesson to draw from the Court’s decisions on abortion, campaign finances, and gun control is this: Don’t allow unelected judges to do more than enforce the essential requirements of the democratic process. Around the world, democratic legislatures have enacted laws on abortion that are as liberal, or more so, than the US had before the reversal of Roe v. Wade. It should come as no surprise that these democracies also have far better laws on campaign financing and gun control than the US has now.

The part in bold, which is my emphasis, is not entirely clear, and that is Singer’s fault. What does he mean by “enforce the essential requirements of the democratic process”? Couldn’t he list some appropriate actions? Does he mean that they can adjudicate laws that may not have been passed democratically, or laws that lower courts mistakenly construed? I’m pretty sure he means at least that “the Supreme Court should not determine the Constitutionality of laws to which the Constitution does not apply.” For Supreme Court justices, being appointed and not elected, shouldn’t be doing what they’re doing. (I can just imagine what the Supreme Court would look like if its judges were elected!)

This of course would radically overhaul the entire court system in the U.S., and not just the federal courts. I’m just throwing this out there to see what readers think. Most of us are pro-choice and are angry as hell that the Supreme Court decided that Roe v. Wade didn’t really rest on a constitutional “right to privacy.” But remember that courts are political, and the Supreme Court in particular can willy-nilly rule on rights when the court itself isn’t accountable to the voters.

Tish Harrison Warren on why the best morality rests on the words and deeds of Jesus

June 20, 2022 • 12:30 pm

The weekly New York Times lucubrations of Anglican priest Tish Harrison Warren are anodyne and sometimes off-putting, yet I cannot resist reading them—for the same reason that you smell the milk when you know it’s gone bad. This week, Warren interviews Rachael Denhollander, the first gymnast to publicly accuse team doctor Larry Nassar of sexual abuse. Denhollander is also going after the Southern Baptist Convention because, it turns out, they’re as bad as the Catholic Church regarding sexual abuse by preachers.

Denhollander has done and is doing good stuff; my objection is that she seems to recognize sexual predation and its immorality only because Jesus says it’s bad (though I’m not sure he even deals with that issue in the Bible). Warren and Denhollander seem to agree, in the end, that we must draw our morality from God because there’s something wrong with secular-based morality.

First, a small nit to pick. Warren’s questions are in bold; Denhollander’s responses in plain type:

Some brass tacks related to churches generally: If there is an abuse allegation in a church, what is the right response?

I think there are really two important parts to that question.

There is the policy question: On a very practical level, what am I to do? You need to report that allegation to the police if it is child abuse. As soon as the police have been notified and the alleged perpetrator knows that the police have been notified, you need to notify the church and protect the identity of the survivors.

One beef: in the U.S. we’re presumed innocent until we’ve been convicted by a judge or jury. (That’s why Denhollander says “alleged perpetrator”.) But she then goes on to mention the “survivors”, who really can’t be counted as “survivors” of a crime that hasn’t yet been established. Using the very word “survivors” assumes that the people who are bringing charges in fact were victims of a crime. Sometimes that’s obvious, but sometimes it’s not, as the existence of a crime can rest solely on allegations.

But we needn’t dwell on that, for the main point comes at the end—about sources of morality.

You have been working alongside survivors in church settings for many years now. Why do you stay in the church with all the evil that you see there?

How do I know that the authority I’m seeing isn’t a good use of authority? How do I know that sexual abuse really is wicked and it ought to be treated that way? You can’t know a line is crooked unless you have some idea of a straight line. That is a paraphrase of a quote by C.S. Lewis, and it has really been a linchpin for me.

The reason I remain a Christian is because my faith is what allows me to say that what I’m watching right now is broken. These institutions and these responses to survivors aren’t right. And I know they’re not right because I have a perfect picture of what these things are supposed to be.

And so my allegiance is not to a church. My allegiance is not to a denomination. It’s not to a country. It’s not to a convention. My allegiance is to Christ. And when I look at my faith and when I look at the principles of Scripture, it gives me the ability to look at what’s happening and say, “This is not right,” and I know it’s not right because there really is a moral lawgiver, and there really is absolute truth. Because every other belief system outside of God leaves us essentially dependent on societal and cultural response to define right and wrong.

There are several things to “unpack” here, one being Denhollander’s claim that she’s a Christian because “her faith allows her to say what she’s watching is broken.” First of all, that’s just not true. Sexual abuse by clerics looks broken because it’s immoral by any standards: the use of one’s authority as a basis for sexual assault.  Do you need Christianity to see that? After all, the whole world (except for the Church itself) was horrified when the scandals of Catholic sexual abuse became public. You don’t have to be a Christian to see what’s “broken”!

Second, if Denhollander had been a Christian several centuries ago, her faith would have told her that it was right to torture and burn heretics, to engage in all kinds of acts that we’d find immoral today (using the Bible to condone slavery, for example), and perhaps to ban books.

What has changed? Not Jesus or his words, but the secular world, whose morality evolves as Christian morality scurries behind to keep up. This alone shows the verity of Socrates’s Euthyphro Argument: we don’t think something is right or wrong simply because God (or Jesus) tells us that it’s right or wrong, but because we’re using a social or secular morality to which our idea of God conforms.

An example of this is God telling Abraham to kill Isaac. Abraham, who apparently conformed to “divine command theory,” was about to do in his son, just because God said so. Most rational people find this horrible; they’d say “God wouldn’t order that” because he’s a good God. But God did order that, and our revulsion comes from the conflict between secular and “God-based” morality.

Religious “morality” changes from year to year not because we understand God’s or Jesus’s will better—the Bible is still the same—but because we interpret theology in each era in a way that comports with our present morality.

Yes, Denhollander says that her allegiance is not to the law, or to a secular code of morality, but to Jesus, for the words of Jesus will show you what’s right and what’s wrong. This is the same Jesus who tacitly approved of slavery and told his followers to neglect their home and family and follow him. Of course nobody thinks that’s right any more.

The last sentence is assertive, but its thesis is dumb:

Because every other belief system outside of God leaves us essentially dependent on societal and cultural response to define right and wrong.

And what, exactly, is wrong with that? Should morality be absolutely constant as mores and facts change? With Jesus you get the former; with secular morality, the latter. I know which one I prefer.

Phil Zuckerman on the advantages of secular morality

February 1, 2022 • 1:30 pm

“The question is not how can you be moral if you don’t believe in God, but how can you be moral if you believe in God.”  (Phil Zuckerman, below).

The most common criticism religionists make of atheists is embodied in the first part of the quote above, a quote from Phil Zuckerman in a speech he gave at the recent Freedom From Religion Foundation meeting.  The notion that atheism destroys morality has been dismantled several times, most recently in an exchange between Diane Morgan and Ricky Gervais in the terrific show “After Life.” I’ll let you listen for yourself: it’s in Season 3. And here Zuckerman does it not philosophically, but with data (or rather, assertions about data we don’t see).

As you may know, Zuckerman is a professor of secular studies at Pitzer College in California, and was the first person to become a full-time professor in that area. Here’s a list of his books, of which I’ve read just one: the 2008 one, which shows how well two atheist countries, Sweden and Denmark, function without religion. (You can now add Iceland to that list.) It was that book that convinced me that there is no innate need for societies to be religious to function well. As Zuckerman remarks in his talk, and argues at length in Society without God, Scandinavia has some of the most “moral” countries on earth, yet they’re a pack of atheists. Moreover, Scandinavians have nothing I can see to “replace” religion: no “secular churches” or any of that nonsense. Yet religionists ignore this.

Zuckerman’s talk apparently relies heavily on his 2019 book below, but he mentions that he has a new book coming out, which surely has the data he mentions below.

His books (he’s been a busy atheist!)

  • Zuckerman, Phil (2019). What It Means to Be Moral: Why Religion Is Not Necessary for Living an Ethical Life. Berkeley: Counterpoint Press. ISBN 978-1640092747.
  • Zuckerman, Phil (2016). The Nonreligious: Understanding Secular People and Societies. London: Oxford University Press. ISBN 9780199924943.
  • Zuckerman, Phil (2014). Living the Secular Life: New Answers to Old Questions. London: Penguin Press. ISBN 9781594205088.
  • Zuckerman, Phil (2011). Faith No More: Why People Reject Religion. New York: Oxford University Press. ISBN 9780199740017.
  • Zuckerman, Phil (2010). Atheism and Secularity. Santa Barbara, California: Praeger. ISBN 9780313351815.
  • Zuckerman, Phil (2008). Society without God: What the Least Religious Nations Can Tell Us About Contentment. New York: New York University Press. ISBN 9780814797143.

At any rate, in this talk Zuckerman makes the case that atheists, agnostics, and secular humanists have a set of values that leads to a better “morality” than that espoused by believers. He adduces data from a variety of areas—vaccination, acceptance of science, wearing masks, recognizing the existence and importance of global warming, acceptance of LGBTQ rights, animal rights, reproductive rights, and reparations for slavery—showing that nonbelievers seem to group on the “more moral” side. And even religionists who accept these values tend to have, as reader Sastra noted yesterday, a more “secularized” view of religion. It’s the Euthyphro argument of Plato: we can only get goodness from God if we assume God is, a priori, moral, and that view must come from non-religious values. Saying that morality comes from God devolves to the odious “Divine Command” argument espoused by people like William Lane Craig.

Zuckerman then asks why nonbelievers are more moral than religionists, and his response is that we’re motivated by empathy and compassion when constructing our morality, rather than by trying to obey the “will of God.” Well, perhaps, but if you derive God’s nature from secular considerations, as noted above, then there’s not much difference. But where there is a difference is that religion considers as part of morality notions like how to have sex, what to eat, what to wear, and so on—issues that really aren’t what most people consider within the ambit of morality.

Zuckerman also notes that religious folks are more tribalistic than nonbelievers, and tribalism breeds xenophobia and hence immorality.

In the end, I’m a big fan of Zuckerman, and the data may well show that the moral values of nonbelievers are sounder than those of believers. But the real question, which is very hard to answer, is this: “On the whole, is the average per capita amount of net good done by atheists greater than the amount done by believers?” I believe the answer is “yes,” but I’d be hard pressed to prove it. Hitchens answered it with anecdotal data, citing people like Mother Teresa who pretended to be moral but didn’t really help people. But we need more systematic data. Perhaps Zuckerman provides these data in his new book.

After all, as Karl Marx said, “The philosophers have hitherto only interpreted the world in various ways. The point, however, is to change it.”

Steve Pinker talks with Helen Pluckrose for Counterweight

July 11, 2021 • 8:45 am

You all know Steve Pinker, and surely nearly all of you have heard of Helen Pluckrose, who not only participated in the “Grievance Studies Affair”, but coauthored with James Lindsay the book Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity and has now founded the humanist but anti-woke organization Counterweight.

Here Helen has an eight-minute interview with Steve Pinker. (Note that there’s a photo of Cape Cod in the background, where Steve and Rebecca repair to their second home.) It’s mostly about wokeness and how to combat it.


h/t: Paul

The absence of objective morality

June 21, 2021 • 9:25 am

What does it mean to say that there’s an “objective morality”? The Stanford Encyclopedia of Philosophy describes this view as “moral realism” and characterizes it like this:

Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

This is the stand taken by Sam Harris in his book The Moral Landscape, and it’s a view with which I disagree. Although some philosophers agree with Sam that morality is “factual” in this way—and by that I don’t mean that the existence of a moral code is a fact about society but that you can find objective ways to determine if a view is right or wrong—I can’t for the life of me see how one can determine objectively whether statements like “abortions of normal fetuses are wrong” are true or false. In the end, like many others, I see morality as a matter of preference. What is moral is what you would like to see considered good behavior, but as different people differ on what is right and wrong, I see no way to adjudicate statements like the one about abortion.

I’ve said all this before, but it came to mind last night when I was reading Anthony Grayling’s comprehensive book The History of Philosophy. (By the way, that book has convinced me that there is virtually no issue in philosophy that ever gets widespread agreement from nearly all respectable philosophers, so in that way philosophy differs from science. That is not to say that philosophy is without value, but that its value lies in teaching us how to think rigorously and to parse arguments, not to unearth truths about the cosmos.)

It’s clear that empirical observation can inform moral statements. If you think that it’s okay to kick a dog because it doesn’t mind it, well, just try kicking a dog. But in the end, saying whether it’s right or wrong to do things depends on one’s preferences. True, most people agree on their preferences, and their concept of morality by and large agrees with Sam’s consequentialist view that the “right” thing to do is whatever maximizes “well being”. But that is only one criterion for “rightness”, and others, like deontologists such as Kant, don’t agree with that utilitarian concept. And of course people disagree violently about things like abortion—and many other moral issues.

One problem with Sam’s theory, or any utilitarian theory of morality, is how to judge “well being”. There are different forms of well being, even in a given moral situation, and how do you weigh them against one another? There is no common currency of well being, though we know that some things, like torturing or killing someone without reason, clearly do not increase the well being of either that person or of society. Yet there is no objective way to weigh one form of well being against another. Abortion is one such situation: one weighs the well being of the fetus, which will develop into a sentient human, against that of the mother, who presumably doesn’t want to have the baby.

But to me, the real killer of objective morality is the issue of animal rights—an issue that I don’t see as resolvable, at least in a utilitarian way. Is it moral to do experiments on primates to test human vaccines and drugs? If so, how many monkeys can you put in captivity and torture before it becomes wrong?  Is it wrong to keep lab animals captive just to answer a scientific question with no conceivable bearing on human welfare, but is just a matter of curiosity? Is it moral to eat meat? Answering questions about animal rights involves, if you’re a Harris-ian utilitarian, being able to assess the well being of animals, something that seems impossible. We do not know what it is like to be a bat.  We have no idea whether any creatures value their own lives, and which creatures feel pain (some surely do).

But in the end, trying to find a truly factual answer to questions like “Is it immoral for humans to eat meat?”, “Is abortion wrong?”, or “Is capital punishment wrong?” seems a futile effort. You can say that eating meat contributes to deforestation and global warming, and that’s true, but that doesn’t answer the question, for you have to then decide whether those effects are “immoral”. Even deciding whether to be a “well being” utilitarian is a choice. You might instead be a deontologist, adhering to a rule-based and not consequence-based morality.

You can make a rule that “anybody eating meat is acting immorally,” but on what do you base that statement? If you respond that “animals feel pain and it’s wrong to kill them,” someone might respond that “yes, but I get a lot of pleasure from eating meat.” How can you objectively weigh these positions? You can say that culinary enjoyment is a lower goal than animal welfare, but again, that’s a subjective judgment.

By saying I don’t accept the idea of moral claims representing “facts”, I’m not trying to promote nihilism. We need a moral code, if for no other reason than to act as a form of social glue and a social contract. Without it, society would degenerate into a lawless and criminal enterprise—indeed, the idea of crime and punishment would vanish. All I’m arguing is that such claims rest at bottom on preference alone. It’s generally a good thing that evolution has endowed most of us with a similar set of moral preferences. I hasten to add, though, that the feelings evolution has instilled in us aren’t necessarily ones we should incorporate into morality, as some of them (widespread xenophobia, for instance) are outmoded in modern society. Others, like caring for one’s children, are good things to do.

In the end, I agree with Hume that there’s no way to derive an “ought” from an “is”. “Oughts” have their own sources, while “is”s may represent in part evolved behaviors derived from living in small groups of hunter-gatherers. But that doesn’t make them evolutionary “oughts.”

I’m not a philosopher—and I’m sure it shows!—and I know there are famous philosophers, like Derek Parfit, who are moral realists, but my attempt to read the late Parfit’s dense, two-volume treatise On What Matters, said to contain his defense of moral realism, was defeated.

Kenan Malik on judging yesterday’s figures by today’s morality

May 10, 2021 • 12:00 pm

Over at the Guardian, Kenan Malik writes with his usual good sense about judging historical figures by today’s morality—something we just read about this morning vis-à-vis Darwin and other evolutionists. Malik’s particular subjects are Napoleon and Churchill, both in the process of being found “problematic”. Unlike many of the “decolonizers”, Malik is willing to tolerate some ambiguity. And why shouldn’t we, given that morality evolves and was never in the past identical to what it is now?


Malik on Napoleon, with a soupçon of Churchill:

Those who laud [Napoleon’s] legacy claim that he projected France on to the world stage and laid the foundations of a strong state. The Napoleonic Code, a sweeping array of laws covering citizenship, individual rights, property, the family and colonial affairs, established a new legal framework for France and influenced law-making from Europe to South America.

Stacked against this are shockingly reactionary policies. Napoleon reintroduced slavery into French colonies in 1802, eight years after the national assembly had abolished it, imposed new burdens on Jews, reversing rights they had gained after the revolution, strengthened the authority of men over their families, depriving women of individual rights and crushed republican government.

To the far right in France, Napoleon is an unalloyed hero. To many others, his is a troubling legacy. To be wrapped in a complex legacy is not, however, unique to Napoleon. It is the fate of most historical figures, whether Churchill or Wilberforce, Jefferson or Roosevelt, Atatürk or Nkrumah, all of whose actions and beliefs remain contested. Biographies rarely cleave neatly into “good” or “bad”.

Many, though, feel the need to see history in such moral terms, to paint heroes and villains in black and white, to simplify the past as a means of feeding the needs of the present. National and imperial histories have long been whitewashed, the darker aspects obscured. How many people in Britain know of the Morant Bay Rebellion in Jamaica or of the “Black War” in Tasmania? We want to preserve our national heroes untainted, none more so than Churchill, the man who saved “the nation… and the entire world”. Attempts to reassess his legacy can be dismissed as “profoundly offensive” or as “rewriting history”.

At the same time, those who seek to address many of these questions often themselves impose a cartoonish view of the past and its relationship to the present, from the call to take down Churchill’s statues to the mania for renaming buildings. The complexities of history fall foul of the political and moral needs of the present.

There’s more, but you get the gist of it.

The more I think about it, the more opposed I am to spending a lot of time denigrating people whose ideas we teach in class: people like Ronald Fisher, Charles Darwin, Thomas Jefferson, and so on. Yes, a mention or two might be sufficient, but today’s “decolonized curricula” seem to spend more time on the odious history of famous people who advanced good ideas than on the ideas themselves. And yes, Charles Darwin confected (along with A. R. Wallace) the theory of evolution, but he was also a racist, somewhat of a misogynist, and one who believed that white races would supplant others. (I can hear the Discovery Institute licking its lips now: “Coyne admits Darwin was a racist!”).

But these are issues not for science classes, but for ethics classes, where nuances can be discussed and developed. And remember, as Steve Pinker reminds us endlessly, morality has changed substantially, and improved, in the past few centuries. What that means is that things we do now (is meat-eating one?) will be regarded as odious in 200 years. Whom can we celebrate today, knowing that in the future they can (and probably will) be demonized? Will Joe Biden be seen as a barbarian because he enjoyed an occasional pork chop? What this all means is that there is nobody we can admire today except insofar as they conform to a quotidian morality that we know will be supplanted.

Further, I can’t help but feel that a lot of those engaged in denigrating people like Darwin, R. A. Fisher, and George Washington are doing so for performative reasons: to tell us, “Look, I can see how much better we (and I) are today than our forebears.” Now clearly, for someone like Hitler or the slaveholders of the South, we need not celebrate them at all, for there’s no good that they did to be celebrated. But to deny Darwin some encomiums for what he did, or even Jefferson? For those people surely did do some good things, and their statues are not erected to celebrate the bad things they did.

As I’ve said repeatedly, here are my criteria for celebrating, via statues, plaques, and so on, somebody of the past:

My criteria so far have been twofold. First, is the statue celebrating something good a person did rather than something bad? So, for example, Hitler statues fail this test, though I hear that some Nazi statues are simply left to molder and degenerate rather than having been pulled down. Second, were the person’s contributions on balance good or bad? So, though Gandhi was a bit of a racist towards blacks as a barrister in South Africa, the net amount of good he did in bringing India into existence through nonviolent protest seems to me to heavily outweigh his earlier missteps. Gandhi statues should stay.

What if someone was bad on balance but did a very good thing—should that person be celebrated? That would be a judgment call.

In general, I err on the side of preserving history, for statues and buildings are a mark of history—of times when our morality was different from what it is today. And it’s useful to be reminded of that rather than simply having our past, especially the bad bits, erased. History, after all, isn’t all beer and skittles. We don’t want a lot of Winston Smiths operating in our culture.

So no, Sheffield, don’t spend a lot of time reminding me what a racist and misogynist Darwin was, all of which will be done at the expense of telling us what Darwin accomplished. All you’re doing in effect is showing that, over time, morality has improved.

A Guardian “long read” on free will

April 27, 2021 • 9:15 am

Several readers sent me a link to a new Guardian piece on free will by journalist Oliver Burkeman (some added that I’m quoted a couple of times, which is true). It’s a “long read,” so be warned if you have a short attention span, but I have to say that it’s a very good piece, covering all the bases: the definitions, the consequences of contracausal free will, the “solution” of compatibilism, the implications for moral responsibility and for judicial punishment; yes, it’s all there. And although Burkeman’s personal take, given at the end, is a bit puzzling, it’s a very good and fair introduction to the controversies about free will.

Click on the screenshot to read:


As I said, I have mostly praise for Burkeman’s piece, as he’s clearly done his homework and manages to condense a messy controversy into a readable piece.  So take my few quibbles in light of this general approbation.

First, though, I must note Burkeman’s opening, which, surprisingly, shows the hate mail philosophers have received for promulgating determinism. (Burkeman notes, correctly, that even compatibilists who broach a new kind of free will are still determinists.) Although I was once verbally attacked by a jazz musician who said I’d taken away from him the idea that he had complete freedom to extemporize his solos, I’ve never received the kind of mail that Galen Strawson has:

. . . . the philosopher Galen Strawson paused, then asked me: “Have you spoken to anyone else yet who’s received weird email?” He navigated to a file on his computer and began reading from the alarming messages he and several other scholars had received over the past few years. Some were plaintive, others abusive, but all were fiercely accusatory. “Last year you all played a part in destroying my life,” one person wrote. “I lost everything because of you – my son, my partner, my job, my home, my mental health. All because of you, you told me I had no control, how I was not responsible for anything I do, how my beautiful six-year-old son was not responsible for what he did … Goodbye, and good luck with the rest of your cancerous, evil, pathetic existence.” “Rot in your own shit Galen,” read another note, sent in early 2015. “Your wife, your kids your friends, you have smeared all there [sic] achievements you utter fucking prick,” wrote the same person, who subsequently warned: “I’m going to fuck you up.” And then, days later, under the subject line “Hello”: “I’m coming for you.” “This was one where we had to involve the police,” Strawson said. Thereafter, the violent threats ceased.

Good lord! Such is the resistance that people have to hearing that they don’t have “contracausal” (you-could-have-chosen-otherwise) free will. Regardless of what compatibilists say, belief in contracausal free will is the majority view in many places (see below).

There are only a few places where Burkeman says things I disagree with. One is how he treats the issue of “responsibility”. My own view, as someone Burkeman calls “one of the most strident of the free will skeptics,” is that while we’re not morally responsible for our misdeeds (moral responsibility implies we could have chosen a different path), we are what Gregg Caruso calls “answerably responsible”. That is, as the agent of good or bad deeds, whatever actions society deems appropriate in response to our acts must devolve upon our own bodies. Therefore, if we break the law, we can receive punishment—punishment to keep us out of society where we might transgress again, sequestering us until we are deemed “cured” and unlikely to reoffend, and punishment to deter others. (Caruso, also a free-will skeptic, disagrees that deterrence should be an aim of punishment, since it uses a person as an instrument to affect the behavior of others.) Caruso holds a “quarantine” model of punishment, in which a transgressor is quarantined just as Typhoid Mary should be quarantined: to effect possible cures and protect society from infection. Burkeman describes Caruso’s model very well.

What is not justified under punishment (and most compatibilists, including Dan Dennett, agree) is retributive punishment: punishment meted out by assuming that you could have chosen to behave other than how you did. That assumption is simply wrong, and so is retributivism, which is largely the basis of how courts in the West view punishment.

As for praise or blame, or responsibility itself, Burkeman somehow thinks they would disappear even under a hard-core deterministic view of society:

Were free will to be shown to be nonexistent – and were we truly to absorb the fact – it would “precipitate a culture war far more belligerent than the one that has been waged on the subject of evolution”, Harris has written. Arguably, we would be forced to conclude that it was unreasonable ever to praise or blame anyone for their actions, since they weren’t truly responsible for deciding to do them; or to feel guilt for one’s misdeeds, pride in one’s accomplishments, or gratitude for others’ kindness. And we might come to feel that it was morally unjustifiable to mete out retributive punishment to criminals, since they had no ultimate choice about their wrongdoing. Some worry that it might fatally corrode all human relations, since romantic love, friendship and neighbourly civility alike all depend on the assumption of choice: any loving or respectful gesture has to be voluntary for it to count.

But no, praise and blame are still warranted, for they are environmental influences that can affect someone’s behavior.  It is okay to praise someone for doing good and to censure them for doing bad, because this might change their brains in a way to make them liable to do less bad and more good in the future. (Granted, we have no free choice about whether to praise or blame someone.) The only thing that’s not warranted in Burkeman’s list is retributive punishment. Gratitude, pride, guilt, and so on are useful emotions, for even if we had no choice in what we did, these emotions drive society in positive directions, reinforcing good acts and discouraging bad ones.

Burkeman goes on, emphasizing the danger to society of promulgating determinism—a determinism that happens to be true. As the wife of the Bishop of Worcester supposedly said about Darwin’s view that we’re descended from apes,

“My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray that it will not become generally known.”

This appears to be the view not only of Burkeman but also of Dan Dennett. As Burkeman notes, “Dennett, although he thinks we do have [compatibilist] free will, takes a similar position, arguing that it’s morally irresponsible to promote free-will denial.”

Morally irresponsible to promulgate denial of contracausal free will? Morally irresponsible to promulgate the truth? Or does he mean morally irresponsible to deny compatibilist notions of free will like Dennett’s? Either way, I reject the idea that we must hide the truth, or quash philosophical discussion, because it could hurt society.

Burkeman goes on about morality:

By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with his fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.”

The operative word here is “deserves”—the idea of “desert” that’s the topic of a debate between Caruso and Dennett that I recently reviewed. If you mean by “deserve” the fact that you’re deemed “answerably responsible,” and thus can undergo punishment for something bad you did, or can justifiably be praised, then yes, there is good justification for holding people answerably responsible for their good and bad deeds, and taking action accordingly.

There is much to argue with in the piece, not with Burkeman, but with some of the compatibilists he quotes. One of them is Eddy Nahmias:

“Harris, Pinker, Coyne – all these scientists, they all make the same two-step move,” said Eddy Nahmias, a compatibilist philosopher at Georgia State University in the US. “Their first move is always to say, ‘well, here’s what free will means’” – and it’s always something nobody could ever actually have, in the reality in which we live. “And then, sure enough, they deflate it. But once you have that sort of balloon in front of you, it’s very easy to deflate it, because any naturalistic account of the world will show that it’s false.”

Here Nahmias admits that determinism reigns, and implicitly that contracausal free will is nonexistent. But what I don’t think he grasps is that the naturalistic view of will, determinism, while accepted by him and his fellow compatibilists, is flatly rejected by a large majority of people—and in several countries (see the study of Sarkissian et al., though I note that when presented with concrete moral dilemmas, people tend to become more compatibilistic). Contracausal free will is the bedrock of Abrahamic religions, which of course have many adherents. Those who proclaim that everybody accepts pure naturalism and the deterministic behavior it entails—that denying that is “an easily deflatable balloon”—probably don’t get out often enough.

Likewise, those who say a society grounded on determinism will be a dreadful society full of criminals, rapists, and murderers are wrong, I think. This is for two reasons. First of all, I know quite a few free-will skeptics, including Caruso, Alex Rosenberg, Sam Harris, myself, and others, and if free-will skepticism had a palpable effect on someone’s behavior, I can’t see it. It’s an unfounded fear.

The other reason is that there’s an upside in being a determinist. We still have our illusions of free will, so we can act as if our choices are contracausal even if, intellectually, we know they’re not. Hard determinists like me are not fatalists who go around moaning, “What’s the use of telling the waiter what I want? It’s all determined, anyway.”

And there’s the improvement in the penal system that comes with accepting determinism: there’s a lot to be said for Caruso’s “quarantine” model, which is more or less in effect in places like Norway, though I still adhere to the value of deterrence. And, as Burkeman says eloquently, a rejection of free will paradoxically makes us “free” in the sense that we can be persuaded to give up unproductive retributive attitudes and overly judgmental behavior:

In any case, were free will really to be shown to be nonexistent, the implications might not be entirely negative. It’s true that there’s something repellent about an idea that seems to require us to treat a cold-blooded murderer as not responsible for his actions, while at the same time characterising the love of a parent for a child as nothing more than what Smilansky calls “the unfolding of the given” – mere blind causation, devoid of any human spark. But there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more. And for those of us prone to raging at others for their minor misdeeds, it’s calming to consider how easily their faults might have been yours. (Sure enough, some research has linked disbelief in free will to increased kindness.)

. . . . Yet even if only entertained as a hypothetical possibility, free will scepticism is an antidote to that bleak individualist philosophy which holds that a person’s accomplishments truly belong to them alone – and that you’ve therefore only yourself to blame if you fail. It’s a reminder that accidents of birth might affect the trajectories of our lives far more comprehensively than we realise, dictating not only the socioeconomic position into which we’re born, but also our personalities and experiences as a whole: our talents and our weaknesses, our capacity for joy, and our ability to overcome tendencies toward violence, laziness or despair, and the paths we end up travelling. There is a deep sense of human fellowship in this picture of reality – in the idea that, in our utter exposure to forces beyond our control, we might all be in the same boat, clinging on for our lives, adrift on the storm-tossed ocean of luck.

I agree with this. And there’s one more benefit: if you are a free-will skeptic, you won’t always be blaming yourself for choices you made in the past on the grounds that you made the “wrong choice.” You didn’t have an alternative! This should mitigate a lot of people’s guilt and recrimination, and you can always learn from your past mistakes, which might alter your behavior in a permanent way. (This is an environmental influence on your neural program: seeing what worked and what didn’t.)

In light of Burkeman’s paean to free-will skepticism, then, it’s very odd that he says the following at the end:

Those early-morning moments aside, I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life.

The deterministic case against contracausal free will is completely persuasive, and I think Burkeman agrees with that. So exactly what “case against free will” is he talking about? Is he adhering to compatibilism here? He doesn’t tell us. What, exactly, is at odds with what seems “obviously true about life”? But so much that “seems obviously true” is wrong as well, like the view that there’s an “agent”, a little person, sitting in our head that directs our actions. I would have appreciated a bit more about what, after doing a lot of research on the free-will controversy, Burkeman has really come to believe.

h/t: Pyers, David

Ross Douthat laments the “elite’s” loss of faith

April 11, 2021 • 9:45 am

The answer to Ross Douthat’s title question below is, of course, “no”: the meritocracy, which I suppose one can define as either the rich or the educated, is increasingly giving up religion. And, if history be any guide, it’s unlikely to go back. Click on the screenshot below to read Douthat’s elegy for the loss of religion among America’s elite, his reasons why it’s happening, and his straw-grasping about how the meritocracy might come back to God. (Douthat is, of course, a staunch Catholic.)

Last year, by even Douthat’s admission, only 47% of Americans belonged to a church, mosque, or synagogue. Two years ago, in an article called “In U.S., decline of Christianity continues at rapid pace,” the Pew Research Center presented the following graphs. As American Christianity has declined quickly, the proportion of “nones”—those who see themselves as agnostics, atheists, or holding “no religion in particular”—is growing apace. (Remember, this is over only a dozen years.)

The fall in religiosity has been faster among the younger than the older, among Democrats than among Republicans, and among those with more education rather than less.

Douthat calls these data “grim.” Here’s his worry:

A key piece of this weakness is religion’s extreme marginalization with the American intelligentsia — meaning not just would-be intellectuals but the wider elite-university-educated population, the meritocrats or “knowledge workers,” the “professional-managerial class.”

Most of these people — my people, by tribe and education — would be unlikely models of holiness in any dispensation, given their ambitions and their worldliness. But Jesus endorsed the wisdom of serpents as well as the innocence of doves, and religious communities no less than secular ones rely on talent and ambition. So the deep secularization of the meritocracy means that people who would once have become priests and ministers and rabbis become psychologists or social workers or professors, people who might once have run missions go to work for NGOs instead, and guilt-ridden moguls who might once have funded religious charities salve their consciences by starting secular foundations.

But this all sounds good to me! Isn’t it better to have more psychologists, social workers, and professors instead of more clerics? At least the secular workers are trained to do their job, and don’t have a brief to proselytize or inculcate children with fairy tales.

But no, not to Douthat. Implicit in his column is the worry that without religion, America would be less moral. (He doesn’t state this outright, but absent that belief his column makes no sense. Unless, that is, he’s interested in saving souls for Jesus.)

As a Christian inhabitant of this world, I often try to imagine what it would take for the meritocracy to get religion. There are certain ways in which its conversion doesn’t seem unimaginable. A lot of progressive ideas about social justice still make more sense as part of a biblical framework, which among other things might temper the movement’s prosecutorial style with forgiveness and with hope. Meanwhile on the meritocracy’s rightward wing — meaning not-so-woke liberals and Silicon Valley libertarians — you can see people who might have been new atheists 15 years ago taking a somewhat more sympathetic look at the older religions, out of fear of the vacuum their decline has left.

You can also see the concern with morality as Douthat proffers two reasons why, he thinks, the elite are prevented from hurrying back to Jesus, Moses, or Muhammad:

One problem is that whatever its internal divisions, the American educated class is deeply committed to a moral vision that regards emancipated, self-directed choice as essential to human freedom and the good life. The tension between this worldview and the thou-shalt-not, death-of-self commandments of biblical religion can be bridged only with difficulty — especially because the American emphasis on authenticity makes it hard for people to simply live with certain hypocrisies and self-contradictions, or embrace a church that judges their self-affirming choices on any level, however distant or abstract.

Again, I'm baffled about why Douthat sees religiously based morality, particularly of the Catholic variety, as superior to humanistic morality. After all, only religious "morality" prescribes how and with whom you can have sex, the supposed "role" of women as breeders and subservient partners, the demonization of gays, the criminality of abortion, the desirability of the death penalty, and the immorality of assisted dying.  What kind of morality do you expect to get by following the dictates of a bunch of superstitious people from two millennia ago, people who had to posit an angry god to explain what they didn't understand about the cosmos? You get the brand of religion that Douthat wants us all to have! For he sees religiously deontological morality as better than think-for-yourself morality: the "thou-shalt-not, death-of-self commandments of biblical religion."

And it’s clear, as Douthat continues his risible lament for the loss of faith, that he sees no contradiction between rationality and superstition, though the conflict between them, and the increasing hegemony of science in explaining stuff previously within God’s bailiwick, is what is driving the educated to give up their faith:

A second obstacle [to the elite regaining faith] is the meritocracy’s anti-supernaturalism: The average Ivy League professor, management consultant or Google engineer is not necessarily a strict materialist, but they have all been trained in a kind of scientism, which regards strong religious belief as fundamentally anti-rational, miracles as superstition, the idea of a personal God as so much wishful thinking.

Thus when spiritual ideas creep back into elite culture, it’s often in the form of “wellness” or self-help disciplines, or in enthusiasms like astrology, where there’s always a certain deniability about whether you’re really invoking a spiritual reality, really committing to metaphysical belief.

There are two misconceptions in these two paragraphs. The first is that professors indoctrinate students with the belief that there is no God—that we train them in atheism, materialism, and scientism. But we don't do that: students give up God because, as they learn more, they grasp that, as Laplace supposedly replied to Napoleon, we "have no need of that hypothesis." If there were actual evidence for miracles and a theistic god, people wouldn't abandon their faith.

Further, although some of the "nones" are spiritual in the sense of embracing stuff like astrology or crystal therapy, I see no evidence that the embrace of woo is rising as fast as religiosity is declining.  The example of Scandinavia, which converted from religiosity to atheism in about 250 years, shows not only that religion isn't needed to create a moral, caring society (indeed, it shows that religion is inimical to this), but also that religion needn't be replaced by other forms of woo. As far as I know, the Danes and Swedes aren't fondling their crystals with alacrity.

Nothing will shake Douthat’s faith in God, nor his faith in faith as an essential part of society—in this he resembles his co-religionist Andrew Sullivan—but he does adhere to a form of intelligent design held by those sentient people who are still religious:

Yes, science has undercut some religious ideas once held with certainty. But our supposedly “disenchanted” world remains the kind of world that inspired religious belief in the first place: a miraculously ordered and lawbound system that generates conscious beings who can mysteriously unlock its secrets, who display godlike powers in miniature and also a strong demonic streak, and whose lives are constantly buffeted by hard-to-explain encounters and intimations of transcendence. To be dropped into such a world and not be persistently open to religious possibilities seems much more like prejudice than rationality.

I don’t seem to have had those hard-to-explain encounters or intimations of transcendence. I must be missing my sensus divinitatis! What Douthat takes as evidence for God, like the tendency of humans to be clever but sometimes nasty, can be understood by a combination of our evolutionary heritage and our cultural overlay. The same holds for “a system that generates conscious beings.” It’s evolution, Jake!

In the end, Douthat is as baffled by us secularists' rejection of God as I am by his credulous acceptance of the supernatural as the only plausible explanation for the Universe:

And my anthropological understanding of my secular neighbors particularly fails when it comes to the indifference with which some of them respond to religious possibilities, or for that matter to mystical experiences they themselves have had.

Like Pascal contemplating his wager, it always seems to me that if you concede that religious questions are plausible you should concede that they are urgent, or that if you feel the supernatural brush you, your spiritual curiosity should be radically enhanced.

Well, as a scientist one must always give a degree of plausibility to any hypothesis, but when that degree is close to zero on the confidence scale, we need consider it no further. Based on the evidence, the notion of a god is as implausible as notions of fairies, leprechauns, or other such creatures.  And if the plausibility is close to zero, then so is the urgency.  And even if the questions are urgent, which I don't believe since the world's well-being doesn't depend on them, they are also unanswerable, making them even less urgent. Would Douthat care to tell me why he thinks the Catholic god is the right one rather than the pantheon of Hindu gods, including the elephant-headed Ganesha? Isn't it urgent to decide which set of beliefs is right?

But maybe it’s because I never felt the supernatural brush me.

Amen.

 

h/t: Bruce

Another criterion for judging whether to “cancel” someone

September 15, 2020 • 11:00 am

Although I don't spend a lot of time calling for people to be unpersoned, canceled, or have their statues toppled or namesakes changed, I do try to discern whether "cancellation" calls are justified or unwarranted. Clearly there are no good criteria that will work all the time, so it usually comes down to a judgment call.  In general, I tend to side with those who want history left as it is, though sometimes qualified, as with statues of Confederates famous for defending the South. (I favor "counterstatues" or explanatory plaques.) But in many cases, such as the Teddy Roosevelt statue at the American Museum of Natural History, I see no need for revision (see Greg's post on that here).

My criteria so far have been twofold. First, is the statue celebrating something good a person did rather than something bad? Hitler statues, for example, fail this test, though I hear that some Nazi statues are simply left to molder rather than being pulled down. Second, were the person's contributions on balance good or bad? So, though Gandhi was a bit of a racist toward blacks as a barrister in South Africa, the net amount of good he did in bringing an independent India into existence through nonviolent protest seems to me to heavily outweigh his earlier missteps. Gandhi statues should stay.

What if someone was bad on balance but did a good thing—should that person be celebrated? That would be a judgment call.

In general, I err on the side of preserving history, for statues and buildings are a mark of history—of times when our morality was different from what it is today. And it’s useful to be reminded of that rather than simply having our past, especially the bad bits, erased. History, after all, isn’t all beer and skittles. We don’t want a lot of Winston Smiths operating in our culture.

Now, in an article in Quillette, Steven Hales, described as "Professor and Chair of Philosophy at Bloomsburg University of Pennsylvania" and author of The Myth of Luck: Philosophy, Fate, and Fortune, has added another criterion, one that seems sensible to me.  Well, it's not really a criterion for determining who should be canceled, but a way to look at the supposed missteps of figures from the past. I have tweaked it to make it a criterion.

Click on the screenshot to read:

Hales analyzes morality as analogous to science. Science has improved over time in helping us understand nature, but we don't denigrate scientists who made honest errors in the past. (Miscreant scientists, like Lysenko, are a different case.) Similarly, morality improves over time. (I don't think there's an objective morality, but surely the way we run society has allowed more people to flourish over the past few centuries.) To Hales, it makes as little sense to denigrate those who went along with the morality of their time as to denigrate those scientists who accepted the "received wisdom" of their time. As he says:

All of which to say, there is a vital difference between being wrong and being blameworthy. Einstein struggled to admit the fact of quantum entanglement, but that does not entail his blameworthiness as a scientist. In one clear sense, he was on the “wrong side” of quantum history, but that doesn’t necessarily merit demotion from the pantheon. Scientific praiseworthiness or blameworthiness is determined not by the standards of our times, but of theirs. While you can hardly blame Darwin for not knowing the unit of natural selection, you would certainly blame a modern biology undergraduate if she did not know about DNA. Nonetheless, it is Darwin who deserves our admiration and praise, even if today’s undergrad knows more than he did.

And so we should judge people by the “average” moral standard of the time, which I interpret as meaning that if someone wasn’t considered immoral in their own society, but had values and beliefs that were fairly standard, then we can’t fault them too much today, for people are products of their genes and environments.

Hales:

Anyone who thinks that right moral thinking is obvious, and is incredulous at the horrible beliefs of the past, is the unwitting heir to a philosophical fortune hard-earned by their forebears. The arc of the universe may bend towards justice, but it is a long arc. As with scientists, moral actors of the past also fall into the great, the average, and the bad. Our judgment of them shouldn’t be by the standards of our own times, but the standards of theirs. By the moral understanding of his day, Vlad the Impaler was still a monster. But should we say the same of St. Paul, who in his Letter to Philemon, returns Onesimus, a runaway slave, to his owner instead of providing the slave with safe harbor? While Paul’s letter includes a request for Christian mercy, he omits condemnation for the horror of slavery. Paul was no slave trader, but the moral views displayed here were typical for his time.

By these lights, Hume, who gave approbation to a slaveholder, wasn't the monster he seems to be today, for acceptance of slavery wasn't seen as immoral then the way it is now. Morality has evolved for the better. I think it's misguided, then, to "cancel" Hume, as they're trying to do at Edinburgh with a building name, because of one "misstep" in a life that was otherwise very useful and salubrious.

Now of course this criterion has its own problems, the most obvious being “what was the ‘received’ moral wisdom of the time?” For example, Darwin was not in the majority of Brits of his time in being an abolitionist. Should we expect people of Darwin’s era, then, to adhere to the “best” morality, or simply to an “average” morality—one that wouldn’t get its adherent labeled as immoral in his society? Since there are always some angels in society, however rare, I’d go with the latter criterion.

This doesn’t solve all the issues, for of course the Nazis adhered to the average anti-Semitic morality of their times, and we don’t want people to put up statues to Nazis or label buildings “Goebbels Hall.” How do we judge an “average” morality? Morality among all humans on the planet in a time when people can read, learn and think, or the morality obtaining in one’s immediate surroundings? I have no answer.

Nor do I know how to combine Hales's criterion with the ones I've held previously. All I know is that I have a mental algorithm about who should be canceled, and few people fall on the "yes" side, mainly those with no redeeming lives, acts, or thoughts.  Nor should we laud people today for things that were once considered okay but are now seen as bad. Hume deserves to stay not only because he was a great man and a great philosopher, but also because he wasn't the equivalent of a Nazi.  Finally, I have no problem getting rid of art that depicts things now considered universally offensive, like a mural showing a lynching in the South.  Clearly, we will never get everyone to agree on these issues.
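For what it's worth, here is a minimal toy sketch (in Python) of how the three tests discussed above might be combined: whether the commemoration celebrates a good deed, whether the person was good on balance, and whether they met the moral standard of their own time. This is purely illustrative and hypothetical: the function, its inputs, and the coded examples are assumptions, not Hales's procedure or anyone's actual algorithm, and real judgment calls are far messier than booleans.

def should_commemoration_stay(celebrates_good_deed: bool,
                              good_on_balance: bool,
                              met_standard_of_their_time: bool) -> bool:
    """Toy test: the statue or name stays only if all three criteria pass.

    Criterion 1: it celebrates something good the person did.
    Criterion 2: the person's contributions were good on balance.
    Criterion 3 (Hales): the person met at least the average moral
    standard of their own society, judged by the standards of that time.
    """
    return (celebrates_good_deed
            and good_on_balance
            and met_standard_of_their_time)

# Examples coded under the judgments expressed in the post (inputs are hypothetical):
print(should_commemoration_stay(True, True, True))    # Gandhi: stays
print(should_commemoration_stay(False, False, True))  # Goebbels: average for his
                                                      # milieu, but fails the first
                                                      # two tests, so no "Goebbels Hall"

Note that the conjunction captures the Nazi worry raised above: meeting the (dreadful) average morality of one's own time and place is never sufficient by itself.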

But as for Darwin, Gandhi, Jefferson, George Washington, and yes, with qualification, Robert E. Lee—let them stay. As they say, those who don't remember history are doomed to repeat it.