My New Republic piece on bungled executions (and a related radio interview)

May 5, 2014 • 8:49 am

My piece on Oklahoma’s botched execution of Clayton Lockett has been heavily rewritten, combined with some other stuff, and published by The New Republic as “The three-drug death penalty cocktail is a mess.” (It takes about 2.5 hours to rewrite a website post for a column.)

If you love Professor Ceiling Cat, who has hearts on his boots, go over and give it a click, engaging in discussion if you’d like.

Oh, and I’ll be discussing this piece (and the morality of executions) on WDEL talk radio (Delaware) at about 11:45 Chicago time (12:45 p.m. Eastern time and 5:45 p.m. London time); you can listen live here (click on “Listen Now!” at upper left). It’ll be only a 4-5 minute interview, and the start time may be off a few minutes.

Damon Linker fails to spot the nightjar; says human altruism proves Jesus

May 4, 2014 • 6:15 am

I’m not really sure who Damon Linker is, but this recommendation on his website doesn’t give me a lot of confidence:

Damon Linker is one of the most arresting and honest writers of his generation on the subjects of faith and politics.
—Andrew Sullivan

And if you Google “Damon Linker”, the second hit you get, after his own website, is a critique I wrote on this site.

Linker clearly doesn’t like me because I make Baby Jesus Cry, and, as you’ll see, he harbors a great deal of love for Jesus. In fact, in his latest piece at The Week, “Why atheism doesn’t have the upper hand over religion,” he gives the Saviour credit for human altruism and for the fact that we humans admire it so much. But what it does not do is show any advantage of religion over atheism. Rather, Linker proves beyond any doubt that he understands neither evolutionary biology nor science in general.

Linker begins with a gratuitous slap at yours truly, for I supposedly instantiate the philosophical dimwittedness of New Atheism:

In my last column, I examined some of the challenges facing religion today. Those challenges are serious. But that doesn’t mean that atheism has the upper hand. On the contrary, as I’ve argued many times before, atheism in its currently fashionable form is an intellectual sham. As Exhibit 653, I give you Jerry Coyne’s latest diatribe in The New Republic, which amounts to little more than an inadvertent confession that he’s incapable of following a philosophical argument.

My “diatribe” was a critique of David Bentley Hart’s new book, which Linker has promoted furiously as the kind of stuff we New Atheists need to deal with because its Srs Bsns. But if I instantiated intellectual sham, Linker does it in spades, for his piece simply makes a God-of-the-gaps argument for human altruism. This, says Linker, is something that atheism simply can’t explain:

Atheism shouldn’t be wholly identified with the confusions of its weakest exponents any more than we should reduce religious belief to the fulminations of fundamentalists. Yet when it comes to certain issues, the quality of the arguments doesn’t much matter. The fact is that there are specific human experiences that atheism in any form simply cannot explain or account for. One of those experiences is radical sacrifice — and the feelings it elicits in us.

Think of a soldier who throws herself on a live grenade to save her comrades. Or a firefighter who enters a blaze to rescue a child knowing that he will likely perish in the effort.

Or consider Thomas S. Vander Woude, the subject of an unforgettable 2011 article by the journalist Jeffrey Goldberg. One day in September 2008, Vander Woude’s 20-year-old son Josie, who has Down syndrome, fell through a broken septic tank cover in their yard. The tank was eight feet deep and filled with sewage. After trying and failing to rescue his son by pulling on his arm from above, Vander Woude jumped into the tank, held his breath, dove under the surface of the waste, and hoisted his son onto his shoulders. Josie was rescued a few minutes later. By then his 66-year-old father was dead.

This is something that any father, atheist or believer, might do for his son. But only the believer can make sense of the deed.

First error: it’s not atheism that has to explain or account for altruism, altruistic feelings, or our approbation of altruism. It’s science that must do that—and sociology (which, properly conducted, is a form of science). For atheism is simply the denial of belief in gods. It doesn’t have to explain anything about nature; it merely denies that there’s convincing evidence for the divine. Since human morality is surely a joint product of evolution and acculturation, those disciplines are where we should look for clarity.

And, of course, altruism is not a complete mystery to scientists. “True” altruism, in which animals sacrifice their lives (or rather, their reproductive fitness) to help unrelated members of the same species, is vanishingly rare among animals. (Don’t mention vampire bats regurgitating blood to others’ offspring, for that result has not been replicated, and is questionable.) And that’s exactly what you expect under Darwinian individual selection, for no animal could be selected to sacrifice itself without getting some reproductive payback. (The rarity of “true” altruism in nature, by the way, also argues against its production by group selection, for group selection can supposedly overcome the disadvantages of individual altruism if such acts are beneficial for the persistence of the group. But that apparently hasn’t happened, for we see almost no true altruism in nature. In fact, I know of no such cases. In contrast, the way altruism and cooperation play out in human society strongly implicates individual rather than group selection.)

Kin selection is not “true” altruism, for the sacrificing individual gains genetically by saving copies of the gene that promotes sacrificial behavior. If your expected genetic benefit (discounted by the degree of relatedness to those you’re saving) exceeds the genetic cost, then the behavior will evolve. In other words, you’d be willing to voluntarily and certainly sacrifice your life to save more than two children—each of whom shares half your genes. And if your chance of dying (or loss of reproduction) is less than certain, then you’d try to save even one child. This, of course, is the rationale for why parents care more about their own kids than other people’s. And it’s a good explanation for why Thomas Vander Woude would try to save his child. He didn’t know that he would die; he simply had the impulse to try to save his child—something that’s certainly built into us by natural selection.
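For readers who want the arithmetic spelled out, the logic above is just Hamilton’s rule, given here as the standard textbook simplification (costs and benefits in units of reproductive value; the numbers come from the reasoning above, not from any particular study):

\[
rB > C
\]

where \(r\) is your relatedness to the beneficiaries, \(B\) the reproductive benefit to them, and \(C\) the reproductive cost to you. Certain death costs your whole reproductive value (\(C = 1\)), and each child carries \(r = \tfrac{1}{2}\), so saving \(n\) children pays off only when

\[
\tfrac{1}{2}\,n > 1, \quad \text{i.e.,} \quad n > 2,
\]

which is exactly the “more than two children” figure above.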

There are also cases of reciprocal altruism, in which you’ll sacrifice a certain amount because you expect reciprocity from those you help. You might, for instance, share food with others if you have a surfeit, knowing that they’ll remember and reciprocate when it’s your turn to go hungry. That kind of altruism can be shown to evolve in small groups in which individuals recognize and remember each other—precisely the situation that obtained over millions of years of human evolution. So surely some of our altruistic feelings come from evolution acting on individuals in the small groups of our ancestors.
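To see why such reciprocity pays, here is a toy simulation: a minimal sketch in Python, with invented strategy names and payoff numbers, of the standard iterated prisoner’s-dilemma logic (an illustration of the principle, not a model from any study cited here):

# Minimal sketch: why reciprocal food-sharing can beat hoarding when
# the same individuals meet repeatedly and remember each other.
# Strategy names and payoff numbers are invented for illustration.

ROUNDS = 100

# Payoff to (me, partner) for (my_move, partner_move): sharing costs a
# little now but is worth a lot to a hungry partner.
PAYOFF = {
    ("share", "share"): (3, 3),   # mutual insurance against hunger
    ("share", "hoard"): (0, 5),   # I am exploited
    ("hoard", "share"): (5, 0),   # I exploit
    ("hoard", "hoard"): (1, 1),   # nobody helps anybody
}

def reciprocator(history):
    """Share on the first meeting; afterwards copy the partner's last move."""
    return "share" if not history else history[-1]

def hoarder(history):
    """Never share, no matter what."""
    return "hoard"

def play(strategy_a, strategy_b, rounds=ROUNDS):
    """Total payoffs when two strategies meet repeatedly and remember."""
    hist_a, hist_b = [], []          # each side's memory of the OTHER's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_b)        # A remembers what B just did
        hist_b.append(move_a)
    return score_a, score_b

print("reciprocator vs reciprocator:", play(reciprocator, reciprocator))  # (300, 300)
print("reciprocator vs hoarder:     ", play(reciprocator, hoarder))       # (99, 104)
print("hoarder vs hoarder:          ", play(hoarder, hoarder))            # (100, 100)

Paired reciprocators end up three times better off than paired hoarders; the hoarder’s one-time exploitation of a sharer buys almost nothing once the victim remembers it.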

But those instinctive and evolved feelings can also be hijacked, for they rest on certain cues that can be mimicked by other situations. Soldiers, for instance, form bonds with their platoons: it’s not for nothing that they call each other “brother” (e.g., “Band of Brothers”). In such cases your feeling of solidarity may piggyback on your evolved feelings for either kin or groupmates, and cause you to, say, fall on a grenade, or take horrific chances in wartime to save your “brothers.” Remember, the cue for helping is likely to be familiarity with others, not explicit recognition of a genetic relationship.

Remember the video I showed a few weeks ago of a mother cat suckling a brood of ducklings? Explain that one, atheists! But of course we can: the ducklings happened to be around when the cat, infused with motherly hormones by her own impending litter, was willing to take care of anything. Does that constitute proof of God for Linker? Is it The Argument from Suckling Ducklings? I suppose that the frequent phenomenon of human adoption, something that’s deeply altruistic yet evolutionarily maladaptive, also constitutes evidence for God!

We hijack evolutionary feelings in a maladaptive way all the time. When you don a condom before sex, you are deliberately doing what evolution doesn’t “want” you to do: sacrificing your reproduction. But you’re doing that because you like the cue that evolution has given us to reproduce: the pleasure of the orgasm and the sheer wonderfulness of sex. We don’t impute condoms to God; we impute them to the fact that we’ve evolved to be wily enough to overcome our evolved tendencies: to get the sizzle without the steak.

Finally, as Peter Singer and Steve Pinker have noted, morality can be—and certainly is—culturally inculcated. As we become more and more familiar with other cultures, their inhabitants become more “brotherlike”: we see that we stand in no special moral position with respect to them, and so will help them, especially when it doesn’t cost much. (Really, how much of our reproduction do we sacrifice by giving $100 to Doctors Without Borders?) And the feeling of satisfaction accompanying that help can also be explained either by evolution—reciprocal altruism could depend on a cue of approving of sacrificial acts—or by culture (we’ve learned that people behave better when they are rewarded for sacrifice, and that depends on the approbation of people who see that altruism). In fact, people are more likely to be altruistic when other people are around to see it; “free-riding” (benefitting from others’ sacrifices without paying back) is more common when you can do it undetected.

Linker shows his abysmal ignorance of all this when briefly considering, and then dismissing, the alternative explanations:

Other atheistic theories similarly deny the possibility of genuine altruism, reject the possibility of free will, or else, like some forms of evolutionary psychology, posit that when people sacrifice themselves for others (especially, as in the Vander Woude case, for their offspring) they do so in order to strengthen kinship ties, and in so doing maximize the spread of their genes throughout the gene pool.

But of course, as someone with Down syndrome, Vander Woude’s son is probably sterile and possesses defective genes that, judged from a purely evolutionary standpoint, deserve to die off anyway. So Vander Woude’s sacrifice of himself seems to make him, once again, a fool.

Things are no better in less extreme cases. If Josie were a genius, his father’s sacrifice might be partially explicable in evolutionary terms — as an act designed to ensure that his own and his son’s genes survive and live on beyond them both. But the egoistic explanation would drain the act of its nobility, which is precisely what needs to be explained.

We feel moved by Vander Woude’s sacrifice precisely because it seems selfless — the antithesis of evolutionary self-interestedness.

Oh, my dear Mr. Linker, we save our children based on inborn impulses that just say “save your kids”. Those impulses don’t include a brain module that says “but first make sure your kid isn’t sterile, and it would help if he were a genius.” In the same way, putting on a condom doesn’t eliminate the possibility of having an orgasm. It’s the cue that’s important—whatever cue evolved over six million years to guarantee an evolutionarily beneficial result. And over those six million years, the chances that a child would one day be fertile were very high.

And yes, we feel moved by that sacrifice, but, as I’ve said, the emotions of approbation for sacrifice can also be explained in both evolutionary and cultural terms. Culture, by the way, is surely an important source of moral feelings. As developmental psychologist Paul Bloom explains in his recent book, Just Babies: The Origins of Good and Evil (recommended), babies start off being pretty selfish towards strangers and then must be taught to help others. As I wrote about Bloom’s views when I reviewed his book:

The empathy that seems inherent in “human nature” is directed only towards those the infants are familiar with, like family. It is not directed at strangers. In fact, infants are spiteful little things, and do not like even equality with strangers. They will, for example, prefer to have one cookie while another infant nearby gets none, over the alternative where both infants get two cookies. In other words, infants sacrifice their own well-being just to affirm their superiority in the acquisition of goods.  Several other studies show the same thing.  Infants are empathic but not altruistic.

Bloom argues, then, that the altruism comes from education, an argument also made by Peter Singer in his superb book The Expanding Circle. I quote Bloom:

“And so there is no support for the view that a transcendent moral kindness is part of our nature. Now, I don’t doubt that many adults, in the here and now, are capable of agape.

. . . When you bring together these observations about adults with the findings from babies and young children, the conclusion is clear: We have an enhanced morality but it is the product of culture, not biology. Indeed, there might be little difference in the moral life of a human baby and a chimpanzee; we are creatures of Charles Darwin, not C.S. Lewis.”

Of course Linker has his alternative theory: altruism comes from God, and it’s instilled in us divinely by the Christian God. I am not making up this conclusion from his piece:

What is it about the story of a man who willingly embraces a revolting, horrifying death in order to save his son that moves us to tears? Why does it seem somehow, like a beautiful painting or piece of music, a fleeting glimpse of perfection in an imperfect world?

I’d say that only theism offers an adequate explanation — and that Christianity might do the best job of all.

Christianity teaches that the creator of the universe became incarnate as a human being, taught humanity (through carefully constructed lessons and examples of his own behavior) how to become like God, and then allowed himself to be unjustly tried, convicted, punished, and killed in the most painful and humiliating manner possible — all as an act of gratuitous love for the very people who did the deed.

Why does Vander Woude’s act of sacrifice move us? Maybe because in freely dying for his son, he gives us a fleeting glimpse of the love that moves the sun and the other stars.

Which is to say, he gives us a fleeting glimpse of God.

That might sound outlandish to atheists. But for my money, it comes closer to the truth, and does more to explain the otherwise irreducibly mysterious experience of noble sacrifice than any competing account.

Don’t buy it? I dare you to come up with something better.

I just did in the post above, Mr. Linker. And your theory doesn’t explain altruism in non-Christians, does it?

To close, I’ll simply repeat the words of Linker’s hero, David Bentley Hart:

If my salad at lunch were suddenly to deliver itself of such an opinion, my only thought would be “What a very stupid salad.” 

[Image: “Linker nightjar”]

Must you be religious to be moral?: A worldwide survey, and its lesson

March 19, 2014 • 8:25 am

A post by C. J. Werleman at Alternet called my attention to a new study by the Pew Research “Global Attitudes Project” that polls people on the perennial (and already answered) question, “Do you need God to be moral?” Pew’s respondents, in general, answer “yes,” but that answer is far more common in poorer than in richer countries. Here are Pew’s data broken down by country:

[Pew chart: percentage in each country saying belief in God is essential to morality.]

The survey involved 40,080 people.

As you see, the wealthier countries of Europe and Asia have a fairly high proportion of people who don’t think it’s necessary to believe in God to be moral, while sub-Saharan Africa and the Middle East (with the exception of Israel) show a much higher belief that goodness requires godliness. Much of Latin America is also in line with that view.

Note that the U.S. is higher than any surveyed European country in its view that you need God to be moral (53%), while our more sensible Canadian friends are much lower (31%).

Pew also published an interesting plot (divided by country) of the proportion of people who think belief in God is necessary for morality versus the wealth of that country (expressed as per capita GDP). As you see below, the correlation is strong, and undoubtedly highly significant. There are two outliers, though; as Pew notes:

Two countries, however, stand out as clear exceptions to this pattern: the U.S. and China. Americans are much more likely than their economic counterparts to say belief in God is essential to morality, while the Chinese are much less likely to do so.

[Pew scatterplot: percentage saying belief in God is essential to morality versus per capita GDP, by country.]

What is curious here is that the report leaves out any mention of the correlation between religiosity and the belief that goodness requires Godliness.  For it is certain that there is another factor involved in the relationship shown above: belief in God. Those sub-Saharan African countries, and those in the Middle East, are the most religious countries in the world. The U.S. is the most religious of First World countries, and China, because of its Communist past and general lack of goddy religions, is notably nonreligious. Greece and Poland are more religious than Britain or France, and Canada is less religious than the U.S.

In other words, if you plotted religiosity of these nations versus the goodness-requires-God quotient, you’d get the same kind of relationship, but with a positive correlation.  That’s a no-brainer, because clearly countries that are more religious will have inhabitants that see religiosity as more critical to morality.
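For concreteness, here is a sketch of the calculation I have in mind. The numbers below are placeholders invented for illustration, not Pew’s figures, so only the direction of the computed correlation matters:

# Hypothetical sketch: correlating national religiosity with the
# percentage saying belief in God is necessary for morality.
# The values are invented placeholders, NOT Pew's data; with the real
# numbers you would expect a strong positive correlation, the mirror
# image of the negative GDP relationship.
from scipy.stats import pearsonr

religiosity         = [20, 35, 50, 65, 80, 90]   # % calling religion very important (placeholders)
god_needed_for_good = [15, 30, 55, 60, 85, 95]   # % saying God is needed to be moral (placeholders)

r, p = pearsonr(religiosity, god_needed_for_good)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")       # strongly positive for these values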

Curiously, though, that obvious fact isn’t mentioned, and neither is the finding (from other studies) that religiosity is negatively correlated with average income, and especially with indicators of social dysfunction like income inequality, lack of government health care, and so on. I’m willing to bet that if sociological indices of a country’s well-being were applied to sub-Saharan Africa and the Middle East, one would find many of those countries to be deeply dysfunctional.

Pew gives one other finding:

There are also significant divides within some countries based on age and education, particularly in Europe and North America. In general, individuals age 50 or older and those without a college education are more likely to link morality to religion. For example, in Greece, 62% of older adults say it is necessary to believe in God to be a moral person, while just 29% of 18- to 29-year-olds agree. In the U.S., a majority of individuals without a college degree (59%) say faith is essential to be an upright person, while fewer than four-in-ten college graduates say the same (37%).

And, of course, older Americans (I don’t know about Europeans) are more religious than younger ones, while more educated individuals in the U.S. are less likely to be religious.

What all these data show, then, is that the more religious one is, the more likely one is to believe that having faith is necessary for morality. I don’t know why Pew concentrates solely on average income, age, and education, ignoring a factor clearly involved in these relationships—religiosity.

At any rate, the question about whether one needs God to be moral has already been answered in two ways: philosophically, by Plato’s Euthyphro argument, and empirically, by observing that countries that are largely godless, like those of northern Europe, are just as moral as—if not more so than—places like the U.S. or the Middle East. Further, as the West has become less religious, it has become, as Steve Pinker argues persuasively, more moral.

I’d like to add some data mentioned by Werleman that confirm my suspicion that what breeds religiosity is social dysfunction. Along with some sociologists, I think that those who can’t get help or security from their government, or from a national ethos that citizens should be taken care of, may turn to God for solace and hope. In that sense, Marx was right to indict religion as the “sigh of the oppressed creature.” But I fulminate; let me instead quote Werleman, who cites data supporting the negative relationship between religiosity and social well-being:

Staying with the U.S., this correlation between a high rate of poverty and high degree of religiosity is supported by a 2009 Pew Forum “Importance of Religion” study that determined the degree of religious fervor in all 50 states. The study measured a number of variables including frequency of prayer, absolute belief in God, and so forth. Led by Mississippi, Alabama and Arkansas, nine of the top 10 most religious states were southern. Oklahoma ruined the South’s clean sweep by sneaking in at number seven.

Not coincidentally, led again by Mississippi, Alabama and Arkansas, nine of the top 10 poorest states are also found in the South, while northern and pacific states such as Wisconsin, Washington, California, New York, New Hampshire, and Vermont are among the least religious and the most economically prosperous.

Well spoken! Werleman concludes:

In an earlier piece, I wrote that the primary reason for abject child poverty in these Southern states is that more than a third of children have parents who lack secure employment, decent wages and healthcare. But thanks to religion, these poor saps vote for the party that rejects Medicaid expansion, opposes early education expansion, legislates larger cuts to education, and slashes food stamps to make room for oil and agriculture subsidies on top of tax cuts and loopholes for corporations and the wealthy. Essentially, the Republican Party has convinced tens of millions of Southerners that a vote for a public display of the Ten Commandments is more important to a Christian’s needs than a vote against cuts in education spending, food stamp reductions, the elimination of school lunches and the abolition of healthcare programs.

. . . While the Republican Party retains its monolithic hold on the South, the rest of America remains deprived of universal healthcare, electric cars, sensible gun control laws, carbon emission bans, a progressive tax structure that underpins massive public investment, and collective bargaining laws that would compress the income inequality gap. In other words, without the South’s religiosity, “America” would again look like a developed, secular country, a country where it’s probable for an atheist to be elected into public office, and where the other 50 million law-abiding atheists wouldn’t be looked upon as rapists, thieves and murderers.

He’s almost calling for secession!

While I see no necessary connection between atheism and belief in social reform—the kind of reform that makes people more economically and socially secure, and provides government-sponsored healthcare—it’s starting to seem clear that if we want to eliminate religion’s hold on the world, we have to eliminate those conditions that breed religion. In that sense, Marx was right (and now wait for the Discovery Institute to start calling me a Marxist!).

This view, which is mine, differs from that of the so-called “social justice warriors,” who see a necessary philosophical connection between atheism and “social justice”.  I don’t agree—atheism is simply a lack of belief in gods, and has no necessary connection with any social view. The connection I see is a tactical and practical one: if, as atheists, we’re interested not only in our own convictions, but in convincing others to believe (or, in this case, disbelieve) likewise, then we must deal with the factors that promote religious belief. If those include social dysfunction, as I think they do, then eliminating faith will require restructuring society.

Lack of government healthcare and income inequality are good places to start.

Paul Bloom claims that we’re not biochemical puppets because we can reason. He’s wrong.

February 24, 2014 • 7:49 am

Paul Bloom is a noted psychologist at Yale, specializing in morality and its development in young children (see an earlier post on that topic here).

Now, in the new issue of The Atlantic, Bloom has published a longish piece, “The war on reason,” that describes a purported war on rationality incited by the findings of neuroscientists, determinists, and people like Sam Harris—findings that we are “biochemical puppets.” While Bloom’s piece is well reasoned and well written, I think it comes off as a veiled attack on incompatibilism, or at least as a defense of compatibilism, in which “free will” is replaced by the word “reason.” And I think he’s off the mark when he says that the rationality of humans somehow exempts us from being “biochemical puppets.”

I say it’s a “purported” war because I don’t think that we hard determinists have any problem with rationality, or with people using reason before they perform an action.  I see reason and rationality as tools installed in us (and our ancestors) by natural selection: a computer program, if you will, whereby input information is weighted differently depending on how reliable it is, or whether it’s empirical versus revelatory.  And rational behavior is reinforced by being emphasized by everyone (except some churches) as a virtue. Humans, of course, aren’t the only animals that can reason.  Surely many primates can, as well as dogs, cats, and even those birds who, when they cache food, will dig it up and cache it elsewhere if they see another bird watching. (The latter involves a “theory of mind” which, in humans, would be taken as evidence for “rationality”.)

It’s obvious why natural selection would favor brain patterns that would evaluate evidence rationally, for if you have good reasons for what you do, you’re more likely to survive and reproduce. That is why, for example, our ancestors used empirical evidence and reason when hunting or finding food, or evaluating the mindset of their clan members.  (You don’t look for wildebeest where there is no grass.)

But rationality is not something we “choose” to exercise (I’m using “choose” in a libertarian sense here). Rather, it is something that most people are conditioned to use when evaluating evidence. And some do not, for they are swayed by emotion, mental illness (brain disorders that we still fail to understand), abuse or other prior mistreatment, a childhood spent in bad environments, and so on. And none of us (not even Professor Ceiling Cat) is a completely rational being. Love, for instance, is a largely irrational emotion, often driven by factors beyond our current ability to reason. Most of us are largely rational but also show a good dollop of irrationality based on our backgrounds and genes. And some people are less rational than others. But, at any rate, rationality is simply the brain’s adaptive computer program that, before providing an output, weighs the inputs according to their probative value. The use of rationality is something over which we have no personal control. Why on earth should it be seen as being more exempt from determinism, or more conducive to culpability, than even full-blown irrationality?
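To make that “computer program” metaphor concrete, here is one minimal sketch of what weighting inputs by probative value could look like: simple Bayesian updating in which each piece of evidence is discounted by the trustworthiness of its source. The scenario, the likelihood ratios, and the reliability weights are all invented for illustration:

# Minimal sketch of "weighing inputs by probative value": Bayesian
# updating in which each piece of evidence is discounted by how much
# its source can be trusted. All numbers are invented for illustration.
import math

def update_log_odds(log_odds, likelihood_ratio, reliability):
    """Shift belief by the strength of the evidence, scaled by trust in
    the source (reliability in [0, 1]; 0 means ignore it entirely)."""
    return log_odds + reliability * math.log(likelihood_ratio)

# Hypothesis: "there are wildebeest beyond the ridge."
log_odds = math.log(0.5 / 0.5)   # start undecided: even odds

# (likelihood ratio of the observation, trust in its source)
evidence = [
    (4.0, 0.9),   # fresh tracks you saw yourself: strong and trusted
    (3.0, 0.6),   # a clanmate's report: decent, somewhat trusted
    (5.0, 0.1),   # a shaman's vision: a "strong" claim, barely weighted
]

for lr, reliability in evidence:
    log_odds = update_log_odds(log_odds, lr, reliability)

probability = 1 / (1 + math.exp(-log_odds))
print(f"P(wildebeest beyond the ridge) = {probability:.2f}")  # ~0.89: empirical inputs dominate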

In other words, to say that we can reason says nothing about whether our “decisions” and actions are “free” or different in principle from the actions taken by those who are irrational or have a mental illness that impairs the input-output system of their brain.  The fact that we use reason says nothing about whether those who can reason, but nevertheless do bad things, deserve more punishment than those who can’t reason.  Both groups show equal moral responsibility for their actions—that is, none. 

Certainly we should treat those malefactors who are mentally ill, irrational, or incapable of persuasion differently from those who can be persuaded to reform via rational argument. No determinist says otherwise. But that rehabilitation and punishment must be determined by three things: a.) an offender’s amenability to rehabilitation, and the best means of achieving it; b.) the likelihood of recidivism (pedophiles, for instance, are more likely to relapse than are other criminals); and c.) the deterrent effect of punishment on others. And of course it can be useful to persuade people to be rational, for it’s possible to reprogram someone’s brain by that form of environmental input. (It’s a common misconception that determinists don’t believe that their behavior can be changed by others.) But I see no rationale for claiming that rationality somehow makes me less of a biochemical puppet.

Bloom feels otherwise. He does agree, though, that we are largely “biochemical puppets,” but somehow exempts reason from that moniker. And therefore he sees neuroscientists and incompatibilist philosophers as engaging in a “war on reason.” Frankly, I’m baffled. The article almost sounds as if it were written to reassure those who are discomfited by determinism (and the latest findings in brain science) that we can safely retain our notions of free will and moral responsibility.

First, Bloom’s admission of determinism:

We are soft machines—amazing machines, but machines nonetheless. Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.

. . . For the most part, I’m on the side of the neuroscientists and social psychologists—no surprise, given that I’m a psychologist myself. Work in fields such as computational cognitive science, behavioral genetics, and social neuroscience has yielded great insights about human nature. I do worry, though, that many of my colleagues have radically overstated the implications of their findings. The genetic you and the neural you aren’t alternatives to the conscious you. They are its foundations. [JAC: who said otherwise?]

So where is this attack on reason coming from? According to Bloom, from the neuroscientists and psychologists who find that people can be influenced by unconscious factors—or that their decisions are made by their brains before they’re conscious of them. One example of the former is the famous experiment showing that people who find a dime in a phone booth are more likely to act charitably than those who don’t. There are innumerable similar studies showing how people’s behavior can be unconsciously manipulated. Bloom agrees, but says that this is not a strong criticism of rationality because those unconscious determinants don’t completely dominate our behavior—they merely influence it.

My response to this is: so what? Nobody claims otherwise. Except for strong manipulations like drugs or electrical stimulation of the brain, we can rarely completely efface rational thought and action.  Humans are a combination of programmed rational behavior—programmed by our genes and past environments—and behaviors that don’t always follow the dictates of reason (also caused by genes and whatever environments we experienced). Someone may, for instance, have been severely mauled by a dog when young, and although normally rational, he continues to hurl rocks at dogs whenever he sees them. We all know scientists like Ken Miller and Francis Collins who are perfectly rational in their working lives, but throw that all out the window on Sundays.

Bloom further notes that we should draw a distinction between people who have been “horrifically abused as a child,” those who “are psychopaths who appear incapable of empathy,” and “the cold-blooded planning of a Mafia hit man.” He sees the last person as having more moral responsibility for his actions. But moral responsibility is, to many of us (including me), bound up with our idea of “freedom to choose otherwise in a fixed situation,” and nobody—including Bloom—thinks we have that. In fact, although Bloom throws about the term “moral responsibility,” he fails to distinguish it from “agent responsibility.” Yes, a psychopath is responsible for what he did, and should be punished, but he had no more choice in his actions than did a Mafia hit man. They are both biochemical puppets. Later in the piece, Bloom implies that you are somehow more culpable if you could have exercised “self control” over your actions (he says that such self control is “the embodiment of rationality”), but self control, too, is something we don’t choose to exercise or not. We simply have or do not have it depending on our genes and environments. It’s simply not true that anyone can choose to stop chain-smoking.

This bit, I think, sums up Bloom’s dilemma:

You have reasons for that choice, and you can decide to stop reading if you want. If you should be doing something else right now—picking up a child at school, say, or standing watch at a security post—your decision to continue reading is something you are morally responsible for.

The idea of “choosing” to stop (or choosing anything at all), they suggest, implies a mystical capacity to transcend the physical world. Many people think about choice in terms of this mystical capacity, and I agree with the determinists that they’re wrong. But instead of giving up on the notion of choice, we can clarify it. The deterministic nature of the universe is fully compatible with the existence of conscious deliberation and rational thought—with neural systems that analyze different options, construct logical chains of argument, reason through examples and analogies, and respond to the anticipated consequences of actions, including moral consequences. These processes are at the core of what it means to say that people make choices, and in this regard, the notion that we are responsible for our fates remains intact.

I am guessing that Bloom’s agenda is in the third sentence of the second paragraph: “But instead of giving up on the notion of choice, we can clarify it.” He wants to let people know that by some redefinition, they can retain their beloved idea of choice.  By all means we should avoid discomfiting the public with the scientific truth. By some judicious re-jiggering of how we use words, we can let them have their determinism and moral responsibility, too.

Bloom is right that “choice” is really deterministic: we could not have chosen otherwise. Where he goes wrong is thinking that somehow rational deliberation is what people really mean when they say they make choices, and that such rationality is the ultimate touchstone of moral responsibility. (By the way, why on earth would Bloom think that a choice to continue reading his article is a “moral” choice? Even if you believe in moral responsibility, which I don’t, not all choices are “moral” ones. And surely to continue reading has nothing to do with morality, however you conceive it.)

But who is Bloom to tell us what people really think when they say we “make choices” or are “morally responsible” for our choices? We’ve seen in the past few days, reading papers by Nahmias et al. and Sarkissian et al., how complex this issue is, and how hard it is to gauge what people really think about determinism and moral responsibility. First of all, many people are true indeterminists, disagreeing with Bloom’s notion that the universe is deterministic (with some quantum indeterminacy thrown in—an addition that doesn’t give credence to anybody’s notion of “free choice” or “moral responsibility”). Second, some people agree that in such a universe people are not morally responsible for their actions. Curiously, still others hold that even in a universe that is completely determined, people “could have chosen otherwise” and are morally responsible for their actions.

It’s all a mess, probably because, as some commenters have noted, many people don’t think a lot about physics, determinism, and moral responsibility. What we do know, as scientists, is that determinism reigns (as Official Website Physicist™ Sean Carroll notes, the laws underlying the physics of everyday life are completely understood), and in that light we have to decide what we mean by “responsibility.” Bloom is silent on this issue, particularly when it comes to “moral responsibility.”

In the end, I don’t agree with Bloom that determinists, like those who show we can predict simple decisions before people are conscious of having made them, are waging a war on rationality. We aren’t. If there is a “war,” it’s on three other fronts.

First, there’s a war on whether determinism reigns. Bloom and I agree that that issue has been settled in favor of determinism, but many fellow humans would disagree. Those include the many religious believers who think that we can make libertarian, can-do-otherwise choices.

Second, there’s a war about what it means to be “morally responsible,” and how that differs from simply being “responsible” (to see how one can distinguish these, read Bruce Waller’s book Against Moral Responsibility). I don’t think that there is such a thing as moral responsibility, for if surveys say anything, they tend to show that moral responsibility goes hand in hand with the notion of true libertarian (“can do otherwise”) free will—something that we do not have. I fully agree that we must hold people responsible for their actions, for social good demands it, but we must realize that there is no essential difference between the culpability of those who are “rational” criminals and those who are “irrational” criminals. There is a difference, however, in how we should deal with such people.

These first two “wars” are important ones, for they have real implications about how we should run our society. While some disagree, and argue that giving up the ideas of indeterminism, free choice, or moral responsibility would still have no social implications, I think they’re wrong. They’re wrong because we already recognize that some people can’t freely choose to refrain from crime. Sending mentally ill criminals to prison hospitals instead of jail is one example. Imagine how things would differ if we realized, as we should, that no criminal had a choice about what he did.  I won’t dwell on how we’d change the criminal justice system, but, with Waller, I agree that we’d also concentrate far more on eliminating the environmental factors that promote criminality. (Chicago is already doing that by getting rid of large “projects” and trying to mix low-income people with higher-income ones.)

Finally, there is a semantic war among determinists: do we have “free will” or not? I myself engage in this discussion, but see it as a much less important “war” than the battle between determinists and indeterminists. That’s why I say that I’m baffled when philosophers spend their time confecting new and diverse reasons why we have “free will” in a deterministic world. That’s like theology: it’s an activity without a point. (Or rather, the point resembles the point of theology: to reassure people that they have something they don’t.)

Let me hasten to add again that I do believe in holding people responsible for their actions.  I also believe that rationality is a quality that we should aspire to and promote. If I didn’t, I wouldn’t spend a lot of time criticizing religion and its evidential basis—faith. What I don’t believe is that people can themselves “choose” to be rational in a libertarian sense. But we can promote the virtue of rationality, and even in a deterministic world such promotion can have positive effects.

And finally, we can’t freely choose to promote rationality. We do that because of our genes and environments: the infinite regress back to our ancestors. What a good thing that evolution and experience favor rationality!

Another paper on “folk intuitions” about free will: Nahmias et al.

February 21, 2014 • 7:38 am

To complement the paper of Sarkissian et al., which I wrote about the other day, I’ll present as briefly as I can the results of an earlier paper on beliefs about free will by Eddy Nahmias et al. (references to both papers are at bottom, free download on this one).

In contrast to the results of Sarkissian et al., Nahmias et al. conclude that the “average person” (in this case, students “drawn from an Honors student colloquium and several introductory philosophy classes at Florida State University”) were compatibilists about free will. In other words, given a hypothetical “deterministic” universe in which the future was completely determined by the laws of nature acting on the present situation, students still believed that in many concrete situations requiring “moral” judgement, individuals retained free will and moral responsibility for their actions.

Nahmias et al. posed three sets of questions to the students.

CASE 1

Students were given a deterministic scenario and asked two questions about it. Here’s the scenario:

[Screenshot: the vignette describing a supercomputer in a deterministic universe that predicts, before Jeremy is born, everything he will do, including robbing a bank.]

As with the Sarkissian et al. paper, this scenario ignores quantum indeterminacy (which, in the real world, almost certainly means that one cannot deduce the future state of the universe from the present one), but I don’t think that would affect the results, and at any rate it would be hard to explain to undergraduates the idea of pure indeterminacy.

The students were first asked if the scenario given above was possible. The majority of the students said “no” for various reasons (including quantum indeterminacy!), but also for reasons having nothing to do with determinism, like “the computer could never acquire that much information.” Like the students in the Sarkissian et al. study (the latter drawn from four countries), then, these students were not determinists.

Then the crucial question about free will:

Regardless of how you answered question 1, imagine such a supercomputer actually did exist and actually could predict the future, including Jeremy’s robbing the bank (and assume Jeremy does not know about the prediction):
Do you think that, when Jeremy robs the bank, he acts of his own free will?

76% of the students said “yes,” indicating a compatibilist view of free will. Given the deterministic scenario, it’s clear that either this is genuine compatibilist free will exercised in a deterministic universe, or else the students believed in libertarian free will despite the deterministic scenario! The latter would underscore an inability to comprehend true determinism.

To test whether the students accepted free will only because Jeremy did something bad, Nahmias et al. also asked them if Jeremy had free will in this deterministic universe if he instead b) went jogging (a “neutral” action) or c) saved a child from a burning building (a “praiseworthy” action). They were also asked if Jeremy had moral responsibility in the bank-robbing and child-saving situations.

In all cases the results were “yes”, with more than 60% of the students agreeing that Jeremy had both free will and moral responsibility. Here are the results given in bar charts:

[Bar chart: percentages of students judging that Jeremy acted of his own free will, and was morally responsible, in the blameworthy (bank robbery), neutral (jogging), and praiseworthy (child rescue) cases.]

As the authors note, as have other philosophers like Dan Dennett, judgements of moral responsibility are closely aligned with those of free will.

CASE 2

In this study, the authors wanted to see if the respondents thought that Jeremy could have acted otherwise in this situation. They call this the “ability to choose otherwise” (ACO), and this is what many see as a libertarian notion of free will. The authors describe the question:

In these cases, participants were asked—again, imagining the scenario were actual—whether or not Jeremy could have chosen not to rob the bank (case 6), whether he could have chosen not to save the child (case 7), or whether he could have chosen not to go jogging.

The bar graph gives the ACO (“could have chosen otherwise”) figures compared to those already given for judgements about whether Jeremy had free will:

[Bar chart: “could have chosen otherwise” (ACO) judgements alongside the free-will judgements for the bank-robbing, child-saving, and jogging cases.]

The authors summarize these data:

In the blameworthy variation, participants’ judgments of Jeremy’s ability to choose otherwise (ACO) did in fact track the judgments of free will and responsibility we collected, with 67% responding that Jeremy could have chosen not to rob the bank. However, in the praiseworthy case, judgments of ACO were significantly different from judgments of his free will and responsibility: Whereas a large majority of participants had judged that Jeremy is free and responsible for saving the child, a majority (62%) answered ‘‘no’’ to the question: ‘‘Do you think he could have chosen not to save the child?’’ Finally, in the morally neutral case, judgments of ACO were also significantly different from judgments of free will—again, whereas a large majority had judged that Jeremy goes jogging of his own free will, a majority (57%) answered ‘‘no’’ to the question: ‘‘Do you think he could have chosen not to go jogging?’’

I have two comments here.  I’m puzzled that despite the presentation of an explicitly deterministic scenario for human action, 67% of the students still concluded that Jeremy could have chosen not to rob the bank. While that could superficially be seen as compatibilism, it also seems to be a compatibilism based largely on an acceptance of libertarian free will, so that perhaps the students don’t understand the real conflict between libertarianism and determinism.

Second, the notion of “choosing otherwise” may mean different things in a praiseworthy versus a blameworthy situation. In the bank-robbing situation, it may mean that the students really did think Jeremy had a choice. In the “save-a-child” situation, it may mean that it would be unthinkable for Jeremy not to save the child, so “no choice” is a sign of moral duty, not freedom of will.

CASE 3

The authors proffered a third scenario because of the possibility that [they] “did not make the deterministic nature of the scenario salient enough to the participants.” (They were worried that the “supercomputer” example was not clear enough in mandating determinism.) They thus described a third scenario corresponding to determinism based on genes and environment:

Imagine there is a world where the beliefs and values of every person are caused completely by the combination of one’s genes and one’s environment. For instance, one day in this world, two identical twins, named Fred and Barney, are born to a mother who puts them up for adoption. Fred is adopted by the Jerksons and Barney is adopted by the Kindersons. In Fred’s case, his genes and his upbringing by the selfish Jerkson family have caused him to value money above all else and to believe it is OK to acquire money however you can. In Barney’s case, his (identical) genes and his upbringing by the kindly Kinderson family have caused him to value honesty above all else and to believe one should always respect others’ property. Both Fred and Barney are intelligent individuals who are capable of deliberating about what they do.

One day Fred and Barney each happen to find a wallet containing $1000 and the identification of the owner (neither man knows the owner). Each man is sure there is nobody else around. After deliberation, Fred Jerkson, because of his beliefs and values, keeps the money. After deliberation, Barney Kinderson, because of his beliefs and values, returns the wallet to its owner.

Given that, in this world, one’s genes and environment completely cause one’s beliefs and values, it is true that if Fred had been adopted by the Kindersons, he would have had the beliefs and values that would have caused him to return the wallet; and if Barney had been adopted by the Jerksons, he would have had the beliefs and values that would have caused him to keep the wallet.

The results were these: 76% of the participants judged that Barney returned the wallet and Fred kept it of their own free will. That result is similar to the figures from the Jeremy scenario. Further, 60% of the participants judged Fred blameworthy for keeping the wallet and 64% of the participants found Barney praiseworthy for returning it. These views were concordant 90% of the time. Finally, 76% of the participants judged that both Fred and Barney “could have done otherwise.”

Again, this evinces a superficial compatibilism, but I am a bit worried about the last result. Clearly, in a deterministic universe—one in which Fred and Barney’s actions were completely determined by their genes and environment—each could have made only one decision. Either the students do not understand what “could have done otherwise” means, or they have a very sophisticated notion, à la Dennett, of what it does mean: that at any given moment only one decision was possible in identical circumstances, but in slightly different circumstances a different decision would have been possible.

Although the authors note that the students’ replies indicate that they were compatibilists, I am worried that the students still don’t fully comprehend what determinism really means, something that I think philosophers need to clarify when asking such questions. I simply don’t think they’re sophisticated enough to reconcile “I could have behaved otherwise” with accepting a purely deterministic world. To the authors’ credit, though, they too worry about this.

I won’t summarize the authors’ discussion, but it’s a very good summary of the state of the art, with all the proper caveats and possible objections to their results. On the whole, I liked the paper.

I am of course a “hard incompatibilist”, but the subject of these papers was not to judge whether compatibilism or incompatibilism is the philosophically proper stance. Rather, Nahmias et al. and Sarkissian et al. had identical tasks: are most people compatibilists or incompatibilists? The former says “compatibilists”; the latter “incompatibilists.” How do we reconcile these conflicting results?

As Sarkissian et al. note, perhaps the students tend to be incompatibilists when presented with a scenario asking them to choose “world views”—the vast majority of students in their four-country samples were not determinists and did accept free will—while students tend to be compatibilists when presented, as did Nahmias et al., with more concrete moral dilemmas. This disparity deserves further exploration. But I also think that philosophers who are physical determinists (while accepting some quantum indeterminacy) need to work harder to convey that view to the public. After all, most secular philosophers dealing with this issue are determinists. The difficulty of limning determinism might be evinced in some of the counterintuitive results of the Nahmias et al. paper.

___________

Nahmias, E., S. Morris, T. Nadelhoffer, and J. Turner. 2005. Surveying freedom: Folk intuitions about free will and moral responsibility. Philosophical Psychology 18:561-584.

Sarkissian, H., A. Chatterjee, F. De Brigard, J. Knobe, S. Nichols, and S. Sirker. 2010. Is belief in free will a cultural universal? Mind &amp; Language 25:346-358.

Bizarre Mormon anti-masturbation video narrated by BYU President

February 3, 2014 • 12:50 pm

We all know the Catholic strictures about masturbation, and how you can suffer eternally for unconfessed onanism. What I didn’t realize is that the Mormons also regard “self abuse,” depicted in the video below as an implied consequence of watching online pornography, as something with dire consequences.

This video, narrated by Kim B. Clark, president of Brigham Young University (the world’s most famous Mormon college), depicts a college student watching internet porn as the equivalent of a soldier wounded in battle. And those who know and ignore his “addiction” are compared to soldiers who ignore that wounded comrade. The film urges those in the know to report the onanistic miscreant to their bishop or another authority figure.

As the film ends, the self-abuser, who has clearly been subject to that intervention, is now depicted as having a healthy attitude toward the opposite sex, while the tattle-tale looks on.

It’s just like religion to take a normal sexual outlet and make people see it as the equivalent of a grievous wound. Why do Mormons care about this?

This video was apparently removed (by Mormons?) after it was publicized and ridiculed, but Dusty Smith put up a mirror video, and then made his own video mocking it. (WARNING: Smith’s video uses pretty raw language, but it’s also passionate and pretty funny.)

h/t: Buzzfeed, Ginger K

Save a life in 3 minutes

February 1, 2014 • 1:21 pm

A reader who is taking Paul Bloom’s free online course “Moralities of everyday life” (it started Jan. 20) sent me this short video that Bloom uses in the course. It’s based on Peter Singer’s argument on why we’re obligated to help strangers, and I find it very convincing.

The link at the end goes to The Life You Can Save site, which recommends some good charities. I also recommend using Charity Navigator, an American site that rates charities based on their effectiveness, financial transparency, and the proportion of donations actually used to help people. I was pleased to see that Doctors Without Borders, the Official Website Charity™, gets the highest rating (4 stars), and devotes nearly 87% of its income to its medical program.

I’ve also used Charity Watch (formerly the American Institute of Philanthropy), which has a convenient page giving the top-ranking charities by area (international relief &amp; development, environmental protection, child protection, literacy, women’s rights, and so on). They give Doctors Without Borders an “A” rating, just a tad lower than the highest, A+.

h/t: Miss May