Australian Jehovah’s Witness (and her fetus) die after mother refuses blood transfusion

April 7, 2015 • 9:29 am

Now here’s a conundrum: at what stage of life does a fetus acquire the “right” to be free from having its health controlled by the religious beliefs of its parents? According to yesterday’s Sydney Morning Herald, a 28-year-old woman in Australia was discovered to have leukemia when she was seven months pregnant. She was also a Jehovah’s Witness, which meant that it was against her religion to accept blood transfusions. Because there were complications of the pregnancy (probably related to the leukemia), the doctors needed to perform a Caesarean section, but couldn’t, because without transfusions the mother would have bled to death.

The baby died, and the mother died shortly thereafter from a stroke—a common cause of death in untreated leukemia, as it was for the Canadian First Nations girl Makayla Sault. In all likelihood the doctors could have saved the baby, but doing so would have killed the mother—though she would have died from the disease anyway.

Seven months pregnant is beyond the time when the fetus is considered viable, and beyond the time abortion is legal in the U.S.

The question is this: did the doctors do the ethical thing by letting both die? Their action was legal, as they abided by the mother’s wishes, and couldn’t at any rate kill her (though a bill has been drawn up in Australia that would criminalize harming a fetus in utero). But should the doctors have forcibly transfused her, saved the baby via Caesarean section, and then let the mother have her wish and die from leukemia? What we have here is the equivalent of a parent choosing death, but in the process choosing to abort a 7-month-old fetus. In the U.S., I suspect, doctors would have done the same thing, for to save the baby they’d have had to kill the mother.

My own view is that nothing could be done, for I am one of those extremists who aren’t opposed to late-term abortions, and I don’t feel that the fetus had any “right” to live at the mother’s expense. Nor do I think that in such a case the mother should have been forced to have a transfusion, which would violate her religion—even though I think that those religious views are particularly stupid and harmful. She was, according to law, old enough to have “decided” what she did, though, as a determinist, I recognize that she, probably brainwashed as a child, had no more choice in the matter than did her fetus.

Still, the baby was not unwanted, and both it and the mother could have lived with a transfusion. Perhaps the mother would eventually have died of leukemia, for I don’t think one can survive it without transfusions, but I’m not sure. What bothers me immensely is that both deaths were completely needless, based as they were on the two Bible verses that Jehovah’s Witnesses use to justify refusing whole-blood transfusions. (Curiously, they will accept some components of blood, like serum.) It’s yet another example of two lives martyred for an ancient work of fiction.

As for the legal aspects, the paper quotes a bioethicist:

Sascha Callaghan, an expert in ethics and law at the University of Sydney, said the law as it stands allowed the mother to make decisions that would affect the fetus, even if it probably would have been able to survive outside her body.

“This isn’t to say it isn’t a tragic event … but we live in a society where, within reason, we let citizens be the authors of their own lives,” she said. “If you are going to grant women full rights as citizens, are you going to dilute those rights for women who are carrying fetuses?”

Dr Callaghan said Jehovah’s Witnesses were often unfairly criticised for their religious stance against blood transfusion despite it being a thoughtfully and strongly held belief.

“This woman had a long-held commitment to the Jehovah’s Witness faith and that’s how she chose to die. We are all entitled to die with dignity,” she said. “When your fetus is in utero, it is inextricably tied to your life.”

This gives unwarranted respect to a horrible, execrable belief. It is not “unfair” to criticize a belief based on Bronze Age mythology, even if it is “thoughtfully and strongly held.” First of all, it’s hardly “thoughtful”, as it’s based on two Bible verses about eating blood, verses that certainly admit of other interpretations. Jehovah’s Witnesses aren’t allowed to deliberate on the issue; they get it from indoctrination. Further, there are many “strongly held beliefs” that deserve no respect merely by virtue of the strength of adherence: the denial of rights to gays is one such view. So while I agree with Callaghan’s view that what the doctors did in this case was proper, I don’t afford an iota of respect to the religious views of a woman who decided to die and take her unborn child with her. Yes, by all means apprise her of the risk, and treat her with the dignity afforded all humans who choose to die, but do not insult rationality by saying that criticism of her choice is “unfair.” On the contrary, criticism of such choices is almost mandatory, for they are both irrational and murderous.

h/t: Pyers

Dennett tries to save free will, fails

March 19, 2015 • 9:13 am

I’ve long been puzzled by the many writings of “compatibilists”: those philosophers and laypeople who accept physical determinism of our choices and behaviors, but still maintain that we have a kind of “free will.” Such people reject the classical form of free will that’s been so important to many people (especially religious ones)—the kind of “libertarian” free will that posits that we really can freely control our actions, and in many cases could have chosen to behave other than how we did. This is the kind of free will that most people accept, as they don’t see the world as deterministic; and most also feel that if the world were deterministic, people would lose moral responsibility for their actions (see my post on the work of Sarkissian et al.).

Based on statements of some compatibilists, I realized that one reason philosophers spend so much time trying to define forms of free will compatible with determinism is that they see bad consequences in rejecting all free will. Some compatibilists think that if people realized that they don’t have the kind of free will they thought they did, the world would disintegrate: people would either lie in bed out of sheer languor and despair, or behave “immorally” because, after all, we can’t choose how to behave.

I’ve been rebuked sharply for imputing these motivations to compatibilists. Their efforts, I’m told, have nothing to do with trying to stave off possible bad results of rejecting free will. Rather, they’re supposedly engaged in a purely philosophical exercise: trying to show that we still have a form of free will that really matters, even if the libertarian form has been killed off by science.  I have, however, responded by pointing out statements by compatibilists like Dan Dennett warning about the bad things that could happen if neuroscientists tell us that we don’t have free will.

If you ever doubted that compatibilism is motivated largely by philosophers’ fears about what would happen if people rejected classical free will, and weren’t presented with a shiny new compatibilist form, watch this “Big Think” video by Dan Dennett. It’s called “Stop telling people they have free will”:

Supposedly aimed at promulgating a better concept of free will, Dan’s video in fact doesn’t do that at all. Rather, Dennett tries to show that those neuroscientists who tell people they don’t have free will are being “mischievous” and “irresponsible.” He devises a thought experiment that shows only one thing: if people don’t think they have free will, they start behaving badly, and could even commit crimes! They become “morally incompetent people.” His short talk is an exercise in consequentialism, not a philosophical recasting of free will.

Dennett even cites the Vohs and Schooler experiment purporting to show that people who read passages asserting that they have no free will tend to cheat more on subsequent puzzle-solving tests. (Note that those supposed effects were tested over a very short span—an hour or two—and say absolutely nothing about the long-term effects of rejecting classical free will.)

Dennett, however, fails to cite the work of Rolf Zwaan at Rotterdam, who failed to replicate the results of Vohs and Schooler while pointing out defects in their experimental design. (See my post on that here.) Zwaan found absolutely no effect of reading pro- and anti-free-will passages on the level of cheating in subsequent tests. His paper is being submitted for publication.

But even if people did behave worse when told that determinism reigns and libertarian free will doesn’t exist, so what? The truth is the truth, and if science shows us something like that, we simply have to deal with it. After all, science has found no evidence for God, either, and yet there are studies showing that belief in God similarly produces better short-term behavior on psychological tests. Do philosophers like Dennett then try to confect new definitions of God—“the kind of God worth wanting”? Maybe we should redefine God to comport with science: “God is the Cosmos.” No, of course they don’t do that. They’re atheists!

It is curious that Dennett has spent a lot of time attacking the concept of “belief in belief”: the idea that we should tolerate religious belief because, even if not based on truth, it still makes people behave better. Yet when the “belief” is in free will rather than God, then “belief in belief” becomes not only okay, but essential.

And that, I think, is why some compatibilists try to invent forms of free will to replace the libertarian version. They do it, I believe, because they can then tell people that they really do have free will, and so we’ll all continue to behave well and society will thrive.

But I don’t believe that people will either run amok or become vegetables if they become incompatibilists and realize that all our behaviors are determined (or perhaps slightly affected by quantum indeterminacy, which still does not constitute anybody’s idea of “free will”). I’m an incompatibilist, and since I became one neither I nor anyone else has noticed a change in my behavior. I haven’t started robbing banks or assaulting people, and I sure don’t lie abed in the morning!

Society will learn to live with determinism, as it has learned to live with death and the absence of God. And, as I always maintain, abandoning the idea of free will is actually good for society in several ways: it undermines religion, and it is a highly useful attitude when thinking about how to reform the criminal justice system.

*******

BTW, while we’re on free will, reader Jim E. sent a short (2-minute) animation about the famous Libet experiment, and pointed out that Professor Ceiling Cat makes a cameo appearance as a critic of free will. And I do! Look for me at 1:05 in the video below. Dennett is in there, too—as an experimental subject!

Brain-damaged man executed for murder—but all criminals are “brain damaged”

March 18, 2015 • 9:30 am

Last night the state of Missouri executed by lethal injection the convicted murderer Cecil Clayton. Clayton, however, was brain-damaged, and in a way that probably contributed to his crime. The situation is described by The Guardian:

The state of Missouri executed its oldest death row inmate on Tuesday – a man who was mentally impaired from a work accident that removed a large portion of his brain – after his final appeals failed at the US supreme court.

The execution of Cecil Clayton, 74, was delayed for several hours, while the supreme court weighed appeals from Clayton’s defense attorneys.

Lawyers acting for Clayton, 74, had called on the nation’s highest court to intervene and stay the execution. In a petition to the nine justices, they argued that it would be unconstitutional to execute the prisoner because under a series of rulings in recent years the supreme court has banned judicial killings of insane and intellectually disabled people.

Clayton lost about a fifth of his frontal lobe in 1972 when a splinter from a log he was working on in a sawmill in Purdy, Missouri, dislodged and slammed into his skull. The damage has had a long-term impact on his character and behavior, with a succession of medical experts chronicling problems ranging from uncontrolled rage to hallucinations and depression.

The frontal lobe has an important function in controlling impulse and emotion.

Here’s his crime, from MSNBC:

Then, in 1996, Clayton’s life changed forever – again – when he shot and killed a police officer. Before Barry County Sheriff’s Deputy Chris Castetter even got out of his vehicle at Clayton’s home – where the officer had gone to investigate a domestic dispute – Clayton fatally shot Castetter in the head, according to police.

Here’s Clayton (photo from the Guardian):


And here’s a scan of Clayton’s brain from MSNBC. The damage is obvious and severe:

A brain scan shows the missing portion of Cecil Clayton’s brain. Clayton suffered brain damage in a sawmill accident that required one-fifth of his frontal lobe to be removed. Courtesy of Attorneys for Cecil Clayton

Both the governor of Missouri and the Supreme Court rejected Clayton’s appeal, though admitting he had what they euphemistically called “adaptive deficits.” What they saw as more relevant was Clayton’s ability to understand why he was being killed. More from the Guardian:

“Mr. Clayton’s IQ, since his accident and subsequent deterioration, now falls within the range required for intellectual disability,” the defense wrote in its appeal to the high court. “And there is substantial evidence of adaptive deficits; Mr. Clayton, even in prison, cannot without assistance order canteen items or navigate the telephone system.”

Missouri said that medical exams had found Clayton understood why he was being executed and that meant he was competent to face the needle. They argued that Clayton’s intellectual deficits had to be present before he turned 18 to let him escape execution and that he waited too long to raise his claim.

“As one who has carried a badge most of my adult life, I share the outrage of every Missourian at the murder of law enforcement officer, Deputy Christopher Castetter,” Missouri Attorney General Chris Koster said in a statement following the execution. “Cecil Clayton tonight has paid the ultimate price for his terrible crime.”

This execution is a prime example of the tragic results that come from people’s failure to understand determinism and its consequences for justice, reward, and punishment. What happened to Clayton is a direct and unavoidable consequence of his background and genes, but also of the public’s erroneous notion that people have “free will”—that in many situations we (and Clayton) could freely have chosen to act other than the way we did. In fact, science tells us that Clayton had no such choice, whatever the prosecutors say. Our brains are computers made of meat, and run programs based on their wiring, which comes from the genes we inherited and the environments we experienced. There is no ghostly “we” that can override the output of those programs.

The MSNBC link was sent to me by Dr. Russell Jacobe, an anesthesiologist in Texas who should know something about the brain (name and email used with permission). Jacobe sent me his own analysis of the issue, and rather than reiterate what he said, which I fully agree with, I’ll just reproduce what he told me:

The above [MSNBC] link references an execution that has implications regarding free will. The prisoner, Cecil Clayton, suffered a traumatic brain injury and years later murdered a policeman. After the injury Mr. Clayton had gross abnormalities in the frontal cortex on MRI (missing a significant part of it), personality changes, problems with impulse control, and a decrease in cognitive ability. I do not believe he had the ability to choose his actions at the time he committed the crime. This seems an obvious case where continued medical treatment, not execution, would have been more humane. This patient/prisoner had macro changes to the brain that our current MRI technology can easily identify. What if in the future we can define abnormalities or brain damage at a finer level? There are probably many cases of brain damage in the ranks of murderers that we can’t pinpoint yet via scans or testing. I believe, as neuroscience and genetic testing improve, we will learn that most violent criminals have physical reasons for why they broke the law. We may learn that it is not their fault that their brain structures and pathways predispose them to violence just like it is not a diabetic patient’s fault that her blood sugar is high. Such advances would have profound repercussions for how we punish crime in this country.

I’ve enjoyed your work and website for years and look forward to the new book. Obviously, I had no choice but to send you this email.

But I would go further than Dr. Jacobe, adding something I’ve always believed: every criminal has “brain damage” in the sense that the constitution of his brain, as determined by his environmental history and genetics—in conjunction with the situation in which he found himself when he transgressed—left him no choice but to commit a crime that damages society. Nearly all philosophers agree with that kind of determinism. A criminal could not have done otherwise at the moment of his crime, just as we have no choice about whether to have a sandwich or a salad at lunch.

This determinism makes hash of the notion that we should judge or punish criminals based on whether they “knew right from wrong” or whether they “can understand why they are being executed”. Yes, some miscreants do know and understand those things, but, given that they couldn’t have acted otherwise, why is that relevant? It’s entirely possible to know that what you’re doing is wrong by society’s lights, and yet still be unable to resist doing wrong. Sociopaths are the most extreme example of this: some clearly understand that society judges their actions as wrong, but they themselves don’t feel that they’re wrong. But even criminals who sense that their own actions are “wrong” still have no choice in what they do. And their IQ is irrelevant, too. No matter how “smart” you are, your choices are just as constrained as anyone else’s. We’re all responsible for our missteps, in the sense that we made them and punishment of the miscreant may be warranted. But we’re not morally responsible, for that means that we could have freely chosen a better way.

What Clayton needed was not a lethal injection, but treatment. Yes, perhaps treatment couldn’t help someone with such a severe brain problem. In that case rehabilitation might be futile, but Clayton would still need to be jailed—both to protect society from his poor impulse control and to deter others, less obviously debilitated, from committing similar crimes. Biological determinism is still compatible with confinement for those purposes. Deterrence, rehabilitation, and sequestration are the reasons we determinists favor incarceration, whether it be in a jail or a hospital. (Deterrence is simply the action of an environmental circumstance—the observation of someone suffering for what you might contemplate doing—on your neurons.) But in all cases our goal should be the good of society and the possibility of changing the prisoner so he can re-enter society without endangering us all.

But Missouri’s goal in this case goes far beyond that: its goal was largely to punish Clayton for what he did. In other words, the motivation was retribution. This is clear from both the stated “outrage” of the attorney general and the notion that Clayton had to “pay a price” for his crimes—the loss of his life.  Clearly, both of these statements assume that Clayton could have behaved otherwise—could have refrained from the murder. Outrage is not a useful emotion toward someone whose crime probably stemmed from brain damage.

But it’s not a useful emotion to feel towards any crime, although such emotions, and the desire for retribution, may have evolved as a way to protect society from offenders. But rationality has taken us beyond these primitive feelings: we understand determinism, we understand that people’s actions are completely determined by factors over which they have no control, and we can put aside our childish emotions and adopt a truly humane approach to justice. When we realize that criminals never had a choice, we can then let science rather than knee-jerk reactions guide our actions. What punishment is the best deterrent for others? What are the chances that an offender, if released after a certain period, will re-offend? What is the best way to treat prisoners, “brain damaged” or otherwise, to cure them?

All this is, in principle, accessible through research, but little is being done. We’re still letting primitive emotions rather than reason guide our actions. When they slipped the needle into Clayton’s veins yesterday, it was an act not of reason, but of irrational and state-sponsored retribution. How can it possibly make sense to kill someone for something they could not help doing?

Is ISIS full of “true believers”?

March 8, 2015 • 3:51 pm

You’ve probably seen or heard about the discussion between Sam Harris and Graeme Wood over at Sam’s website, a discussion called “The true believers.” Wood, of course, has become famous—and notorious—for his analysis of ISIS’s theological background in a piece that appeared in The Atlantic (see my post for the link). Wood’s thesis, which he supported by interviewing ISIS supporters outside the Middle East (the man is no fool and didn’t want to be beheaded), was that ISIS represents an apocalyptic strain of Islam, justified by the Qur’an, that aims to establish an ever-expanding Caliphate and longs for a final battle with the West, during which Jesus will appear and save Islam.

Wood was taken to task for the usual things: neglecting “other motivations” for ISIS’s behavior, failing to interview members of ISIS in the Middle East, and offering “un-nuanced” interpretations of theology. By and large, he took as truth what his subjects told him, and when that largely revealed religious motivations, the “Islamophobia-decriers” had to find reasons to discredit him.

In their long discussion, Harris and Wood go over these motivations again, and I recommend that you read the piece. I’ll highlight just three things:

1. Motivations: religious or otherwise? There is a slight disparity between Harris and Wood here, with Harris taking the religious motivations espoused by ISIS sympathizers at face value, while Wood allows that some people might have other motivations that they don’t express, like resentment of Western colonialism. By and large, though, both men are on the same page; but Wood is a tad more cautious:

Wood: Yes. However, the countervailing current in social science is the tradition in ethnography and anthropology of taking seriously what people say. And this can lead to the exact opposite of the materialist, “root causes” approach. When Evans-Pritchard, for example, talks about witchcraft among the Azande, he’s describing exactly what they say and showing that it’s an internally consistent view of the world. This is something that anthropology has done quite well in the past, and it gives us a model for how we can listen to jihadis and understand them without immediately assuming that they are incapable of self-knowledge.

What I’m arguing for in the piece is not to discard either type of explanation but to remember the latter one and take the words of these ISIS people seriously. Even though at various points in the past we’ve ignored political or material causes, this doesn’t mean that ideology plays no role, or that we should ignore the plain meaning of words.

Of course, we don’t know what people actually think. Maybe they’re self-deluded; maybe they don’t really believe in the literal rewards of martyrdom. We can’t know; we’re not in their heads. But this lack of knowledge cuts both ways. Why do so many people instantly resort, with great confidence, to a material explanation—even or especially when the person himself rejects it?  It’s a very peculiar impulse to have, and I consider it a matter of dogma for many people who study jihadists.

Harris: Yes, especially in cases where a person meets none of the material conditions that are alleged to be the root causes of his behavior. We see jihadis coming from free societies all over the world. There are many examples of educated, affluent young men joining organizations like al-Qaeda and the Islamic State who lack any discernible material or political grievances. They simply feel a tribal connection to Muslims everywhere, merely because they share the same religious identity. We are seeing jihadis travel halfway around the world for the privilege of dying in battle who have nothing in common with the beleaguered people of Afghanistan, Syria, Iraq, or Somalia whose ranks they are joining, apart from a shared belief in the core doctrines of Islam.

. . . Again, the fact that most jihadis are generally rational, even psychologically normal, and merely in the grip of a dangerous belief system is, in my view, the most important point to get across. And it is amazing how resolutely people will ignore the evidence of this. Justin Bieber could convert to Islam tomorrow, spend a full hour on 60 Minutes confessing his hopes for martyrdom and his certainty of paradise, and then join the Islamic State—and Glenn Greenwald would still say his actions had nothing to do with the doctrine of Islam and everything to do with U.S. foreign policy.

I’m perfectly prepared to accept that some of these militants have motivations other than religion. Many may simply long for excitement, or to feel part of something larger than themselves. But what I’m not prepared to accept is that every one of them has nonreligious motivations. It’s curious to me—and this is the one thing I think I’ve contributed to Sam’s thinking—that Western apologists like Greenwald and Karen Armstrong will question people’s motivations when they explicitly say their motivations are religious, but will not question them when they say their motivations are based on economics or resentment of Western imperialism. This is the double standard of Western liberals that so infuriates me.

2. So why the double standard? I think both men agree, and I agree too, that holding ISIS to standards different from those to which we hold, say, the Israelis, reflects a kind of paternalism: a tendency to give a break to people considered oppressed.

Harris: Do you have other ideas about why it’s so tempting for liberals to ignore the link between jihadism and religious belief?

Wood: There’s also a deep urge to deny agency to the Islamic State, and I think it’s fundamentally connected to a reluctance to see non-Western people as fully developed and capable of having intelligent beliefs and enough self-knowledge to express them. These people articulate well-thought-out reasons for what they do. And yet ignoring what they say somehow gets camouflaged in the minds of liberals as speaking up for them. It’s delusional.

I think this is on the mark, though liberals are notably reluctant to admit it, for it expresses a kind of reverse racism that they themselves deplore. I consider myself a liberal, and am deeply distressed by the view that different groups should be held to different standards of behavior, with some groups excused or overlooked for performing barbaric acts.

3. The false notion of objective morality. Wood’s interviews with ISIS sympathizers convince me even more that there are no universal moral truths. Listen to what he says about some of his subjects:

Wood: Anjem Choudary is a fixture on Fox News. He talks to Sean Hannity, and many people would say that those two deserve each other. He’s known for screaming about the greatness and supremacy of shari’ah. But I had no interest in the screaming. Instead, I wanted details. We had a lucid, friendly exchange about what he believed a fully shari’ah-compliant caliphate would look like. I found him articulate, informed, and pleasant company in this regard. When I say “informed,” I mean he had answers to all my questions. They might not have been the right answers, but he was able to answer pretty much everything I could come up with about the Islamic State, about how it looks and why it’s so wonderful.

And he did this unflinchingly, even when he was endorsing what I would call rape or slavery—what even he would call slavery, in fact. This was not a tough call for him. If he has any compunction about these practices, it was completely undetectable. That was not true of some others I’ve interviewed who have literalist views of Islam. To be in the presence of someone who can say, in this modern day, that slavery is a good thing and that to deny its goodness is an act of apostasy was a very unsettling experience.

Most moral objectivists would say that slavery is objectively wrong. I say it’s “wrong” because a society that condones it is a dysfunctional society that promotes the subjugation and unnecessary suffering of individuals. But Choudary would say it’s fine, justifying it on Quranic grounds, or even on consequential grounds. How do you convince someone like him that he’s objectively wrong? Such people appeal to divine sanction, and although you can say that there is no god, and he should be appealing to something else, the fact is that many people hold religious dogma as the arbiter of morality.

I have seen attacks on the internet on my view that there are no objective moral truths, but I don’t find them convincing. Slavery is an example that most reasonable people would agree on, but there are other, harder issues, like abortion, that defy any objective “moral solution.” One must, at bottom, express some kind of preference, like one for “overall well-being,” that can be neither quantified nor objectively justified.


A philosopher asserts that there are “moral facts”, and we’re messing up our kids’ education by not telling them that

March 4, 2015 • 2:00 pm

One thing that disturbs me about naturalism is the increasingly frequent contention that there are objective moral “facts” or “truths,” which can somehow be discerned scientifically. I don’t agree with that, since at bottom I think that what one sees as “right” or “wrong” ultimately rests on a set of subjective preferences that can’t be adjudicated scientifically. This is the one major disagreement I have with Sam Harris and Michael Shermer, though I agree with Sam that being “more moral” generally corresponds to “providing more well-being.” Like Sam and Michael, I am a consequentialist: I judge actions as “right” or “wrong” based on their consequences for society. The problem is that even if you’re a consequentialist, how do you weigh conflicting consequences—when an action is good for some and bad for others, and in different ways? And others—deontologists, whose view some philosophers defend—see morality as resting on following rules rather than on a utilitarian toting-up of consequences.

My view is that there is no objective morality, though reasonable people will generally agree on what is moral. (However, “reason” tends to be bent when the morality is inspired by faith, for religious “morality” is often quite divergent from what most of us would see as our own morality.)  But how do you convince a devout Christian that it’s wrong to prevent gays from marrying, or a devout Muslim that it’s wrong to prevent girls from going to school?

Justin P. McBrayer, however, disagrees in Monday’s “Opinionator” column in the New York Times. His piece, “Why our children don’t think there are moral facts,” argues strongly that there are moral facts, and that our children have been grossly misled about them by their teachers. He adds that we’d best tell our kids that moral facts are objective lest the world degenerate into immorality.

McBrayer is described as “an associate professor of philosophy at Fort Lewis College in Durango, Colo., [who] works in ethics and philosophy of religion”, but I don’t know how much, if any, of his views about moral factitude come from faith. Regardless, I think he’s confused, and doesn’t make a good case for objective moral truths.

McBrayer first notes that kids are taught that there’s a difference between facts and opinions, which of course is true, but then muddies the waters with the following dialogue between himself and his son, meant to show the supposed lack of distinction.

Me: “I believe that George Washington was the first president. Is that a fact or an opinion?”

Him: “It’s a fact.”

Me: “But I believe it, and you said that what someone believes is an opinion.”

Him: “Yeah, but it’s true.”

Me: “So it’s both a fact and an opinion?”

Then McBrayer gives a list of things that, he says, most people consider opinions but that he clearly believes are “moral facts”:

Here’s a little test devised from questions available on fact vs. opinion worksheets online: are the following facts or opinions?

— Copying homework assignments is wrong.

— Cursing in school is inappropriate behavior.

— All men are created equal.

— It is worth sacrificing some personal liberties to protect our country from terrorism.

— It is wrong for people under the age of 21 to drink alcohol.

— Vegetarians are healthier than people who eat meat.

— Drug dealers belong in prison.

The answer? In each case, the worksheets categorize these claims as opinions. The explanation on offer is that each of these claims is a value claim and value claims are not facts. This is repeated ad nauseum: any claim with good, right, wrong, etc. is not a fact.

In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.

Well, I can see saying that if you have an opinion, which is your view on an issue, that opinion can also be a fact (e.g., “my opinion is that the speed of light is constant in a vacuum”), but opinions need not be factual; they are, according to the Oxford English Dictionary:

a. What or how one thinks about something; judgement or belief. Esp. in in my opinion: according to my thinking; as it seems to me. a matter of opinion : a matter about which each may have his or her own opinion; a disputable point.

In other words, an opinion is someone’s belief or judgement. Whether that opinion happens to be true (“a fact”) depends on two things: (a) it concerns an assertion about reality that can be adjudicated by observation (rather than a subjective judgment like “my opinion is that pie is better than cake”), and (b) the adjudication shows that the factual belief is true. In none of the cases McBrayer gives above can I see a way to determine whether the “opinions” are “true” in any meaningful sense. I agree with some of them (but not all), but how do you determine whether it’s a “moral truth” that “drug dealers belong in prison”?

The correct way to teach the difference between fact and opinion is, I think, the way I outlined in the paragraph above, and I don’t see that it should cause any difficulties. When kids are young they must be taught that things are “right” or “wrong”, but I don’t think they should ever be told that those issues are simply factual. That’s no way to have a discussion. If the kid asks, “Why?”, then there’s the opportunity for a fascinating discussion (which will involve either “Because I said so” for the youngest kids or, for older kids, a discussion of what you—or society—see as the basis for morality).

The reason McBrayer thinks that we should tell kids that there are moral facts is that doing so supposedly makes them behave better than if they see moral judgments as mere opinions:

It should not be a surprise that there is rampant cheating on college campuses: If we’ve taught our students for 12 years that there is no fact of the matter as to whether cheating is wrong, we can’t very well blame them for doing so later on.

Indeed, in the world beyond grade school, where adults must exercise their moral knowledge and reasoning to conduct themselves in the society, the stakes are greater. There, consistency demands that we acknowledge the existence of moral facts. If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?

What he’s doing, in my view, is distorting the meaning of “fact” simply so that it will have better results for society. But what happens when a kid asks, “What is the basis for judging your moral claims as ‘true’?” You can’t just say “Because I said so”—that’s no way to determine truth or to educate kids. You have to prove it, and you can’t do that without appealing to subjective judgments. What happens when a kid asks a Christian parent, “Daddy, why is abortion wrong?” I won’t go on; you can see the problem.

McBrayer winds up reiterating his unsupported assertions:

We can do better. Our children deserve a consistent intellectual foundation. Facts are things that are true. Opinions are things we believe. Some of our beliefs are true. Others are not. Some of our beliefs are backed by evidence. Others are not. Value claims are like any other claims: either true or false, evidenced or not. The hard work lies not in recognizing that at least some moral claims are true but in carefully thinking through our evidence for which of the many competing moral claims is correct. That’s a hard thing to do. But we can’t sidestep the responsibilities that come with being human just because it’s hard.

That would be wrong.

I find it odd that McBrayer is a professor of philosophy, and nevertheless can come out with things like this. Some value claims simply CANNOT be adjudicated by evidence. Abortion is one. Even if you’re a consequentialist, as I am, and on those grounds pro-choice, what do you say to someone who feels otherwise, either because they have the religious notion that embryos have souls or the consequentialist notion that it’s worse for society to allow abortions than to prohibit them? How can you decide? Even the injunction “don’t kill innocent people” won’t resonate with a Muslim extremist if those innocents are apostates.

Of course facts do come into play in some moral discussions. If you oppose abortion on grounds of fetal viability or fetal pain, those things can be empirically determined (of course, the age of viability is going to get lower in the future!). But at bottom all discussions of right or wrong come down to what result one prefers—what you think morality is supposed to achieve. That’s not to denigrate morality, for without rules we can’t have harmonious societies. But I simply don’t believe that one needs to tell kids that there are moral facts to get them to behave in a desirable way. That, of course, is my subjective judgment.

“Do the right thing”: changing morality and the Friendship Nine

January 29, 2015 • 9:30 am

If you are too young to have lived through the civil rights era in the U.S., you probably haven’t heard of the “Friendship Nine.” They were a group of black men who, in 1961, decided to commit an illegal but nonviolent act of resistance to the odious segregation laws in South Carolina. (The name of the group came from the fact that most of them went to Friendship Junior College.)

On January 31 of that year, the group walked into a store in Rock Hill, South Carolina, sat down at a lunch counter, and ordered lunch. That was illegal: blacks were forbidden from eating in white establishments. They were arrested and convicted. The group decided, as a statement, to go to jail rather than put up bail. They served 30 days at hard labor. The significance of this event, which I still remember, was this (according to Wikipedia):

“What made the Rock Hill action so timely … was that it responded to a tactical dilemma that was arising in SNCC [Student Nonviolent Coordinating Committee] discussions across the South: how to avoid the crippling limitations of scarce bail money,” wrote Taylor Branch in “Parting the Waters,” his Pulitzer Prize winning account of the Civil Rights Movement. [JAC: Branch’s book is terrific.] “The obvious advantage of ‘jail, no bail’ was that it reversed the financial burden of protest, costing the demonstrators no cash while obligating the white authorities to pay for jail space and food. The obvious disadvantage was that staying in jail represented a quantum leap in commitment above the old barrier of arrest, lock-up, and bail-out.”

During their sentence, the men refused to work several times and were put on a bread-and-water diet. All of this drew national attention to the inequities faced by blacks in the South, which ultimately led to the Civil Rights Act of 1964, pushed through Congress and signed by Lyndon Johnson.

I bring this up for two reasons. First, the men’s convictions were finally overturned—two days ago, after 54 years. Over much of the interim, the men suffered from having a conviction on their records, hampering their efforts to get jobs. On Tuesday, the men’s original lawyer moved for dismissal, the current prosecutor agreed, and the judge apologized, saying, “We cannot rewrite history, but we can right history.” (See the dramatic courtroom video here.)

That brings up not only the idea of human rights, but also the question of “What is the right thing to do?” Reporting on the story last night, Brian Williams of NBC News (the channel I watch) said something like this: “South Carolina did the right thing after more than 50 years.”

Who among the readers here doesn’t agree, instantly and instinctively, that clearing those men—like the old battles for civil rights—was indeed the right thing to do? Most Americans would nod in agreement as well.

Yet when I was young, the instincts were largely the opposite, particularly in the southern United States. Segregation was seen as natural and right (indeed, it was often justified on Biblical grounds), and what the Friendship Nine did was seen as wrong and immoral: a group of people claiming a right that they didn’t have.

The instinctive feelings that we have convey a couple of lessons. First of all, they have changed dramatically over those fifty years, and almost completely among white Americans over the last century. Yet our feelings about what’s right have always seemed to come from the gut, even when those feelings change.

Francis Collins and other religionists argue that our instinctive views of right and wrong can’t be explained by science, but must have been vouchsafed to us by God. (Collins calls this set of feelings “the Moral Law”.) But if those instincts change so drastically, and so rapidly, what does that say? It says, of course, one of two things: either God has changed the Moral Law (which can’t be true if you’re a true believer), or our moral instincts come not from God but from rationality, secularism, and changing circumstances.

The answer, of course, is the latter. As Peter Singer argues in his book The Expanding Circle, and Steve Pinker in The Better Angels of Our Nature, the increasing interactions between different groups of humans, and the changing tide of thought, have made us realize that nobody is privileged with a set of “rights” not shared by other humans. That is why, as Pinker documents eloquently, what humans see as “moral” has changed so much over the last several centuries.

Second, the rapid change shows that our particular feelings about right and wrong, at least in this case, cannot come from our genes. Morality about civil rights, and other things like animal rights, child labor, slavery, women’s rights, and so on, has changed too fast to be accounted for by evolution.  Yes, some feelings of what is “right” probably reside in our genes (our preference for our own children and our own relatives over others, for instance), and perhaps the very notion of “right” vs. “wrong” also resides in our genes, but the particular actions and feelings that constitute right and wrong are often quite malleable.

Morality does not come from God, and most of it isn’t in our genes. It comes, I suggest, from an evolved background of having a code of behavior that enables humans to live harmoniously, on top of which are overlaid the particulars of that code, which change not only as our society changes, but as our species learns what it takes to make a good society.

*****

Here’s a short documentary on the Friendship Nine:

“The dark side of free will”

December 11, 2014 • 10:08 am

Gregg Caruso is an Assistant Professor of Philosophy at Corning Community College, as well as chief editor of the journal Science, Religion, and Culture. In this ten-minute TEDx talk, he discusses what he calls the “dark side of free will”. Note that the “free will” he’s speaking of is contracausal (libertarian) free will (the idea that at any moment you could have made any of several choices), not “compatibilist” free will (the notion that your choices are determined beforehand by physical laws, but you still have some sort of “free will” anyway). So before you compatibilists start kvetching, remember that Dr. Caruso is addressing the form of free will that I believe most people hold, and certainly the type that most religious believers hold. He’s lecturing to a general audience, so I have no doubt they know what kind of free will is at issue.

And although I’m sometimes told that I lack the philosophical savvy and credentials to pronounce on this issue, Caruso certainly has them: as his bio notes, he’s “the author of Free Will and Consciousness: A Determinist Account of the Illusion of Free Will (2012) and the editor of Exploring the Illusion of Free Will and Moral Responsibility (2013) and Science and Religion: 5 Questions (2014).”

Anyway, here Caruso argues that rejecting libertarian free will is actually a beneficial act, and that accepting it has, as I’ve long asserted, bad consequences for our system of rewards and (especially) punishments.

And, just to get you riled up, I don’t see what advantages there are in accepting compatibilist free will as opposed to being a pure, incompatibilist determinist. The usefulness of compatibilism seems limited to its keeping philosophers employed and the Little People convinced that they do have some sort of “free will” after all, even if it’s not the kind of free will they think they have. The whole area of compatibilism appears to involve redefining terms that we thought we understood all along, much as Sophisticated Theologians™ do all the time with the term “God.”

I will maintain until my last breath that philosophers who are compatibilists should, if they really wanted to improve society, spend their time teaching about the consequences of the determinism they accept rather than engaging in a never-ending argument about semantics. I see nothing to be gained by promoting compatibilism. The only reason I bring that view up, in fact, is that I think it distracts us from the important issue of physical determinism, and that some compatibilists are motivated by the Little People argument (they say so explicitly) and confuse people with their tortuous arguments.

Note: you’re not allowed to comment until you’ve listened to the whole video, for I want people to discuss Caruso’s points.