I’ve now read (and blurbed) Sam’s new book, Waking Up: A Guide to Spirituality Without Religion, which is a provocative synthesis of neuroscience, spirituality, and Sam’s own adventures in meditation and drug-taking. I recommend it, though I told Sam that I thought he’d get pushback from that segment of the atheist community (not including me) who cringe when they hear the word “spiritual.” The book will be out September 9, and you can get the Kindle version for about 12 bucks.
In the meantime, Sam has published the winner of his “Moral Landscape Challenge,” and responded to the winner’s essay. Here’s the original challenge:
It has been nearly three years since The Moral Landscape was first published in English, and in that time it has been attacked by readers and nonreaders alike. Many seem to have judged from the resulting cacophony that the book’s central thesis was easily refuted. However, I have yet to encounter a substantial criticism that I feel was not adequately answered in the book itself (and in subsequent talks).
So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000, and I will publicly recant my view.
When I saw this, I thought that nobody would ever see that twenty grand or a public recantation, for Sam’s a man who holds firmly to his views. But, given the criticism of his book, I looked forward to seeing who produced the “best response.”
My own take was that although Sam’s case for objective moral values—those values that maximize “well-being”—wasn’t completely persuasive, his form of consequentialism did align almost perfectly with what most people think of as “moral acts.” My main problem was how one could adjudicate well-being when it conflicts across different spheres (e.g., personal happiness versus societal harmony), as well as the possibility that what Sam considers moral would sometimes conflict with what we instinctively feel is moral (i.e., Sam’s version would have all of us give away a lot of our money to poorer people, but neither he nor the rest of us—and that includes Peter Singer—does that). The “is-doesn’t-denote-ought” issue didn’t concern me so much, since one can simply impose on the whole scheme a Rawlsian “veil of ignorance” (that is, a group of entities who don’t yet live on Earth, but who make decisions about what will be moral before they take up a life on Earth—ignorant of whether they’ll be a billionaire, a factory worker, or a poor Indian farmer). To me, that makes Sam’s scheme even more “objective.”
Surprisingly, there were over 400 responses to the challenge, which tells you how seriously people take Sam’s views. The onerous task of judging them fell to the diligent and estimable Russell Blackford, who selected as the winner an essay by philosopher Ryan Born. You can find it here.
As expected, Born takes Harris to task for deriving “ought” from “is”—that is, he rejects well-being as a “scientifically derivable,” objective criterion for morality. Many entrants shared this critique; Blackford judged Born’s version the most persuasive.
I won’t go into all the tos and fros, but here’s the heart of Born’s critique:
Neither of your analogies invalidates the Value Problem. First, your analogy between epistemic axioms and moral axioms fails. The former merely motivate scientific inquiry and frame its development, whereas the latter predetermine your science of morality’s most basic findings. Epistemic axioms direct science to favor theories that are logically consistent, empirically supported, and so on, but they do not dictate which theories those will be. Meanwhile, your two moral axioms have already declared that (i) the only thing of intrinsic value is well-being, and (ii) the correct moral theory is consequentialist and, seemingly, some version of utilitarianism—rather than, say, virtue ethics, a non-consequentialist candidate for a naturalized moral framework. Further, both (i) and (ii) resist the sort of self-justification attributed above to science’s epistemic axioms; that is, neither is any more self-affirming than the value of health and the goal of promoting it. You might reply that the non-epistemic axioms of the science of medicine enjoy the sort of self-justification you have in mind for the moral (and likewise non-epistemic) axioms of your science of morality. But then your second analogy, between the science of medicine and your science of morality, fails. The former must presuppose that health is good and ought to be promoted; otherwise, the science of medicine would seem to defy conception. In contrast, a science of morality, insofar as it admits of conception, does not have to presuppose that well-being is the highest good and ought to be maximized. Serious competing theories of value and morality exist. If a science of morality elucidates moral reality, as you suggest, then presumably it must work out, not simply presuppose, the correct theory of moral reality, just as the science of physics must work out the correct theory of physical reality.
Well, I’m not aware of any “serious competing theories of value and morality”. That is, there may be theories that are proposed seriously, but I don’t see them as serious competitors to a kind of consequentialism that might be construed in a Rawlsian fashion. (I’m aware that some other philosophers see moral values as “objective”. Peter Singer told me he is starting to come around to moral objectivity, referring me to Derek Parfit’s book On What Matters. But I simply gave up after glancing through its two thick, dense volumes.)
At any rate, I think Sam forked out the 2 grand but not the 20 grand, because his answer to Born, given here, is a rebuttal, “Clarifying the Moral Landscape”. It’s a long response—8 single-spaced pages when printed out—and Sam ably defends the value of his system of ethics over others. A brief excerpt:
Ryan wrote that my “proposed science of morality cannot offer scientific answers to questions of morality and value, because it cannot derive moral judgments solely from scientific descriptions of the world.” But no branch of science can derive its judgments solely from scientific descriptions of the world. We have intuitions of truth and falsity, logical consistency, and causality that are foundational to our thinking about anything. Certain of these intuitions can be used to trump others: We may think, for instance, that our expectations of cause and effect could be routinely violated by reality at large, and that apes like ourselves may simply be unequipped to understand what is really going on in the universe. That is a perfectly cogent idea, even though it seems to make a mockery of most of our other ideas. But the fact is that all forms of scientific inquiry pull themselves up by some intuitive bootstraps. Gödel proved this for arithmetic, and it seems intuitively obvious for other forms of reasoning as well. I invite you to define the concept of “causality” in noncircular terms if you would test this claim. Some intuitions are truly basic to our thinking. I claim that the conviction that the worst possible misery for everyone is bad and should be avoided is among them.
To me the notion of well-being is intuitive, but I’m unconvinced that it’s objective. Nevertheless, I don’t find any other system of morality better than Sam’s. I think these statements, for instance, are accurate:
Ryan also seems to take for granted that the traditional categories of consequentialism, deontology, and virtue ethics are conceptually valid and worth maintaining. However, I believe that partitioning moral philosophy in this way begs the very question at issue—and this is one reason I tend not to identify myself as a “consequentialist.” Everyone knows—or thinks he knows—that consequentialism fails to capture much of what we value. This is true almost by definition, because, as Ryan observes, “serious competing theories of value and morality exist.”
But if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.
And, tellingly, Sam suggests this example, which reminds me of Hitchens’s Gedankenexperiment about a good act that could be performed only by a religious person:
Of course, intentions aren’t the only things that matter, as we can readily see in this case. It is quite possible for a bad person to inadvertently do some good in the world. But the inner and outer consequences of our thoughts and actions seem to account for everything of value here. If you disagree, the burden is on you to come up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.
Readers, do you want to think of one?
In the end, the criterion of maximizing well-being seems to me (an admitted philosophical tyro) the best we can do for a system of morality. But is it objective? I don’t know, nor am I convinced that the question matters much. What’s more important is the empirical issue of how to measure well-being, and the less empirical one of how to trade off different forms of well-being.
An addendum: Sam’s my friend, and I won’t take kindly to criticisms of his acumen or intellectual abilities in the comments. Stick to the issues. We’ve also discussed torture and profiling ad nauseam, so lay off those subjects as well. This post is about morality.