Yesterday I called attention to Russell Blackford’s review of Sam Harris’s new book, The Moral Landscape, which asserts that one can use science to judge the morality of different behaviors. Sam responded in an email to me that, at my request, he’s allowing me to post. Sam emphasizes, though, that this is a personal email and not a polished piece meant for publication. (Nevertheless, Sam’s emails are as polished as most people’s books!) If you haven’t read Russell’s piece, or my post from yesterday summarizing it, it might be salutary to do so before reading Sam’s reply, which is below the line.
I just noticed your blog post about the Blackford review. At some point, I’ll have to respond to all of this at greater length. But, briefly, in response to your core points:
- How do we actually measure well-being? For that is what we must do to make moral judgments. The metric for the well-being of a person, or an animal, must differ from that of groups or societies, yet they’re all to be put on a single scale. In some cases, of course, it’s easy; in others, seemingly impossible.
This is simply not a problem for my thesis (recall my “answers in practice vs. answers in principle” argument). There is a difference between how we verify the truth of a proposition and what makes a proposition true. How many breaths did I take last Tuesday? I don’t know, and there is no way to find out. But there is a correct, numerical answer to this question (and you can bet the farm that it falls between 5 and 5 million).
- Given that, how do we trade off different types of well-being? How do you determine, for example, whether torture is moral? In some cases, as Harris pointed out in The End of Faith, torture may save innumerable lives, but there’s a societal effect in sanctioning it. How do you weigh these? How do you determine whether the well-being of animals outweighs the well-being we experience when eating meat?
These are all interesting questions. Some might admit of clear answers, while others might be impossible to resolve. But this is not my problem. The case I make in the book is that morality entirely depends on the existence of conscious minds; minds are natural phenomena; and, therefore, moral truths exist (and can be determined by science in principle, if not always in practice). The fact that we can easily come up with questions that are hard or impossible to answer does not challenge my thesis.
- There are behaviors that we see as moral, or at least not immoral, that Harris’s metric nevertheless deems immoral. We favor our children and family, for example, over other people. According to Harris, we shouldn’t do this unless it increases universal well-being. Don’t give money to your kids—give nearly all of it to poor Africans who need clean water and medicine. Yet people do not condemn others for giving their kids a marginal benefit in lieu of tremendous benefits to strangers.
Admittedly, I did not spend as much time on this issue as I could have — but the answer here seems pretty straightforward. There may be many equivalent peaks on the moral landscape: on some everyone might favor their friends and family to a degree that is compatible with universal well-being; perhaps on others everyone is truly impartial. No doubt there will be other regions lower down on the ML where people are highly biased towards their nearest and dearest, at a significant cost to everyone. Perhaps there are also regions where everyone is truly impartial, but their impartiality functions in concert with other factors so as to degrade the well-being of everyone. Every possible weighting of us-vs.-them can be represented in this space, along with all other relevant variables — and each will have consequences in terms of the well-being of everyone involved. Yes, there will be worlds in which some very selfish people make out rather well while causing great misery to others. And yes, it could be impossible to convince these people that life would be better if they behaved differently. But so what? These won’t be peaks on the landscape, and it will still be true to say that movement upwards toward a peak will be constrained by the laws of nature.
Blackford (along with everyone else) has gotten bogged down in the concepts of “should” and “ought.” We simply don’t have to think about morality in these terms. Yes, we feel certain moral imperatives — I can be overcome by remorse, for instance, and feel that I “should” apologize for something that I’ve done. But this is just a folk-psychological way of talking about my experience in relationship to others. What if my apologizing in this instance would create an immensity of suffering for everyone on earth? Well, then, I “shouldn’t” do it. And if I still felt a nagging sense that I should apologize, I “should” ignore this very feeling. Whether we feel that we should do something, or can convince others that they should do it, is all but irrelevant to the question of whether we will be moving up or down on the ML (modulo the psychological cost of living with nagging feelings of “should”).
I’ve discussed this a fair amount in my public talks. Yes, it is possible for our moral intuitions to be misguided — and we need to learn to ignore certain framing effects. In this case, however, it is also possible that we are responding to the fact that the situations are not actually the same. If pushing a person is just BOUND to have a much bigger effect on us than flipping a switch, well, then we have to take this effect into account. Needless to say, we could concoct a trolley problem that made this nonequivalence undeniable: just imagine a version in which the man you were being asked to push had the opportunity to plead for his life and show you pictures of his wife and children…
- According to Blackford, Harris fails to give a convincing reason why people should be moral. Blackford notes:
If we are going to provide [a person] with reasons to act in a particular way, or to support a particular policy, or condemn a traditional custom – or whatever it might be – sooner or later we will need to appeal to the values, desires, and so on, that she actually has. There are no values that are, mysteriously, objectively binding on us all in the sense I have been discussing. Thus it is futile to argue from a presupposition that we are all rationally bound to act so as to maximize global well-being. It is simply not the case.
Again, this totally misses the point of my argument. And the same annihilating claim could be made about any branch of science. There are no scientific values that command assent in the way that Blackford worries morality should. Why value human well-being? Well, why value logic, or evidence, or understanding the universe? Some people don’t, and there’s no talking to them. The fact that some people cannot be reached on the subject of physics — or use the term “physics” in ways that we cannot sanction — says absolutely nothing about the limitations of physics or about the nature of physical truth. Why should differences of opinion hold any more weight on the subject of good and evil?
I believe Sam is preparing a comprehensive reply to some of the criticisms of his book, so by all means continue this dialogue in the comments, refraining—as always—from invective.