111 thoughts on “Sam Harris on The Daily Show”

  1. I wish Stewart had asked Sam Harris about the NYC mosque controversy. I think there would’ve been some disagreement between the two.

  2. I’m starting to get a feel for places where I think Sam is really on to something and where I’m less sure.

    I think that he’s right on the button when he says that some things can be shown, using science and reason, to be immoral and harmful. Genital mutilation, slavery, oppression of women, things like that. I’d argue that this approach is grossly underutilized, especially where religion takes one stance and science takes another.

    Take the question of teenage pregnancy. I think that both religious and secular groups agree that this is a problem and should be reduced; however, religious groups are using abstinence pledges, denying young adults sex education and trying to prevent the distribution of contraceptives. This can be shown empirically to be a failure – it isn’t merely an inefficient use of resources; it fails in its own objectives!

    We have a very real example of where religious and secular ideas come into conflict, where we both have similar end goals but the “moral” stance of theists is actively harmful. Harmful to the young adults but also harmful to the goals they espouse.

    Then take the Jesus & Mo cartoon of a day or two ago, neatly skewering the Catholic church’s claim that they’re pro-life yet they act in ways which are actively harming life. Here again we’re seeing a conflict between religious implementations of moral goals and science.

    I think that Sam is onto something there and maybe we could hammer home these gaps as well. Religious people have certain moral values and goals, yet the church and religious texts lead these well-meaning people into working against those goals! If we listened to science, we could advance even religious moral goals far better than the churches do.

    1. I think you’re giving fundamentalist groups too much credit by assuming they share the goal of reducing teen pregnancy. Their real goal, I think, is ensuring that their sons marry “good” girls, i.e. virgins. To do that, they need to know who the “bad” girls are, and the way they achieve that is by keeping teens in the dark about contraception. So abstinence-only programs are not meant to prevent pregnancy; on the contrary, they rely on pregnancy as a marker for “bad” behavior.

      1. That is part of it, to be sure, but these programs are presented as mechanisms for achieving those goals. To undermine this, their advocates are given the choice of ignoring reality, publicly acknowledging their real goals, or caving. The first two would publicly marginalize this religious movement, so any of these outcomes would be a big win.

  3. Cue Giberson to frantically wave his hands and tell Harris he isn’t qualified to discuss such things.

    Oh wait. He has a PhD in philosophy and a PhD in neuroscience.

    Giberson? Only a single PhD… in what exactly, I can’t say because his bio in BioLogos and elsewhere is perfectly unclear. Physics, perhaps?

    Well. I see. OK then.

    1. Sam has one PhD, in Neuroscience. His undergrad degree is in Philosophy. But it matters not—your point stands.

      Giberson is billed on HuffPo as a “religion and science scholar,” whatever that means…

  4. After I made my first blog comment recently several people took the trouble to politely accuse me of inconsistencies, logical errors and character flaws. Responses that, furthermore, do seem to be pretty well justified.

    I will try and do better this time.

    I am probably one of your more conflicted commenters. I accept the philosophical case against any form of interventionist deity but I retain an emotional attachment to the fuzzy religion, good intentions and down-at-heel charm of the Church of England.

    I have not yet read The Moral Landscape – it will not be available in the UK until next year – but certain broad themes come through so clearly from Sam Harris’s discussions of the book on the internet that I believe I can reasonably comment upon them without sight of the book.

    I will start with Sam Harris’s deep-seated aversion to moral relativism, which seems to be the driving force behind his goal of an objective moral order based on science.

    Intriguingly, the only other well known figure who has shown a similar crusading opposition to moral relativism is Pope Benedict XVI.

    In 2005 Benedict preached the homily at the Mass preceding the conclave that elected him Pope, warning against the threat to Catholicism posed by the rise of “a dictatorship of relativism”. This was seen as his manifesto, and Benedict has been banging the drum for his campaign against moral relativism ever since.

    There is another similarity between this odd couple.

    Sam Harris believes – as I understand it – that objective moral values can be derived from universal features of the human mind discoverable by neuroscientific measurement.

    The Pope for his part believes that objective moral values can be derived from the natural law implanted by God in human nature – or to put it in secular language, the human mind – and discoverable by human reason – which presumably would include neuroscientific measurement.

    I stress that I am not claiming that these parallels show that Sam Harris is a covert theist. Nor am I seeking to make any criticism of his scientific or philosophical ideas. For the purposes of this comment I am assuming that his project will at some point be successful.

    What interests me is what these parallels reveal about the implications of the project.

    Sam Harris has set out a blueprint for science to fill religion’s shoes completely. Religion’s explanations as to how the universe functions lost authority to science long ago, and now Sam Harris is calling on science to take over religion’s last remaining outpost by bringing an end to whatever competency religion has left in the field of morality.

    Furthermore, Sam Harris envisages science doing more than simply assuming religion’s now rather threadbare pretensions in moral matters. The achievement of the science of morality prophesied by Sam Harris will give science the untrammelled authority that religion enjoyed in moral matters in pre-Enlightenment times. Moral philosophy – the other post-Enlightenment contender in the morality stakes – will be elbowed to one side.

    It is the vaulting ambition of Sam Harris’s programme that, in my view, explains the structural similarity between Sam Harris’s and the Pope’s moral ideas. The comprehensive moral framework made familiar to people in Western Europe and North America by two millennia of Christianity is to be replaced by a similar framework but one based on science.

    Another result to be welcomed is the end of disputes between accommodationist and non-accommodationist bloggers about Stephen Jay Gould’s Non-Overlapping Magisteria. Non-theist scientists will no longer need to complain about the way in which NOMA restricts science to the world of fact and reserves the world of morality to religion. This turf war will be triumphantly decided in favour of science. Religion’s sole remaining role as the arbiter of morality will disappear as moral questions become matters of fact falling indisputably within the sphere of science.

    In these circumstances the logical consequence of NOMA’s exclusion of religion from the empirical world is the elimination of religion.

    All that remains, of course, is for Sam Harris and his fellow neuroscientists to come up with the objective measurements of human well-being needed to launch the project.

    1. Erm…with all respect…the similarities you observe between Harris and Ratzinger are about as significant as the fact that Hitler and the Six Million Dollar Man both wore mustaches.

      Harris thinks that a moral calculus can be derived in much the same way that Newton derived the mathematical version to describe the motions of the planets.

      Ratzinger thinks that morality should be derived by careful study of the sociopathic ravings of a bunch of Bronze Age goatherders with delusions of grandeur — goatherders who, it must be observed, considered the practices of genocide, mass rape, slavery, torture, genital mutilation, and more to be the utmost examples of righteousness.

      Since your basic premise is so far off, I won’t even pretend to address the rest of your nonsense.

      Cheers,

      b&

      1. Bit harsh there dude. Once we get past the set-up there is some interesting, and I don’t mean “interesting, where did I step in this dog-shit” interesting, stuff there.

        1. Is it harsh? What matters is if it’s true.

          I’d hardly call someone who thinks it’s okay to shield child molesters an opponent of “moral relativism”.

          Although I do agree that his statements about the Pope do not inform the rest of what he said.

    2. Intriguingly, the only other well known figure who has shown a similar crusading opposition to moral relativism is Pope Benedict XVI.

      Ratzinger may talk the moral absolutism talk, but one only has to look at their “psychologists and psychiatrists at the time said paedophiles can be treated, so we released them back into the population” defence. It seems that their current stance on handling paedophiles has been informed by changes in the field of psychology.

      And then there is the “well everyone else does it too” defence.

      These are not the hallmarks of hard line moral absolutists.

  5. Sam Harris is pretty much a Utilitarian. The problems of Utilitarianism (“the greatest happiness for the greatest number”) have been explicated.

    Roughly, what is the constant that relates happiness to number?

    * Is one person feeling ecstatic worth 1000 people feeling a little bit glum?

    * How about one person dying so that several can live somewhat more comfortably/happily?

    * Or (one that exercises me) is the unhappiness of battery hens worth my pleasure in buying cheaper eggs?

    Is there ever going to be a Harris constant by which we can resolve such problems?
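
    To make the worry concrete, here is a toy sketch in Python; the numbers and both aggregation rules are invented for illustration, and nothing here comes from Harris:

      # Toy illustration only: invented numbers, invented rules.
      # Two candidate ways to aggregate well-being across people:

      def sum_rule(levels):
          """Classic utilitarian total: add up everyone's well-being."""
          return sum(levels)

      def peak_rule(levels):
          """A rival rule: judge a world by its best-off member."""
          return max(levels)

      world_a = [10.0] + [-0.02] * 999   # one person ecstatic, 999 a bit glum
      world_b = [0.0] * 1000             # everyone neutral

      print(sum_rule(world_a), sum_rule(world_b))    # about -9.98 vs 0.0 -> B wins
      print(peak_rule(world_a), peak_rule(world_b))  # 10.0 vs 0.0 -> A wins

    Measurement can supply the numbers, but it cannot by itself say which rule – or which constant weighting one against the other – is the right one; that is the “Harris constant” problem in miniature.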

    1. I’ve written on this before, including in the previous Harris thread.

      Harris seems to think that there is a Platonic ideal of morality (or, perhaps, a number of such ideals).

      To me, it seems blatantly obvious that such a thing is a skyhook in the same sense that creationism is a skyhook.

      Morality is an emergent property that comes from an optimal strategy (in the sense of game theory) for living one’s life.

      We see examples of convergent evolution resulting in species with radically different family histories solving problems in remarkably similar manners. For example, sharks and dolphins have very similar body plans, and cephalopods and vertebrates similar eyes. This is not because there is a Platonic ideal fish or eye, but rather because physics places limits on what is and isn’t effective.

      We see similar examples of moral codes, with Hammurabic brutality giving way to civil liberties and social support infrastructure.

      Permit me to again post what is the best such strategy I’ve thought of to date:

      I. Do not do unto others as they do not wish to be done unto.

      (The First Rule may be broken only to the minimum degree necessary to otherwise preserve it.)

      II. And as ye would that men should do to you, do ye also to them likewise.

      III. An it harm none, do what thou will.

      The rules must be applied in that order. For example, following the second rule is not permissible in circumstances which require violating the first rule (except as provided for by the Exception).
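
      A minimal sketch, in Python, of what “applied in that order” could look like mechanically; the boolean flags on an action are hypothetical placeholders for real moral judgments, not part of the rules themselves:

        # Strictly ordered rule checking -- a sketch, not a real moral theory.
        def permitted(action):
            # Rule I is checked first and is never overridden by II or III,
            # except as allowed by the Exception.
            if action.get("harms_unwilling", False):
                return action.get("minimal_breach_preserves_rule_1", False)
            # Rule II is a positive duty and applies only when Rule I is intact;
            # Rule III: an it harm none, do what thou will.
            return True

        print(permitted({"harms_unwilling": False}))   # True  (Rule III)
        print(permitted({"harms_unwilling": True}))    # False (Rule I)
        print(permitted({"harms_unwilling": True,
                         "minimal_breach_preserves_rule_1": True}))  # True (Exception)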

      For explication and examples, please see the last thread.

      Cheers,

      b&

      1. Good post. I think Harris would agree with much of what you’ve said. But I’m not sure about the Platonic ideal bit. He is explicit that human wellbeing is thoroughly dependent on our biology, which is, of course, produced by evolution. So when he talks about the wellbeing of “other conscious creatures” I reckon that any evaluation of what is in their best interest would depend on an understanding of their neurobiology, for example. This is to say that if certain values (a kind of fact on Harris’ view) were discovered for Homo sapiens, those values wouldn’t necessarily transfer to another conscious species evolved elsewhere in the universe.

        1. Yeah. I’m seriously not getting the Platonic Ideal thing.

          There’s a difference between thinking that something is an identifiable natural kind and thinking it’s a Platonic Ideal.

          For example, consider predators, prey, parasites, and symbiotes. Those things emerge in lots of situations, for scientifically explicable reasons, but there’s no Platonic Ideal of any of them.

          In chaos and complexity theory terms, they’re attractors—easily reached and easily preserved conditions in certain game-theoretical situations which are not rare, and which therefore tend to be nonrandomly common.

          Saying that you think it’s meaningful to talk about “parasites” and “symbiotes” is just not the same thing as assuming there’s a Platonic ideal of either.

          Likewise, saying that you think morality is a certain kind of phenomenon is just not the same thing as assuming it’s defined in Platonic terms. It could be defined in scientific terms, like “symbiosis.”

    2. Not to dismiss these issues, but I think we’re missing the bigger picture that Harris is raising. Even if we may not agree on what the best society looks like, he’s giving us a good way of saying what it does NOT look like.

      Seeing as how many religious groups are advocating arrangements that can be shown to produce some of the worst societies, I think this scientific critique can at least help us fight them.

      1. I agree completely.

        It’s time to emphatically toss religious dogma as a way of determining morality on the trash heap.

        We can argue over whether Sam Harris is really a Utilitarian or Platonist and whether you can derive an ought from an is when religious thugs stop stoning and mutilating women, killing gays, bombing abortion clinics, blocking stem cell research, strapping explosives to children, flying planes into buildings and so on ad nauseam.

        A system of morality based on maximizing human well being and minimizing human misery can do far better than this.

  6. What I don’t get about what Sam Harris is trying to do is that it seems he’s trying to turn scientists into ethical philosophers without first asking whether ethical philosophy has any relevance.

  7. Amazing that a comedy show can present an adult intellectual conversation better than the so-called TV news shows. Way to go, Daily Show.

    1. I agree that the Daily Show most often presents issues in a much more insightful way than any news show. I also agree with the observation that there weren’t many laughs. If he had loosened up just a bit and engaged Stewart’s schtick, I think that he would have enticed the Daily Show demographic to further consider his work and maybe even buy the book. However, a fair number of the DS web site comments are quite harsh, which seems to indicate that his message did not get through to many.

  8. I completely agree that science has something (many things, presumably) valuable to say about our most pressing moral debates. I also agree that scientists (and everyone else for that matter) should not shy away from making moral judgments, or from bringing relevant evidence to bear in doing so. Sam seems to be overselling it though, and implying that science can *determine* which action, choice, system, etc is morally better. Nonetheless, I agree that science has something meaningful to say about which is morally better, and that religion has nothing relevant to say.

    1. I agree that the term “determine” is loaded and likely inadvisable, but from some of his explanations, it seems to me his actual points are not that simplistic.

      I don’t think Harris is claiming that we can (or ever will) rank-order all goods on a linear scale, and choose among them deterministically. He’s only claiming that reasonable people can come to some useful partial orderings and often agree on some things that count as clearly better or clearly worse.

  9. I would say that the only means we have for discovering anything about the world is empirical (evidence-based) enquiry. Science is the most rigorous and methodical form of empirical enquiry, but less rigorous forms like history and philosophy have their uses too. Metaethics is a matter of empirical enquiry about the nature of morality. And I would argue that the best interpretation of the evidence is that there is no such thing as objective moral values, i.e. there is no fact of the matter as to what is moral, because the property of being moral is not an objective property. “Moral” is, roughly speaking, a label that people assign to things they approve of.

    Judging by the online articles he wrote a few months ago, Harris is very confused about the whole issue. He seems to be going the same route as many “moral naturalists”, in redefining “moral” to mean something like “that which maximises well-being”, and then conflating that with the ordinary meaning of the word. In other words, moral naturalism of this sort commits a fallacy of equivocation. But Harris seems even more confused than the average moral naturalist.

    Harris also seems to make an error which is common among the general public (but not philosophers) of thinking that the only alternative to moral realism is moral relativism. But there are other types of moral anti-realism, and it seems to me that among philosophers moral relativism is relatively unusual. Most anti-realist philosophers seem to be error theorists (which is roughly my position) or non-cognitivists.

    The subtitle of his book, “How science can determine human values”, is very poorly chosen. This makes it sound like he’s referring to the values that humans hold. Of course that’s a matter for empirical enquiry, and formal science can help, though we can get a long way towards knowing people’s values just by listening to what they say. But Harris says his project is to establish a science of objective values, not just human values. To be fair, subtitles are often chosen by publishers, not authors, so perhaps Harris isn’t responsible for this cock-up.

      1. Yep, a position roughly like Mackean error theory is the most plausible way to go, and roughly where I see myself, though there are sophisticated forms of relativism like Gilbert Harman’s that are also quite arguable – and the differences in these theories may not be all that important in the scheme of things. The truth of the matter as to who is right between Mackie’s followers and Harman’s followers may come down to some very murky and confused facts about what ordinary people actually believe about morality.

        Without having read the book yet, I gather that its main target is what I call vulgar cultural (or moral) relativism. That’s actually a fairly easy target … which is not to say it’s not worth writing a popular book about. After all, it’s a view that has quite a lot of popularity.

        As for Ratzinger and others, they often accuse people of relativism when what they are actually referring to is merely some kind of consequentialism, such as one or other form of utilitarian theory. These theories throw out immutable moral rules and ask us to look at the consequences of actions in the situations where they take place. Some people call this “situational relativism”, but it’s not relativism in the usual philosophical sense of that word. In fact, it’s unfair to call consequentialists “relativists”, thus suggesting that they hold a rather dubious metaethical theory like vulgar cultural relativism.

        1. Good point, Russell. I may have been underestimating the proportion of relativist philosophers, by failing to include the more sophisticated forms. But I think those forms are misguided too. I agree with you that the meaning of moral judgements can’t be completely separated from what people believe about morality. It seems to me that most people believe they are making judgements on matters of objective fact, and to take those judgements as relative is failing to do justice to the speaker’s meaning. If sophisticated relativists’ own moral judgements are relative, then they are speaking a different moral language from most people, and need to be careful not to conflate the two meanings.

          The way to get a more objective picture of what most people really mean would be through a carefully worded survey. That’s one way that “science” (or “experimental philosophy”) can help answer questions about morality.

  10. I haven’t read the book yet, as it is not yet available here, but just viewing this clip leads me to believe that Harris is confused in some fundamental way.

    That is, NO ONE (perhaps apart from a handful of sociopaths) defends pointless suffering, nor defends actions on any other basis than that they are supposedly ‘better’ (in some way).

    Those who argue in favor of the burqa, for example, do not argue that it is bad, therefore it should be worn. Rather, they argue that it is BETTER for society (and, indeed, for the women who are veiled) that the veil be worn. And indeed, many women who wear the veil say the same.

    The idea that what is ‘right’ or ‘good’ is that which promotes human flourishing is much older than utilitarianism, even. But it does no useful work unless one can specify — in some non-question-begging way — what that flourishing is and how actions might promote it.

    I see Harris and read the discussion of his book and cannot escape the feeling that he is just trying to claim objective truth for his own preferences.

    1. From the video:
      Morality and value clearly relates to human and animal well-being.

      Really? Because it’s perfectly moral for me to eat this candy bar, even though it negatively affects my well-being. And most people would say it’s immoral to kill 1 person to save 5, even though 5 people alive equals a lot more well-being. I think Harris needs to expand on this “obvious” point.

      Our well-being emerges out of the laws of nature… Human well-being relates to genetics and neurobiology and psychology and sociology and economics…

      Ignoring for the moment the fact that Harris hasn’t defined well-being yet… All of these sciences are related to human happiness, but the brain is ultimately where the buck stops. No matter what your financial situation, or your social life, or your physical condition, with the right manipulation of the brain it would be theoretically possible to make you feel happy no matter how “objectively shitty” your life was. So what’s our motivation to improve the world instead of building the Matrix?

      And what about sociopaths, whose brains are different in a significant way from most? Whereas you or I would feel bad about hurting someone else, sociopaths do not. They appear to not have the physical capability to do so. If all of us were sociopaths, does Harris think our “objective” ideas about morality would include “not hurting others”? Because I’m not convinced this is objective at all.

      Lastly, what if some human was born with a mutation in the brain that meant true happiness for that person could only be achieved by hurting others? (Kind of like our normal moral intuitions, only in reverse.) How do we maximize this person’s well-being, considering their happiness is mutually exclusive with ours?

      1. re sociopaths:

        I’d say, and I think Harris would say, that sociopaths are broken moral units, in objective, scientific terms.

        Morality serves an evolved function, and works in particular kinds of ways. It is a natural kind of phenomenon.

        Sociopaths don’t do the morality thing.

        Notice that intelligent sociopaths are often capable of understanding morality from the outside—they have a pretty good idea what counts as right and wrong, but don’t care.

        That’s a lot like an alien anthropologist, who isn’t a moral being but studies morality among moral beings.

        Such a being would have no problem identifying morality as a natural kind, which they happen not to participate in. They could understand what Harris is talking about.

        Nobody should expect any moral system to motivate a sociopath to care about others.

        I don’t think Harris is claiming to be able to do that, and I don’t think it matters much. Hume was right to the extent that you can’t get from is to ought that way.

        Or rather, you may be able to get from is to ought, scientifically, by understanding what an “ought” is. Still, that won’t rationally make you care about oughts.

        Harris is not trying to convert sociopaths. He’s trying to inform morally normal people about morality.

        1. Yes, but all this means that the sociopath is not making any mistake about the world. That is just another way of saying that morality is not objective, in the sense that the word “objective” has in metaethics (and arguably in ordinary discourse).

          Now, Harris may be using the word “objective” in some non-standard or revisionist way, but if so I hope he says so in his book. I think he could have defused much of the criticism of his TED talk if he’d just made that point. He could even borrow a leaf from Dan Dennett’s book and claim that morality has the only kind of objectivity worth having.

          1. So are prescriptions for physical health for humans also not “objective”? An intelligent alien who ignored human prescriptions for physical health would not be making any mistake about the world, would he? I’m trying to get my mind around these different senses of “objective”. The sociopath’s failure to make a mistake about the world stems from the fact that the rules don’t apply to him, right?

          2. Russell:

            Yes, but all this means that the sociopath is not making any mistake about the world.

            Right. My example sociopath can tell what’s right and wrong, and doesn’t care. She’s not in error, factually; she’s just bad.

            That is just another way of saying that morality is not objective, in the sense that the word “objective” has in metaethics (and arguably in ordinary discourse).

            I think it is pretty close to the sense of “objective” in ordinary language. (Or at least consistent with it.)

            Most people can easily imagine sociopaths who know right from wrong but just don’t care, and evil shitheads who know right from wrong and enjoy doing wrong, e.g., hurting people for fun.

            (The ideas are so familiar that they’re stock characters in movies.)

            Most people expect that good people will care about right and wrong, and bad people won’t, or will perversely choose the wrong.

            It doesn’t occur to them that if anybody doesn’t care, or likes doing wrong, this would somehow make morality not objective. (Why on Earth would it?)

            I’d guess that most people would find the idea utterly bizarre—the idea that some people being morally screwed up somehow implies that there’s no real truth as to whether they’re screwed up.

            Most people don’t think that calling somebody amoral or evil immediately creates a paradox; they think it can be a reasonable thing to say.

            I’m pretty sure most people’s reaction to your statement would be that it can’t be right—what would even be the point of distinguishing between good people and bad people, or normal people and moral monsters, if the existence of bad people/monsters would mean that the distinction was itself meaningless or just arbitrary?

            I am quite unclear on the normal sense of “objective” morality in metaethics, though.

            Do metaethics folks really expect that “objective” morality would actually motivate sociopaths to do right?

            It seems to me that both folk psychology and scientific psychology say that knowledge of right and wrong isn’t always motivating. (Or isn’t always sufficiently motivating; everybody “falls short” sometimes.)

            Everybody’s familiar with the idea of knowing that something is wrong, and doing it anyway, because you’re just not a good enough person that the knowledge translates to right action, or due to a transient moral weakness.

            Do metaethics folks really think otherwise—that “objective” moral knowledge would necessarily motivate people to actually be objectively moral?

            That idea seems bizarre to me.

            Now, Harris may be using the word “objective” in some non-standard or revisionist way, but if so I hope he says so in his book. I think he could have defused much of the criticism of his TED talk if he’d just made that point. He could even borrow a leaf from Dan Dennett’s book and claim that morality has the only kind of objectivity worth having.

            Or perhaps that it’s the only kind of objectivity that it’s at all reasonable to expect. You might want the kind that would automatically make people do the right thing, if they know the difference, but it’s unreasonable to expect that, given that everybody knows some people are assholes.

            Expecting assholes to disappear in a puff of logic would be pretty naive, in light of either folk psychology or scientific psychology.

            This reminds me of a joke about what philosophers really want–that what they want is to be able to make an argument so clearly logically compelling that if you don’t agree with it, you die.

            Nobody really expects that, even of objective morality, do they?

          3. Wow, this sociopath reference is really creating unhelpful tangents.

            Is defining “objective” really that hard? Look, “Transformers 2 was a bad movie” is subjective. It’s my opinion, based on my mind. I haven’t stated anything about the world, but I have stated something about the effect the world has on my brain – specifically, that this movie does not induce pleasure in my brain, for whatever reason.

            “Transformers 2 is a bad movie given that it does not meet the following criteria for a good movie…” is an objective statement about reality (assuming all of my criteria are objective; let’s just say they are for the sake of argument). This statement isn’t one about the goings-on in my brain; it’s a statement about whether or not TF2 has such and such characteristics. It’s an actual fact that you could argue as representing reality or not. Thus, objective.
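
            To put the same contrast in code terms – a toy sketch, with invented criteria – the difference is between a report about my brain and a check against stated, inspectable criteria:

              # Toy contrast between a subjective verdict and a criteria-based one.
              def subjective_verdict(movie, my_brain):
                  """A fact about my brain: does this movie induce pleasure in me?"""
                  return my_brain["pleasure"].get(movie, 0) > 0

              def criteria_verdict(movie_facts, criteria):
                  """A fact about the movie: does it meet stated, checkable criteria?"""
                  return all(movie_facts.get(c, False) for c in criteria)

              my_brain = {"pleasure": {"Transformers 2": -1}}
              tf2_facts = {"coherent_plot": False, "consistent_characters": False}

              print(subjective_verdict("Transformers 2", my_brain))  # False: about me
              print(criteria_verdict(tf2_facts, ["coherent_plot",
                                                 "consistent_characters"]))  # False: about the movie

            Anyone can re-run the second check against the movie itself; only I can report the first.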

            Many people think that morality is objective. The idea of deforming someone’s face when they’ve done nothing wrong is so repugnant to me that it’s easy to think that the moral rule against doing so is some objective thing, like a Platonic form, that is out there in the universe. But it’s not; we have enough knowledge to acknowledge that now. Our moral intuitions are a quirky set of feelings bestowed on us by evolution that make us feel really strongly that certain things are just wrong, just because they are, and that goes for everyone ever.

            But it’s just a feeling. If the circumstances of our evolution had been different, the stuff we get moral about would be at least slightly different too. Even as we are, different cultures and life experiences program every person’s moral intuitions at least a little bit differently from every other person’s. And if you’re born with some brain abnormality, as a sociopath is, you don’t really experience those moral feelings at all.

            That was the point of bringing up sociopaths. They are one part of the mountain of evidence to show that these moral ideas are just feelings in our heads. They are subjective. “Hurting people is wrong” isn’t a statement about the world; it’s a statement about the effect of the world on my brain. Namely, that the idea of hurting people for no reason is distasteful to me. It causes a reaction in my brain that says not good. That’s all.

            Sam Harris thinks he has a way of making these feelings more than just feelings. He says, “well, isn’t it obvious that we ought to increase well-being?” But remember, you can’t get an ought from an is, so where does this one come from? Where does this idea come from that we ought to increase “well-being”?

            I’m open to hearing answers, but it seems to me that the only place it can come from is Sam Harris’ moral intuition – which is the subjective source of moral feelings that this whole argument was devised to avoid. Increasing well-being feels like the right thing to do. Here we are following our moral intuitions again, except now we’re calling it science.

            And if via this method we could make everyone feel like they were doing the right thing, that would be a big win. I would have no objection, because what could be wrong with that? But, in order to enact Harris’ idea, we have to step on a lot of toes. We have to tell a lot of people that their cultural practices or their particular moral intuitions about X are wrong, and that they have to do things our way now. The problem is that we’re not only following our subjective feelings again, but using them to justify trampling over others’ feelings.

            Does it still seem like the right thing to do? Yes, it does. Preventing women from getting their faces melted with acid feels really great. So great that it makes one want to say “fuck those guys’ moral intuitions; we’re gonna save those women.” Which is fine if you admit that you’re just doing what your brain made you feel like doing.

            But there’s no objectivity in it.

      2. “Really? Because it’s perfectly moral for me to eat this candy bar, even though it negatively affects my well-being. And most people would say it’s immoral to kill 1 person to save 5, even though 5 people alive equals a lot more well-being.”

        Just two points. First, I doubt that most people would say it is immoral to kill 1 to save 5. It would depend on the specific scenario. If it is vivisecting someone against their will to harvest their organs, most would say immoral. But what about in the context of war or other similar situations? Surely framed in that context most people would say moral.

        As for the first sentence, I don’t see how that observation is a problem for his thesis. On his view, it is possible to be mistaken about what is morally good/ok; it is possible to value the wrong things. I would also like to suggest that your example isn’t specific enough. If we’re just talking about “eating a candy bar”, does that really diminish your well-being? Might it even improve it in certain contexts? But if we’re talking about eating a candy bar every day, with the rest of your meals being likewise nutritionally deficient, then we know that that is diminishing your overall well-being, and potentially the well-being of others.

        1. @Paul W: My comment above was a little rambling. I think the basic issue I have is that Harris is saying that we ought to maximize well-being. But is there any objective premise that can get you to that conclusion? Where does that “ought” come from? I see it as coming from our moral/emotional selves; in other words, we humans just feel good about maximizing well-being. It feels right. Which, hey, is fine – if we can make everybody feel happy and good then I’m all for that.

          Except Harris’ idea is that we can tell people on the basis of objective science that what feels good to them is actually wrong (and isn’t it convenient that those “objectively wrong” acts also happen to feel wrong to us? For example, throwing acid on someone’s face: feels wrong to us and IS wrong, according to science). So we’re basically doing what feels right to us but not allowing others to do what feels right to them. Which is why I brought up sociopaths – hurting others doesn’t feel wrong to them. You call them morally broken, but if the relative numbers of “normals” and sociopaths on this planet were reversed, we would be the broken ones. Why should sociopaths acquiesce to the rules we make based on our subjective feelings about maximizing well-being?

          Why should anyone?

          @Nick: I’m just pointing out the disconnect between the human moral intuition and what contributes to well-being. Yes, there is overlap between the two, but there is also much difference. There are plenty of things that are good for us that are morally neutral (e.g. eating healthy), and there are plenty of things that feel wrong even when there is no objective reason for it being so (e.g. the way we are averse to using others as tools to accomplish an end [throwing a fat man onto a rail to stop a runaway trolley and save 5 people], but we are not averse to actions that have the same consequences yet are accomplished via less personal means [flipping a switch to divert a runaway trolley, resulting in 1 person getting run over instead of 5]).

          Our moral intuitions are quirky and not very logical – what you would expect if evolution had endowed us with them via chance mutation that was selected for on the basis that it was “good enough.” But if we’re going to create a logical, scientific account of what contributes to human well-being, our moral intuitions are not going to get us all the way there.

          1. I picked up The Moral Landscape yesterday, haven’t started reading it yet. But my understanding right now is that Harris is not so much saying that we ought to maximize well-being as pointing out that we are all already interested in increasing well-being and that well-being is a domain of objectivity because we are lawfully bound to nature.

            On the sociopaths point: If the relative numbers of sociopaths and normals were reversed, you wouldn’t be talking about Homo sapiens would you?

            “So we’re basically doing what feels right to us but not allowing others to do what feels right to them.”

            Yeah, I think Harris is arguing that there are many things which feel right which really are right (they conduce to greater well-being) but there are also things which feel right but actually move people in the opposite direction (objectively).

            And Tim, I’ve got a dumb question. How do you italicize text?

          2. If the relative numbers of sociopaths and normals were reversed, you wouldn’t be talking about Homo sapiens would you?

            Of course I would. Homo sapiens is not defined by the ability to feel remorse, and there is no rational reason for defining sociopaths as not part of our species. We can reproduce with them as far as I know, and our courts certainly try them as humans!

            I think Harris is arguing that there are many things which feel right which really are right (they conduce to greater well-being)

            You’re glossing over the crux of the issue – who says that it’s “right” to increase well-being? Especially when doing so means telling other people that they can’t do X anymore? Who are we to say that? I’m saying that the only reason Harris wants to increase well-being is because it feels good to do so, but where’s the objectivity in that? How is increasing well-being because it feels good any different from throwing acid on some girl’s face because it feels good?

            How do you italicize text?

            You’ve come to the right man! You use HTML tags: < i > to start and < /i > to end, only without the spaces.

          3. Well, adding spaces to my html was a completely pointless exercise! Let’s try that again.

            Enclose the letter i in angle brackets to start your italics and enclose /i in angle brackets to end them.

          4. ::deep sigh::

            Ok. Last try. Type a less than, then the letter i, then greater than to start your italics. Do the same thing to end them, only add a / before the i.

          5. Again, that would be a non-standard use of the word “objective”. But is it the only kind of objectivity worth having? Perhaps. But by using the word “objective” in his TED talk and being a bit of a dick to Sean Carroll when Carroll queried it, he caused a lot of confusion. To his credit, he apologised to Carroll, but he could have defused the whole thing much more easily than he did.

            It’s a bit like free will. The whole idea of libertarian free will doesn’t even seem coherent, but it’s what a lot of ordinary people seem to mean by “free will”. But many philosophers (the Stoics, Hume, Dennett, etc.) point out that we do have free will in other senses that can be defined, and even that these are the only kinds of free will worth having.

            Harris could say that the metaethicist’s idea of morality being objective is incoherent, even though a lot of ordinary people seem to share it. But, he could say, there is some other sense in which morality is objective, and this is the only kind of objectivity worth having. I think the case would be weaker than with free will, but maybe not so weak as to be untenable as a suggested revision of terminology. I’d still want to say that this other kind of “objectivity” is a form of subjectivity, but he does get to use revisionist definitions as long as he’s clear that that’s what he’s doing.

            Anyway, maybe he covers this in the book. I now have a copy but need to put aside some time to read it and think about it.

    1. So you don’t think the social instincts of our species are really central to who we are? I mean, you seem to be saying that Homo sapiens, a social primate species, could just as well be an anti-social primate species. That sounds to me like “a species of wolf could just as well be a herbivore”. I think you would be talking about a completely different creature. I’m no expert on sociopathy but it seems that a population of humans, 98% of which were sociopaths, would not even remotely resemble any known human society.

      Thanks!

      1. Nick,

        We could talk at length about what defines a human. But what would be the point? Sam Harris thinks it’s obvious that we ought to increase well-being. I wonder if we have the right to do it against others’ will. I wonder if it’s objectively right to do so. A sociopath wouldn’t think so. Paul W. says that (Harris would say that) a sociopath is a “broken moral unit.” I say, “yes, exactly.” A sociopath doesn’t have a functioning part of the brain that would tell him/her to feel bad about others’ misery. Get it? We normal people want to do this because our brains are making us feel bad. Harris’ thesis boils down to
        1. What’s moral is what increases well-being.
        2. We should increase well-being because it feels bad when we don’t.

        I know most of us haven’t read the book yet, but if Harris can supply a better replacement for number 2, I would like to know what it is. Otherwise, the train of thought is doomed to continue thus:

        Q: Why should we do what feels good?
        A: Because we’ll all be happier if we live without fighting against our moral intuitions. Even if there is no objective morality, doing things that feel wrong puts a lot of stress on a human mind. Life will be better if we treat others well, rather than treat others badly, secure in the knowledge that there is no objective reason not to do so.

        It’s a decent argument. We can only resist the programming of our brains so much – it’s definitely easier to go with the flow. The problem is, again, that the heinous acts that our moral intuitions tell us to loathe are being perpetrated by other humans whose moral intuitions are telling them something different. So we won’t all be happier if we follow our intuitions. Our intuitions involve repressing others’ intuitions, and I’m not sure I see the objective justification for that.

        1. I don’t feel like that’s accurate. Harris doesn’t argue (I think) for why we should increase well-being; he points out that we are all already disposed to try to increase well-being. It is a part of being human. This isn’t to say that we are innately inclined to seek the greatest happiness of the greatest number–from Harris’ talk at the third Beyond Belief meeting: “It’s true that there is a limit to our sensitivity and our concern for the suffering of others but those limits are themselves part of our personal and collective concern”.

          So there is no “number 2”. That we should increase well-being doesn’t need to be justified. But people have different ideas about what well-being is and about how to “move upward”. And it seems to me that a sizable amount of the differences in conceptions of well-being derive from different beliefs about the world, particularly religious beliefs. Insofar as the conceptions of well-being are the same, surely science is the best tool to discriminate between better and worse ways to get there. But where they are not the same, Harris seems to be saying that some really are better or worse than others. That there is an “objective” difference that is realized, and is thus ascertainable, ultimately, at the level of the brain. And this objectivity Harris says is analogous to the objectivity of the domain of physical health.

          Even if there is no objective morality, doing things that feel wrong puts a lot of stress on a human mind. Life will be better if we treat others well, rather than treat others badly, secure in the knowledge that there is no objective reason not to do so.

          That “life will be better” is the objective reason.

          I’m gonna read the book now.

          1. So there is no “number 2”.

            Then it’s easy to make one, just by asking the question. Why should we increase well-being? More importantly, why should we stop others from doing what they want to do in order to increase someone else’s well-being? Harris needs to have an answer. I can’t see how the answer is anything other than “because my brain makes me want to.”

            That “life will be better” is the objective reason.

            Why did you ignore the problem with that argument? You know, the one that I stated in the next paragraph in the sentence beginning with “The problem is…”?

          2. Why should we increase well-being?

            You do realize that is exactly like asking “Why should we increase physical health?” right?

            Tim, by your logic, the discipline of medicine could not get off the ground or could not be scientific until that question was answered.

            Here is Harris on this from his essay Moral Confusion in the Name of Science:

            “One of my critics put the concern this way: “Why should human wellbeing matter to us?” Well, why should logical coherence matter to us? Why should historical veracity matter to us? Why should experimental evidence matter to us? These are profound and profoundly stupid questions. No framework of knowledge can withstand such skepticism, for none is perfectly self-justifying. Without being able to stand entirely outside of a framework, one is always open to the charge that the framework rests on nothing, that its axioms are wrong, or that there are foundational questions it cannot answer. So what? Science and rationality generally are based on intuitions and concepts that cannot be reduced or justified. Just try defining “causation” in non-circular terms. If you manage it, I really want to hear from you. Or try to justify transitivity in logic: if A = B and B = C, then A = C. A skeptic could say that this is nothing more than an assumption that we’ve built into the definition of “equality.” Others will be free to define “equality” differently. Yes, they will. And we will be free to call them “imbeciles.” Seen in this light, moral relativism should be no more tempting than physical, biological, mathematical, or logical relativism. There are better and worse ways to define our terms; there are more and less coherent ways to think about reality; and there are—is there any doubt about this?—many ways to seek fulfillment in this life and not find it.”

            As for the “unhelpful tangents”, I had to pursue that line of thought to show you that your hypothetical scenario is utterly useless in this discussion.

          3. So what you’re telling me is that you can’t answer my question, and neither can Sam Harris.

            Harris is basically saying not to ask the question because it’s too hard to answer. After all, we can’t prove the axioms of logic or science, so why should he have to provide an argument for his conclusion? No, I don’t think it works that way. If it’s so obvious that we should maximize well-being, then it should be simple to give the argument for it.

            You do realize that is exactly like asking “Why should we increase physical health?” right?

            Yes.

            You don’t seem to realize that “we should increase physical health” is a value judgement. A subjective opinion. Humans agree on it because it is human nature to value health. But there is no law of the universe saying that physical health must be increased. If you accept the premise that increasing physical health is a good thing, then the prescription that “we should do it” logically follows. But you need to have that premise, and it can come only from our subjective values about health.

            As you say, this is the same as with increasing well-being. If you say that increasing well-being is a good thing, then the conclusion that “we should do it” logically follows. But the premise is just a preference built into your mind. And based on that preference, you’re prepared to override the preferences built into other people’s minds, and tell them what they can and cannot do?

            I say, “not so fast.” Harris, or you, need to present the argument for why that’s what we should do.

            And you still haven’t responded to my argument that you ignored in your previous comment.

          4. One thing at a time. We can’t seem to resolve a single point of disagreement, so why try to tackle several at once?

            Everything you’ve said in your last post implies that medicine likewise can’t be a science in any sense. You’re applying a double standard. We’re not going to go any further until you get consistent with your logic.

          5. Everything you’ve said in your last post implies that medicine likewise can’t be a science in any sense.

            Do you realize how often you assert things without backing them up? I’d like to tell you exactly why you’re wrong, but then again, I don’t know exactly why you’ve come to this erroneous conclusion, because you didn’t provide any of your thought process for me to follow.

            But I’ll do what I can.

            Of course medicine is a science. It’s based on objective fact. That I have the ability as an adult to digest lactose is an objective fact. That I ruptured the ACL in my right knee at the age of 10 is an objective fact. That I am at a healthy weight for my size is an objective fact.

            Notice there are no “should”s in any of those sentences. You seem to think of is and ought as the same thing. They’re not. Add a “should” to the sentence and suddenly we’re not talking about objective reality anymore – we’re talking about a subjective value judgement. “I should try to stay at a healthy weight.” Oh really? Why is that? What is the argument that leads to that conclusion? The only argument that can lead to that conclusion – and this is like the 5th time I’ve said it in this thread – is one that involves subjective values in your head. Watch:

            Let’s say that I am having trouble staying at a healthy weight for my size. It would take great effort on my part to halt my weight gain. We can make an argument like this…

            1. If you value staying at a healthy weight to the extent that you would expend great effort to do so, then you should expend great effort to stay at a healthy weight. (If A, then B)
            2. You do, in fact, value staying at a healthy weight so much that you would expend great effort to do so. (A)
            ————–
            3. Therefore, you should try to maintain a healthy weight even if it takes great effort. (B)

            It’s a simple argument. Once more, with just the symbols:
            1. If A, then B
            2. A
            —(therefore)—–
            3. B

            Number 1 is a premise we can all accept – it’s only logical that if you value something to extent X, then you’re willing to put in effort X to keep it. Number 2 is what each of us has to provide for ourselves. Number 2 is our values. The conclusion is not valid without it. It’s objectively true, based on medical science, that the state of my health is what it is, but it is not objective that I should do this or that about it. Such arguments can only be based on what we value. That is the difference between the is and the ought, and the difference you have ignored.

          6. Tim,

            See my comments about money and banking, and about Robin Hood being a criminal, in my replies to Russell today.

            I think that may clarify what we’re talking about.

            You’re right, of course, that what we’re talking about is stuff in people’s heads, and whether they do or don’t care about certain kinds of things.

            Nobody here is claiming to have an argument that will make an amoral person moral.

            I think you’re being greedily reductive in saying that it’s just about “what makes us feel bad.”

            When I feel bad about doing something wrong, that’s different than when I feel bad about saying something that’s merely stupid, or when I feel bad because I hit my thumb with a hammer. The differences matter a lot to what we’re talking about.

            There’s a distinctive logic to moral judgments, and only certain things count, whether in fact you care about them in the usual way or not.

            The actual claim is that like money, morality is an identifiable and distinctive kind of thing, with a certain domain of reference and a certain kind of internal logic.

            If that story is true in a strong sense, we can (in some clear cases) make statements about what’s morally better or worse in much the same way we make statements like “wanton killing of humans is illegal” or “$200 is more money than $100.”

            What’s “illegal” or “more money” does depend on certain things about human activities, but it’s not simply subjective.

            As with money, there has to be a certain logic to the system for something to count as money. It’s an artificial system, but not an arbitrary one. (You can’t, on a whim, decide that $100 is more than $200, and the basic rules of money were not so much invented by humans as discovered.)

            And arguably, unlike the issue of what’s “legal” or “illegal,” what’s moral is not a highly contingent matter of what some humans decided they’d agree would be one or the other.

            As with money, there’s a logic there to be discovered, which follows naturally from certain innate principles, external truths, and basic rationality. (At least in easy, clear cases. There are a lot of gray and weird areas as well.)

            You can opt out of any of these games—you may not care what’s illegal, or what’s money, or what’s immoral.

            That doesn’t mean that those terms don’t have specific referents, explainable in objective terms.

          7. I haven’t asserted anything without backing it up. The problem is that you can’t see the contradiction in your own thinking. You have conflated two different value judgments. We were initially talking about the value judgments “we should increase physical health” and “we should increase well-being”, those two being analogous for the respective areas of human experience, physical health and well-being. So those would be the goals that start us on the endeavors of systematically investigating health and well-being, and you have admitted that those are essentially innate desires.

            But you have been objecting to the ideas of Harris on the grounds that that initial value judgment has not been justified. I simply pointed out that you could apply the exact same thinking to the discipline of medicine. One couldn’t even begin to systematically investigate human health until they had justified “we should increase physical health”. You completely evaded this. You replied that I was saying the task is too hard. But I wasn’t saying that. I was saying that just as it doesn’t need justification in the case of medicine, it doesn’t for well-being. You failed to give any reason as to why the one should need it but the other shouldn’t.

            What you did was shift the value judgment. You started talking about the specific value judgments that individuals make when scientific information about the causes and conditions of their health has been provided to them and they have to make decisions. Cost-benefit analyses, essentially. An analog of this might be the judgment a person has to make when provided the scientific information that, for example, people who have frequent contact with regular friends report higher happiness (just a hypothetical to make the point). So is it the project of objectively investigating well-being that you object to? Or making prescriptions on the basis of whatever knowledge would be produced?

            You haven’t pointed out the difference that makes a normative science of ethics different from a normative science of medicine. They would both seem to begin with an axiom about what is desirable, in a general sense, and then proceed to seek lawful relationships among the relevant subject matter that can be used to guide our actions. Of course there will be cost-benefit analyses to be made. And yes, the specific costs and benefits will sometimes relate to what we value. But these individual differences don’t seem to pose a problem for the practice of medicine for you. So why should they for a normative ethics?

          8. @Paul W: I will have time to read your comments later today and will respond then.

            @Nick B: I haven’t asserted anything without backing it up.

            Nonsense. You asserted, without argument, that I implied that “medicine likewise can’t be a science in any sense.” How about this – your comments imply that there is an army of lavender elephants ransacking the South African countryside. Shall we play by your rules and agree that I can make assertions like this without backing them up? Or will you refrain from doing that to me?

            One couldn’t even begin to systematically investigate human health until they had justified “we should increase physical health”

            Completely false. You can investigate anything without deciding to act on it. I can investigate broken bones but not set them, or bacteria but not kill them. I can investigate the human moral intuition without dictating to people how they should live their lives. You’re conflating “studying something” with “making prescriptions about it.” I think we agree that there’s nothing wrong with study. I hope we agree that you need very good justification if you’re going to dictate to people how to live their lives.

            And yes, the specific costs and benefits will sometimes relate to what we value. But these individual differences don’t seem to pose a problem for the practice of medicine for you. So why should they for a normative ethics?

            Because in medicine I choose for myself what treatment I want, based on my values. You and Harris are saying that we will choose for the human race what they can and cannot do, based on our desire to increase well-being. I have stated this difference several times now.

            And that brings me back to my question, which you apparently cannot answer:

            How do you objectively justify dictating to others what they can and cannot do? Just give me the argument. That’s what your entire position rests on. If you can’t back it up, you have nothing.

          9. @Paul W: I agree with much of what you say in your response to Russell below, and let me also say that I appreciate the clarity with which you say it.

            That said, on to the disagreements…

            Finding out there’s no God, and that the structure of morality isn’t woven into the fabric of spacetime in spooky ways doesn’t make you suddenly stop caring about right and wrong.

            As with money, a crucial part of this is that even if it’s not what you naively thought it was, it’s still good for what you thought it was good for.

            Not exactly. What I thought morality was good for was that it provided laws that ought not be broken. Morality used to be for me, and still is for many, a way of saying what is just “wrong.” Killing people for no reason is just wrong and you cannot do it.

            But morality isn’t good for that. It isn’t a law that is out there in the universe somewhere. So even though we still care about right and wrong, morality isn’t what we thought it was, and that matters. It means we have to think twice about enforcing what we feel is right over what others feel is right. Without going any further, I think we can agree on this much, yes?

            There’s a distinctive logic to moral judgments, and only certain things count, whether in fact you care about them in the usual way or not.

            Well first of all, let’s be clear. The reason some concepts have moral relevance and others don’t is due to nothing other than chance and natural selection. The reason “whether or not to kill this guy” is a moral decision and “whether I should eat my beans with a fork or a spoon” is not is that we evolved an emotional response to one but not the other. Theoretically, if we had the power to rewire people’s brains, we could make someone feel that eating beans with a fork is morally wrong, as well.

            That said, I’m not sure where you’re going with this point. Our moral intuitions have basic rules (though they are somewhat malleable by culture and logical reasoning). How does that influence this discussion?

            ——–
            I want to talk about one last thing that’s been bothering me. Admittedly, I haven’t read TML yet, but it seems like Sam Harris’ thought process about what well-being is and how to increase it must be taking cues from his moral intuition. I have the suspicion that he isn’t objectively measuring well-being and looking for ways to increase it, but instead taking the actions that his moral intuition tells him are wrong and finding ways to justify that feeling with appeals to a decrease in well-being. Or maybe I’m totally wrong. But examples such as the following, where moral intuition and well-being do not coincide, come to mind.

            1. Most people would feel bad about lying to a respected superior to cover up a mistake they had made. What if there were a case where the only difference lying would make was whether or not you were punished/reprimanded/thought less of? In other words, telling the truth would not serve any practical benefit, and all it would accomplish would be getting you in trouble. Objectively, well-being would decrease if you told the truth. And yet lying still feels wrong.

            2. What if you stole something that you needed from a wealthy individual who would never notice it was gone? In this scenario, well-being would practically increase as a result, because your well-being would go up and the other person’s wouldn’t go down. Yet it still feels wrong.

            3. Killing 1 person to save 5 feels wrong, if you’re harvesting organs or pushing a fat man onto railroad tracks. But it feels okay if you’re just flipping a switch. In all cases, the well-being equation is basically the same.

            4. Someone has just killed your mother right in front of you. Most people would agree that some kind of physical aggression against this person is entirely warranted, even if the only purpose it serves is to inflict pain and vent your rage (and even if your mother’s killer does not plan on harming you in any way). But aren’t we decreasing well-being here? Does that mean you are unwarranted in attacking the person who killed your mother?

            Sam Harris’ model doesn’t really feel right in any of these examples, does it? But of course, the entire idea is that we were going to be contradicting someone’s feelings the whole time. It just doesn’t feel so good when those feelings are ours. So… what to do?

            And just one final example: slavery. This should be an easy one. Or is it? The well-being of the enslaved goes waaaay down. But the slave-owner’s well-being goes waaaay up! Of course Harris’ model has to provide one way of weighing one against the other. And I can bet you that the slave’s captivity is going to end up weighing more as a negative than the slave-owner’s quality of life weighs as a positive. But do you really think Harris didn’t engineer his model this way because his moral intuition told him that slavery just couldn’t weigh less? Otherwise, how else did he come up with the equation? Aren’t we back to just following our intuitions again and enforcing them on others??
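
            To make that worry concrete, here is a toy calculation in Python. All the numbers and the weighting function are invented, purely for illustration: with a naive sum of well-being changes, the verdict on slavery depends entirely on weights that have to be chosen in advance.

                # Toy sketch with made-up numbers: naive aggregate well-being.
                def aggregate(deltas, weight=lambda d: d):
                    """Total well-being change under a chosen weighting."""
                    return sum(weight(d) for d in deltas)

                # Suppose the slave loses 10 units and the owner gains 3.
                slavery = [-10, +3]
                print(aggregate(slavery))  # -7: slavery comes out negative

                # But a weighting that discounts losses flips the sign:
                print(aggregate(slavery, lambda d: 0.2 * d if d < 0 else d))  # 1.0

                # The model's verdict tracks whichever weights were built in,
                # which is exactly the intuition-smuggling worry above.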

  11. Tim et alii,

    I think a lot of the objections to what Harris is saying—or what people think he’s saying—are naive.

    Harris doesn’t have to have an algorithm for solving moral problems, or a formula for precisely quantifying moral goods, or anything of the sort.

    He also doesn’t have to justify his own fairly straight Utilitarian leanings (if that’s what they are).

    I think one reason he talks about “flourishing” is to distinguish between Utility (happiness) and other ideas of what makes life better or worse.

    As I understand it, his talk of a moral “landscape” is partly meant to allow for different ideas of which peaks are “higher” than which other peaks, and which valleys are “lower,” but still preserve the idea that by almost any moral standard, some valleys are much lower than some peaks.

    We may never agree on what is ideal—morality may never be a precise quantitative science, and I think it won’t—but that is not necessary for science to enable moral progress.

    For example, I have some fairly straight Utilitarian intuitions, and think that the average well-being is more important than the minimum well-being. I therefore disagree with Rawls and Singer about the “maximin” principle, i.e., extremely heavy emphasis on improving the lot of the very worst off, at almost any expense to the general good.

    In practice, though, Rawls and Singer and I could agree on a whole lot of things that would count as making things better. We’d all agree that maximizing the welfare of the best off at the expense of the average well-being is A Bad Thing. We might disagree about what Utopia would look like, in principle and in practical terms, but substantially agree on avoiding most dystopias.

    As I understand him, Harris is not suggesting that we’ll ever have an exact science of morality, where we can turn the crank and out pops the one correct moral choice. (I think he makes it pretty clear we shouldn’t expect that.)

    I don’t think that he’s claiming that there is one Right and True absolute morality, e.g., that we’ll ever convince all the Kantians to be Utilitarians.

    What he is claiming is that aside from goofy stuff like Divine Command Theory and particular dictates of particular gods and so on, moral systems tend to converge to a useful degree, for scientifically explicable reasons.

    Consider gay marriage. We don’t have to resolve all political and moral issues with a magic formula/algorithm to see that gay rights are a Good Thing. Without unsupportable claims about God disapproving of people’s choices of genders and orifices, discrimination against homosexuals is unsupportable. We have scientific evidence that being gay isn’t a choice, by and large, and that allowing gay marriage doesn’t destroy straight marriage, or cause hurricanes or the collapse of societies. You can be a Benthamite Utilitarian or a Kantian or a Virtue Ethicist, and a libertarian or a socialist, and still converge to the same answer: discrimination against gays is stupid and harmful, and morally wrong by any reasonable standard.

    There may well be gray areas in any science of morality, and I would expect that there would be, as there are in other sciences.

    The term “moral” may turn out to be intrinsically somewhat ambiguous, as the word “alive” is.

    Is a virus alive? Is it alive when it’s actively controlling a cell? I think you can make a reasonable case either way.

    Similarly, consider an evolving proto-life autocatalytic soup of chemicals, with something resembling metabolism, but nothing resembling genes. Is that alive?

    In both cases, the answer is “yes and no”—it depends on what aspect of prototypical life you are most interested in. (E.g., genes vs. metabolism.)

    And if you ask which is more alive, again the answer could go either way.

    That does not change the fact that my dog is clearly alive, and my cup of coffee is clearly not alive. We don’t need our concepts to be absolutely sharp-edged and universally applicable for them to be useful.

    In science, they’re typically not. (Just consider “life” and “species,” for starters.) Most scientific concepts don’t have classical definitions with absolutely necessary and absolutely sufficient conditions.

    Harris’s inability to deliver a formula or algorithm for a science of morality is likewise not a fatal obstacle for his project. It’s only to be expected.

    I can’t defend the details of his view, because I’m not clear on them myself, but I think he’s doing basically the right thing.

    He’s pointing out that moral reasoning is largely reasoning from evidence, and from a very few basic evaluative principles that are very widely shared for scientifically explicable reasons.

    It is no accident that moral systems are universally justified in terms of human flourishing, cross-culturally.

    (Even Divine Command theories are frequently justified by saying that obedience to divine command is good for you in some way or other—e.g., that God knows what’s good for you better than you do, or that God’s wrath will make disobedience bad for you.)

    I think Harris is arguing that

    1) morality is systematically concerned with limiting selfishness and promoting more general flourishing, both in terms of its evolutionary function and at a basic psychological level. (The latter is actually more important, but the former explains the regularity.)

    2) various conceptions of flourishing tend to be correlated (or at least believed to be correlated), such that there can be considerable agreement despite considerable disagreement.

    3) On rational reflection in light of actual facts, moral systems tend to converge usefully. In particular, the capacity for broad empathy does not go away on informed reflection, but excuses for denying others moral consideration generally do.

    The latter is an empirical claim, which needs study, but I think it’s basically right. (And it’s the basic theme of Peter Singer’s The Expanding Circle.)

    I’d add a further claim:

    4) religious moral schemes tend to coevolve with societies to justify the status quo, and in particular to justify the exploitation of foreigners and social inferiors. They do so by creating and justifying ingroup/outgroup distinctions, vilifying those who are different, etc. This is part of the normal evolved function of religion, and why religious morality tends to be a travesty. The widespread idea that we need religion for morality couldn’t be more wrong—religion tends systematically to obscure real moral issues and focus on divisive sideshows in ways that rationalize oppression and exploitation.

    We can argue about varieties of moral realism and quasi-realism and so on, but I think most of us more or less agree with the most important points Harris is pushing. We may quibble, but we shouldn’t lose sight of that.

    He’s pointing out that when it comes to religious morality, the Emperor has no clothes, and we actually have a pretty decent set of clothing ourselves. Not opulent, but not bad.

    1. I don’t mean this in at all a snarky way, but I think Harris would benefit from adjusting the way he’s presenting his ideas, without changing their substance. For instance, I think you’ve put his case better than he has. (I haven’t read TML, but I listened to his TED talk and read his various replies to critics in its wake.) Of course, it’s possible that you’re putting words in his mouth, and that your case actually is more reasonable than his…

      1. Yes – the thing is, Harris didn’t present it in that way in the TED talk. He spoke about objectivity – and that has a standard meaning in the metaethical literature, and more importantly the standard philosophical meaning seems to reflect the kind of ultimate objectivity that a lot of ordinary people seem to want morality to have.

        When he was queried on this, he didn’t so much clarify the things he said in the TED talk as go on the offensive against people who’d questioned his arguments and terminology. He lost a certain amount of good will – though I for one have continued ever since to say a lot of things that sing his praises – and created a certain amount of confusion.

        The trouble is, the TED talk just wasn’t very good. It was a slick performance, and it got applause from its immediate audience in the auditorium, but there were all sorts of problems with the way he presented the argument. But when he was criticised he seemed unwilling to admit that he’d made mistakes. That’s understandable, of course; none of us enjoy admitting mistakes.

        But maybe the book doesn’t have these problems. If it was infected by them, there was time for him to fix them, in the light of the criticisms, before the book reached its final form. So, I guess I should leave it for now, and say more when I’ve read it.

    2. Speaking of formulas and algorithms, I know Stewart was making a joke, but how annoying was it that he kept asking how can we “mathematically” determine whether one act is better than another? “Science” and “math” are not synonyms!

  12. Richard Wein:

    Metaethics is a matter of empirical enquiry about the nature of morality. And I would argue that the best interpretation of the evidence is that there is no such thing as objective moral values, i.e. there is no fact of the matter as to what is moral, because the property of being moral is not an objective property. “Moral” is, roughly speaking, a label that people assign to things they approve of.

    I disagree. When people say that something is wrong, they are generally presupposing that there is some standard of right and wrong that goes beyond their current knowledge and attitudes.

    They might be wrong about that, in which case Error Theory would be true, but it’s clearly not just a statement of a personal attitude.

    I don’t think that Error Theory is true, either, because I think the crucial presuppositions of normal moral talk are salvageable.

    To show that, I’d have to tease out the crucial presuppositions, but I think that can be done with cognitive psychology and cognitive anthropology.

    I believe it’s a fact that people systematically justify moral schemes (when pressed) in terms rather like Harris’s “flourishing.”

    I don’t think the “natural kind” of morality is very simple—there are several sometimes competing principles—but I think Harris is basically right that justification in terms of the more general good is the most basic. (Psychologically as well as evolutionarily.)

    He seems to be going the same route as many “moral naturalists”, in redefining “moral” to mean something like “that which maximises well-being”, and then conflating that with the ordinary meaning of the word. In other words, moral naturalism of this sort commits a fallacy of equivocation.

    I don’t see that. I don’t think it’s redefining “morality” to say that it’s crucially (if not exclusively) about limiting self-interest and promoting a more general good.

    I think that’s scientifically correct. That is just the kind of thing that morality naturally is. Morality has a certain natural subject matter (self/other conflicts of interest) and a certain natural valence (limiting selfishness and promoting a more general good).

    If you’re not interested in self/other conflicts, you’re not a moral unit. If you don’t have a moral sense that counters selfish interests with a concern for others, you’re a broken moral unit.

    This isn’t a circular definition—it’s an empirical claim, like claiming that the circulatory system circulates blood to oxygenate tissues. Morality is an evolved faculty with a function, and with scientifically identifiable malfunctions.

    (The story isn’t simple, though, because there are several levels of analysis involved. What counts as a malfunction at one level may be a feature, not a bug, at another.)

    1. Actually, error theorists don’t deny that morality may have some kind of function or purpose or point. In fact, that’s exactly what they tend to say. J.L. Mackie is very clear on this. I actually think he concedes too much, if anything, since I think that we can make demands of morality that go beyond whatever functions it has actually had historically. But I don’t think you’ll find any error theorist who denies that morality has, historically, functioned in certain ways. Indeed, that is part of the theory as Mackie develops it (I don’t believe that other leading error theorists such as Richard Garner or Richard Joyce would take a different view).

      The question isn’t whether morality has historically functioned in certain ways – which error theorists emphasise. It’s whether there are any moral standards that are inescapably binding or authoritative, such that someone who does not accept them and could get away with breaching them is making a mistake about the world or the nature of reality. You can call such a person “broken”, but that is tendentious language and beside the point. The question is whether such a person is making a mistake about the world or the nature of reality. The answer is surely “not necessarily”.

      Someone might know everything there is to know about the world, including the historical functions of morality, then reply, “What’s that to me?” This is not the sort of person we want to associate with, and such people would be dangerous to us, but the point is that they are not making a mistake about the facts.

      Another way of looking at this is to ask whether an intelligent alien being that refuses to be bound by our morality is necessarily making any mistake about the world. As far as I can see, the answer is, “No.” It might know everything there is to know about the world, including the historical functions of morality for human societies, but go ahead and kill and eat us if it feels like doing so and can get away with it. It can know everything there is to know, but reply, “What is that to me?” Our moral rules do not bind it inescapably in the way that facts about the world do.

      Error theorists say that a lot of ordinary people – and many philosophers – make an error, because they are disinclined to accept this. That disinclination may even have come to infect the language, so that “morally wrong” arguably means “forbidden by a standard of conduct that has an inescapably binding force, much like that of a fact about the world”.

      These people are inclined to think that there are spooky moral facts, similar to empirical facts, Out There in the metaphysical fabric of the world, or that morality is Up There in the mind of a god, or somehow logically a priori, or that morality has some other basis that goes deeper than its social functions for a particular species with a particular evolved psychological nature. A lot of philosophers develop explicit theories like this, and a lot of ordinary people talk as if they think that something like this must be right, even if they leave it to philosophers and theologians to work out how it could be right.

      Error theorists think that this kind of thinking about morality is widespread – perhaps so much so as to infect the language itself – but that it is in error.

      It looks to me as if they are probably right, though of course there are interesting empirical questions as to how much ordinary people really do think in this spooky way about morality.

      1. I think ordinary people do often think in spooky ways about morality, when they try to think deeply about it, but I don’t think that’s fatal.

        People can have basic misconceptions about things, but still refer successfully to them, and say true and important things about them.

        For example, consider some kids who don’t understand money and banking. They may have profound misconceptions about money in the bank, e.g., that money is the same thing as currency, and that money in the bank is currency in the bank, and that particular bills and coins in the bank belong to particular people. They may have no idea that most money isn’t currency at all—it’s just an abstraction implemented by bit patterns in computers, manipulated according to certain strict rules.

        Suppose one of those kids has $100 in the bank, and another has $200. If the latter one says “I have twice as much money in the bank,” that statement is meaningful and objectively true, even if both of them mistakenly think it means there are piles of currency with their names on them.

        (I’m basically just applying principles of the “New” or “Causal” theory of reference here.)

        If these kids then have it explained to them how money and banking work, what will their response be?

        It won’t be to say that they were wrong to think they had money in the bank, or wrong to think one had twice as much as the other. It’ll probably be more like, “Oh, OK, whatever.” It just won’t matter to them that they were quite mistaken about the nature of the money they have, or what it means to have that kind of money, as long as they know how much it is and can spend that much—and they certainly won’t stop caring about money, or how much they have.

        I think one of Harris’s points is (or should be) that we’re in an interestingly similar situation with respect to morality. We may not understand what it really is, or how it really works, but we know it’s important and we care about it.

        Crucially, when we do learn what it really is, we don’t stop caring about it. The issue doesn’t go away. Finding out there’s no God, and that the structure of morality isn’t woven into the fabric of spacetime in spooky ways doesn’t make you suddenly stop caring about right and wrong.

        As with money, a crucial part of this is that even if it’s not what you naively thought it was, it’s still good for what you thought it was good for.

        Ordinary people have certain intuitions about morality, e.g., that it’s somehow about behaving in ways that are good for people. It’s largely about benevolence, and even when it’s not directly about that, it is assumed to have positive effects overall.

        Those intuitions do survive radical transformations of the concept of morality, so that (as with learning about money) we’re recognizably talking about the same actual thing we were talking about before, even if we understand it very differently in certain obvious respects.

        Money in computers is as real as money needs to be, and morality in our heads is as real as morality needs to be. It still has certain crucial characteristics we always thought it did, even if it’s made out of something surprising.

        In particular, even ordinary people have an intuitive grasp of certain things about morality that are correct.

        In particular, they understand that moral reasoning is reasoning, and not just a matter of opinion with no rules. As with money, there are rules for manipulating the representations involved, and those rules are crucial to making it what it is.

        (Ordinary people do understand that it is possible to be mistaken about morality—it’s not just a matter of what you like, and you can make errors of fact or inference, and get the wrong answer. They also realize that you can learn that such mistakes are mistakes. I think this recognition that you can be mistaken about morality is one of the things that makes the concept of right and wrong able to survive radical transformations and still be recognizably morality. E.g., the Euthyphro Dilemma doesn’t convince people that morality doesn’t exist, even if it convinces them that it’s not a matter of what God wants, as they might previously have assumed.)

        No-rules money doesn’t even count as money—e.g., if you can just fabricate money freely, it stops working as a medium of exchange. You might prefer to increase the counter representing your money and make yourself rich, but if people are allowed to do that, it’s not even money anymore.

        (Notice that that’s not a matter of subjective opinion. It’s an objective fact. If you think you can freely change the rules of manipulating money, and wish money into existence, you just don’t understand what money necessarily is, and are quite mistaken.)
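
        To see how literally the “rules are constitutive” point can be taken, here is a minimal sketch in Python (the class and the numbers are mine, just for illustration): a toy ledger where balances can only be moved, never wished into existence.

            # Toy ledger, invented for illustration: the constraint that money
            # can be transferred but not fabricated is what makes the balances
            # behave like money at all.
            class Ledger:
                def __init__(self, balances):
                    self.balances = dict(balances)

                def transfer(self, src, dst, amount):
                    # The rule: funds move between accounts; the total is conserved.
                    if amount <= 0 or self.balances[src] < amount:
                        raise ValueError("not a permitted operation on money")
                    self.balances[src] -= amount
                    self.balances[dst] += amount

            bank = Ledger({"kid_a": 100, "kid_b": 200})
            # "kid_b has twice as much as kid_a" is objectively true of the
            # system, whether or not either kid understands how it works.
            assert bank.balances["kid_b"] == 2 * bank.balances["kid_a"]

            bank.transfer("kid_b", "kid_a", 50)  # permitted: the total is conserved
            assert sum(bank.balances.values()) == 300
            # There is deliberately no mint() method: "no-rules money" isn't money.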

        Similarly, no-rules morality just doesn’t work as morality. There are constraints on what you can think and do that are necessary for your reasoning to count as moral reasoning, or your actions to count as moral actions.

        Arguably, that’s not a matter of opinion, either. If you think you can cobble up any old set of axioms and have it count as a moral scheme, you’re just wrong—you don’t understand what morality is. Like money, morality works in certain ways and not others.

        The fact that ordinary people can’t articulate those deep constraints doesn’t make the claim false. (Most people don’t know banking rules, either, although they can recognize the logic behind them if somebody explains it.)

        I think that most of us here are ordinary people in the most important sense, and for most of us, learning that there’s no spooky divine will involved in morality didn’t make us think that morality doesn’t exist—that was never what was really important about morality.

        An important feature of this kind of account—which I think argues against Error Theory, but I’m not sure—is that people can correctly use concepts that they’re unclear on, and even substantially mistaken about, to say meaningful and true things.

        In the money example, if one kid says to another “I have twice as much money in the bank as you,” we might elaborate that as “I’m not sure what money really is, and I’m not sure what counts as it being in a bank, or what counts as my having it in the bank, but I’m pretty sure that I have twice as much as you.”

        Both kids assume that there are fairly precise meanings to the words, even if they don’t know them, and don’t need to know them for most purposes—e.g., deciding who can afford what.

        I think Harris is saying something related about morality. He’s saying that yes, there is such a thing as morality, and yes, people are right that it’s largely about what they think it’s about in certain respects—limiting assholery, making the world better, avoiding moral mistakes, etc.—even if it doesn’t work the way they think it does in other respects. (It’s not magic.)

        He’s also saying that the surprising facts about morality do have some important consequences that people should know about—e.g., that certain “moralistic” crap is crap, because it’s based on moral mistakes. (E.g., obedience to a nonexistent moral authority, who even if He existed would be an asshole, not a moral authority at all.)

        1. BTW, Russell, the foregoing is not meant to be arguing against your claim that Harris is using a nonstandard meaning of “objective” in philosophical terminology.

          I guess what it is arguing is that it’s a reasonable usage of objective, and more consistent with ordinary people’s use of ordinary language than it might appear at first.

  13. The question isn’t whether morality has historically functioned in certain ways – which error theorists emphasize. It’s whether there are any moral standards that are inescapably binding or authoritative, such that someone who does not accept them and could get away with breaching them is making a mistake about the world or the nature of reality.

    I’m stumbling over this idea of something being binding in a seemingly magical way.

    I’m enough of a committed materialist cognitivist humanist that I don’t expect something being “morally binding” to imply any kind of spooky metaphysical force, or anything like that.

    As I understand Harris’s argument, morality is a particular kind of phenomenon with particular properties, in particular certain core principles that you converge to on rational reflection.

    If you opt out of morality (e.g., if you’re a sociopath) or make a moral mistake and can’t recognize it, you are morally bad or wrong, but there’s no God or Karma that’s going to get you for it, or anything like that. Other humans may recognize the problem and treat you accordingly (if they care), but that’s about it.

    I wouldn’t say that means that moral principles are not binding on you. Like laws, they’re “binding” on everybody, even though there’s no metaphysical magical enforcement, and even if there’s no actual enforcement at all.

    Saying that somebody is doing something wrong or is a bad person is a lot like saying they’re breaking the law, or that they’re a criminal. It’s just a statement about their failing to conform to a certain kind of principle, which they and you may or may not personally care about, but should be able to recognize.

    You can call such a person “broken”, but that is tendentious language and beside the point. The question is whether such a person is making a mistake about the world or the nature of reality. The answer is surely “not necessarily”.

    Consider calling Robin Hood a criminal. That’s not tendentious or beside the point—it’s an objective fact. (In a fictitious world, of course.)

    You may or may not care about the laws that make Robin Hood a criminal, but they exist, they are legally binding on him, so he’s a criminal.

    You may even object to the laws, and think that Robin Hood’s a great guy doing things you like, but if you say he’s not a criminal, you are simply objectively mistaken.

    Similarly, if Harris is right that there’s an identifiable set of core moral principles constitutive of morality, then if you’re going against those principles, you’re being immoral.

    You don’t have to care, and you may not get caught or punished, but if he’s right that there are facts like that about morality, then there are at least some corresponding facts about when people are doing right or wrong.

    That’s all I’d ever expect of a “binding” moral system, given that I don’t believe in spooky metaphysics.

    When I say that somebody’s a “broken” moral unit, all that means is that their moral sense is failing to function in accordance with the principles of moral senses.

    Robin Hood is a similarly “broken unit” in legal terms—he’s failing to be a proper “law-abiding citizen” unit. Given that I don’t like the legal system in that fictitious universe, and think it justifies his vigilantism, that’s fine by me and I’ll root for him. Still, I can recognize that he’s a malfunctioning unit in legal terms.

    Likewise, suppose you have a few rampaging sociopaths who like each other in an aesthetic way, and root for each other pursuing their destructive urges. They’re all broken units from a moral point of view, whether or not they care about that, or even if they like it in themselves and each other.

    As for the “historical function” issue…

    I agree—and I think that Harris would, too—that the historical function is not the only thing, or even the main thing.

    What matters is what the moral faculty or competence consists of, whether there’s a recognizable core set of principles, and whether it converges to useful agreement. (On sufficient rational reflection in light of actual facts.)

    This is a complicated and tricky subject, but I think Harris could make a pretty good case. (I don’t have his book yet, so I don’t know if he does.)

    It would go roughly like this:

    There are several natural principles of morality, but they’re not equally important. In particular, on rational reflection, people recognize that some are necessarily subsidiary to others—or fail to recognize it only due to moral mistakes. (Errors of fact, failure to reflect, etc.)

    Each of these principles may have an innate basis, and be more or less free-floating, developmentally. All of them show up in some form in every culture, for scientifically explicable reasons.

    Normal moral development involves trying to integrate these principles into a workable moral system, to make sense of the moral world—there’s a higher-level principle of consistency in trying to square these sometimes conflicting principles with each other. How that’s done may not be mostly innate, but a matter of rational reflection and incremental adjustment of priorities, etc.

    What specific cultures and religions do is largely to paper over failures of rationality in this integration process by introducing spurious “facts,” pounding on certain principles at the expense of others, etc.

    1. Impartiality and benevolence. (Roughly a la Rawls’s Veil of Ignorance.) Moral systems generally arbitrate between selfish interests and group interests, limiting the former in favor of the latter; any system that doesn’t would hardly count as a “moral” system.

    2. Honesty and rigor—it’s assumed that you’re supposed to get morality right, not kid yourself, not be sloppy and make moral mistakes, not sneak self-serving stuff into overtly altruistic stuff, etc. This may start out, developmentally, as a free-floating principle, but on reflection it ends up being subsidiary to the first principle—it’s hard to act in ways that are actually for the general good if you’re getting it wrong.

    3. Obedience to authority (e.g., God). For some people this may be the strongest moral impulse, especially as children, but on rational reflection it isn’t sustainable that way—e.g., you realize a la Euthyphro that you can’t be obedient to a moral authority if that authority doesn’t exist or isn’t actually morally authoritative. On sufficient rational reflection, you end up realizing that there are no moral authorities, except insofar as trusting them can be justified using the other principles.

    4. Fairness. Early in development, this is mostly a localized sense of fairness in particular interactions and exchanges, but under rational reflection ends up being made more general, global, and hierarchical. (Some forms of fairness are more important than others, the big picture matters more than little things, etc.)

    5. Discipline and Order. Some people seem to be particularly strongly attracted to discipline and order, but on rational reflection generally realize that discipline and order are mostly valuable in service of some more fundamental principle.

    6. Character and courage. Like most of the principles, this generally ends up subsidiary to other principles. (E.g., courage based on a mistake, or toward a bad end, is not all that admirable.)

    7. Rule following. Some people may be more inclined than others to see morality as crucially a matter of following righteous rules, irrespective of their actual consequences. IMHO this is the trickiest issue for convergence to a shared morality, but I won’t go into detail here.

    There may be a couple of other relatively free-floating moral principles, but not many, and I think the pattern here is clear—none is more important than the principle of impartial benevolence, at least after enough rational reflection in light of the relevant facts. People may have several different, free-floating moral principles evolved into them, but most of them end up seeming rather pointless without being justified by something like promoting the greater good, unless you make a moral mistake like believing in a big Sky Daddy who likes obedience above all.

    In this process of convergence, you may lose something significant, rather than being able to integrate it into a consistent moral scheme.

    So, for example, some people may just instinctively care a whole lot about obedience to authority, and be disappointed if they can’t find a suitable authority to be obedient to. Their “heads” might always be in conflict with their “hearts” no matter how much they think it through correctly.

    I don’t think that’s what happens to most people. Most people either never realize that they’re wrong about moral authorities, or can give them up without too much pain, and focus on what’s left—e.g., principle #1 as the driving force of morality.

    There are some interesting questions here about personality types and clusters of principles that tend to go together.

    For example, it seems to me that a lot of “conservative morality” overstresses obedience, discipline, character, and certain forms of “fairness” at the expense of impartial benevolence. It’s my opinion that that’s largely because those people aren’t in reflective equilibrium—they are typically factually mistaken (e.g., about the existence of a morally authoritative God and what He wants) or insufficiently rational and reflective. (E.g., failing to realize that those things are only especially valuable as means to more basically good ends, as I think they eventually would in light of the right thought experiments.)

  14. @Paul. (I’m starting a new top-level comment, as the old discussion was getting too deeply nested.)

    We agree that moral values (and laws, etc) exist only in people’s heads. And you’re right that people engage in moral reasoning. Their moral judgements are influenced by their pre-existing moral values. But it doesn’t follow that their moral values can be true.

    Let’s say a particular person believes that murder is wrong, and this leads him to believe that X is wrong, where X is a specific murder. It doesn’t follow that X _is_ wrong. And that remains true even if most people share his belief that murder is wrong.

    How does this compare with, say, the rules of chess? Since the rules of chess also exist only in people’s heads, how can I say it’s a true fact that P-K4 is a legal chess opening? The difference is that the word “chess” is inherently associated with a certain set of rules. A game played by some other rules would not be “chess” as we normally understand that word. On the other hand, the word “wrong” is not inherently associated with any particular moral code. Nor do people generally take “wrong” to mean “wrong by the moral code of most people”. So it doesn’t follow that X is wrong just because murder is against the moral code of most people.
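
    (To make the chess point vivid: legality is mechanically checkable once the rules are fixed. A minimal sketch, assuming the third-party python-chess library is installed:)

        # Assumes the python-chess package (pip install chess).
        import chess

        board = chess.Board()               # the standard starting position
        p_k4 = chess.Move.from_uci("e2e4")  # P-K4 in older descriptive notation

        print(p_k4 in board.legal_moves)    # True: a fact relative to the rules
        print(chess.Move.from_uci("e2e5") in board.legal_moves)  # False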

    The case of the law, which you mentioned, is similar to that of chess. Even if a specific set of laws is not mentioned explicitly, it will normally be implied by the context, e.g. the laws of whatever jurisdiction the conversants are located in.

    Your money example, “$200 is more money than $100”, also follows from the accepted meanings of those words. If, for example, you chose mathematical axioms which made “100” more than “200”, you would not be using those numerals in their normal sense.

    1. Their moral judgements are influenced by their pre-existing moral values. But it doesn’t follow that their moral values can be true.

      Could their moral values be true (or truer) if morality relates to human wellbeing?

        1. If morality is about promoting human wellbeing, and if certain moral values are objectively more conducive to that wellbeing, then would it be right to say that certain moral values are truer than others?

          So I guess I’d say their trueness would lie in the degree to which they accomplish their goal.

          1. Why are you attempting to redefine “true”? “True” means “an accurate description of reality,” not “accomplishes its goals well.”

            Under the former, correct definition, “I am true” makes no sense. Under your definition, it does, since I accomplish my goals well. Also under the second definition, chemotherapy is truer than homeopathy, since chemotherapy better accomplishes the goal of getting rid of cancer.

            But chemotherapy is not “true.” Chemotherapy describes a technique that works, and to say that it works is to say something true, but a noun itself cannot be true. The same goes for a value, such as “valuing helping others.” It cannot be a true or false description of reality because it doesn’t describe reality at all – it is just a noun phrase.

          2. Now that is a statement that can be true or false.

            If I were to evaluate whether it is true or false, I would say that “better” is hopelessly vague. You have to say “better at something.” If you said “certain moral values are better at increasing well-being than others,” that would be a better place to start. Of course, first we would have to define and measure well-being before we could see if one set of values increased it or not.

            “Valuing helping others” increases well-being, but at a cost. Helping others takes time and energy, which seems to mean that your well-being must decrease. And what if you’re helping someone accomplish an “evil” (according to our moral intuitions) act? Is that an increase in well-being?

            But let’s assume we’ve successfully worked all of these kinks out (and that is a BIG assumption, I hope you realize.) I’m guessing you’re tempted to say this:

            “If certain moral values are better at increasing well-being than others, then we should live by those values.”

            Well, that’s a statement. It’s not a conclusion, unless you provide some argument for it. You may want it to be true, but can you prove that it is?

          3. “If certain moral values are better at increasing well-being than others, then we should live by those values.”

            It would be a logical conclusion if one was interested in increasing wellbeing, no?

            So are we all interested in increasing wellbeing? I think the answer is obviously yes.

          4. “Yeah? You think the dudes responsible for this share your values about well-being?”

            Are you arbitrarily excluding or have you not considered the possibility that they are simply mistaken about how to achieve greater wellbeing? What you have just done is identical to pointing to someone trying to cure their cancer by homeopathy and saying “see, not everyone has the goal of increased health”.

            Why is it difficult for you to imagine that some people could be profoundly ignorant and mistaken about how best to promote flourishing?

            And Tim, you haven’t pointed out a single difficulty in “my” position to me. I will consider what you said in the comment you linked to. The difficulties you imagine you have pointed out would seem to include:

            1. the goal of increasing wellbeing is unjustified.
            But what is unjustified is your demand that such a goal be justified. I have pointed out to you countless times that you don’t do that for medicine or any other science. It is a completely absurd demand. You have evaded this point over and over.

            2. all the “kinks” have to be worked out before we could begin any such conversation (e.g. does well-being still increase if we help someone commit an evil act?)
            Another absurd demand. Feel free to tell me how that makes sense.

            3. well-being must be precisely defined and methods of measurement must be established before the conversation can begin
            It’s funny, but that looks like the double standard again. You would never make such a demand in the discipline of physical health. We know enough about what we mean to embark on the project, and as Harris pointed out, the definition of health (and measures of it, for that matter) is truly open-ended. Such is likely to be the case for a normative ethics.

            4. the position is one of “dictating to people what they can and cannot do”
            No. What is being proposed is making scientifically true statements about what others should and shouldn’t do with respect to the goal of increasing wellbeing. The analogy with medicine seems almost perfect to me. In medicine, scientifically true statements are made about what people should and should not do with respect to the goal of increasing physical health. In order for this to happen, all that needs to be true are the following:

            1. morality is about promoting well-being

            2. objective differences in the amount of well-being enjoyed by different people must exist

            3. these differences arise out of the workings of the physical universe and are thus discoverable, in principle, by science

            4. humanity generally shares the goal of increasing well-being

            All I’m saying is that we have a foundation on which to begin a conversation. It may turn out that there are unforeseen difficulties or foreseen but unappreciated difficulties and it won’t go very far. But can we not even begin?

          5. Nick B.: “1. morality is about promoting well-being”

            You are using a vague statement to conflate several possible assertions:
            A. People’s moral values are mainly concerned with promoting well-being.
            B. People’s moral values are solely concerned with promoting well-being.
            C. An action is moral to the extent that it promotes well-being.

            A is probably true. B is probably not true, but let’s grant it for the sake of argument. You need C to get you to your conclusion. But C presupposes that the morality of an action is a real property, which is the point in question.

            In short, you are failing to distinguish between (1) the moral values people hold and (2) what is actually moral (if anything). You are conflating them under the vague word “morality”.

          6. @Richard: I think I’ve realized what the fundamental error (or dishonesty) is in Harris’ argument – it’s a similar use of vague definitions to conflate two ideas.

            In normal language, labeling something as “moral” means that it is objectively right and good. The human race in general is under the illusion that objective morality exists, and while at least some people in science and philosophy admit otherwise, generally to call something “moral” is to say that it conforms to some Moral Code that is out there in the universe (spooky morality, as you call it).

            Harris begins his discussion of “morality” by giving a new definition, and saying that “moral” just means “increasing/preserving well-being.” Whereas the original meaning of the word is connected to ideas of right and wrong, punishment and praise, the new meaning of the word is not. Calling an act “moral” used to mean it was “right” and “you should do it.” Now it just means “it increases well-being.”

            But then Harris does something that is either an act of dishonesty or confused logic. He goes back to using the old definition of morality, in which “moral” means that something is right and good and should be done. In this TED talk he talks about how everyone has a differing opinion on what is moral- which is only true if you take the old definition. No one is stoning their daughters to death and saying they’re increasing well-being, but they are saying it’s the right thing to do. Here, Harris writes that “Secular liberals, on the other hand, tend to imagine that no objective answers to moral questions exist”, which is again using the old definition. Secularists do not say that there are no differences in well-being between peoples; they say that there is no such thing as objective right and wrong.

            Harris cheats. He gets his foot in the door by talking about well-being (still using the word “morality” to describe this), and then uses arguments based on another definition of morality to support his point.

    2. A is probably true. B is probably not true, but let’s grant it for the sake of argument. You need C to get you to your conclusion. But C presupposes that the morality of an action is a real property, which is the point in question.

      OK, for the sake of argument, B is granted to be true. This may not work but this is how I’m thinking about this. Actions are moral to the extent that they promote well-being in the same way as moves in the game of chess are good to the extent that they advance the goal of winning the game.

      1. @Nick B.

        “Actions are moral to the extent that they promote well-being in the same way as moves in the game of chess are good to the extent that they advance the goal of winning the game.”

        You need to be clear whether you’re taking this as (1) a definition of the meaning of “moral”, or (2) a substantive assertion about what sort of things happen to be moral (which assumes that the meaning of “moral” is already given). (In his online articles, I think Harris equivocated between the two alternatives. He didn’t seem to see the distinction.)

        In the chess analogy, what you’ve given is a definition of “good (in chess)”. So by implication you’re giving a definition of “moral”. But that’s not what the word normally means. You are making just the error that Tim and I have been ascribing to Harris: stipulating a new meaning for the word, not noticing that it’s quite different from the normal meaning, and so conflating the two meanings.

        This is analogous to insisting that the sun is hot, and defending the claim by defining “sun” to mean moon. You can make any sentence true by redefining a word to suit your purposes, but then you’ve changed the subject. Of course in your (and Harris’s) case it’s not so clear this is what you’ve done, because the normal meaning of “moral” is not so clear. That’s why quite a lot of philosophers make this error. At least you’re in good company!

        Since you and Harris are the ones insisting you can show there are moral facts, the onus is on you to show that your definition corresponds (at least roughly) to the normal meaning of the word. I will say only briefly why it isn’t. Moral claims are prescriptive (normative): they commend or condemn actions. But your meaning is not prescriptive. Your so-called “moral” claims only give people information (about which actions promote well-being). Imagine someone telling you, “that’s wrong!”. Do you feel that he is giving you information about whether that action promotes well-being? Or do you feel he is urging you not to do it and/or condemning you for having done it?

        1. Oops. Of course my sun/moon analogy should have said “cold”, not “hot”. It reads like nonsense as it is!

        2. You are using a vague statement to conflate several possible assertions:
          A. People’s moral values are mainly concerned with promoting well-being.
          B. People’s moral values are solely concerned with promoting well-being.
          C. An action is moral to the extent that it promotes well-being.

          I could have answered that better. When I said “morality is about promoting well-being” I meant that ‘people’s moral values are mainly, if not solely, concerned with preserving/promoting well-being’. I would say, ‘therefore, an action is moral to the extent that it preserves/increases well-being’.

          But C presupposes that the morality of an action is a real property, which is the point in question.

          The morality of an action is no more or less a real property than is the soundness of medical treatment.

    3. Richard,

      I wrote a long response a few days ago, but have been hesitant to post it. (Printed, it’s 6 single-spaced pages.)

      I’ll try to look at it today, and see if I can edit it down.

      At any rate, I wanted you to know I haven’t just been ignoring you.

        1. Tim,

          Given that you liked Richard’s comment, above, I tried to more or less answer both of you in my too-long response—at least to clarify a few basic things that would make it easier to address specifics.

          I haven’t been ignoring you, either. 🙂

          If I can’t figure out how to condense it today, I may post a chunk or two and see how that goes.

      No, you haven’t realized it.

      Harris is simply pointing out that “ideas of right and wrong, punishment and praise”, etc. have the functional role of “increasing/preserving well-being”. On Harris’ view calling something “moral” still means that it is “right” and “you should do it”.

      Any thoughts Richard?

      1. Then Harris believes in objective morality and is wrong. Plain and simple. There are no Moral Laws in the universe.

        1. How is what you just said relevant to the point that I corrected you on? There is no bait and switch. You have no point.

          No one is stoning their daughters to death and saying they’re increasing well-being…

          A few points:

          1. Just because someone doesn’t consciously think to themselves that such an act is increasing well-being doesn’t mean that the emotional reaction they’re having and the actual act are not ‘purposed toward’ that end. Whether a person is aware of the function of their emotions is irrelevant.

          2. You’re in no position to make that claim. It seems highly possible that many members of the community would give a rationale for the stoning of adulterers in terms of it being for the greater good of the community. Adultery may well be viewed as an act that is so destructive to community that it cannot be tolerated and awful brutality must be used to deter would-bes.

          3. You seem to be doubtful that moral values have anything to do with promoting the well-being of individuals. All I can say is that it is an extraordinary conclusion. How you manage to look at, say, the quintessential moral precept, ‘treat others as you would like to be treated’, and come away thinking that, I’ll never know.

          1. How is what you just said relevant to the point that I corrected you on? There is no bait and switch. You have no point.

            There is no “right” as most people understand it, and there is no objective “you should do” anything. If that’s what you or Harris believe, then you’re wrong, just as if you believed in a supernatural creator.

            In response to 1 and 2: I’ll grant you that what you describe is possible, in some cases. Here’s my bottom line: Harris needs to keep his language consistent, and it seems like he isn’t. I agree that there are objective differences among people in well-being. As long as that is all Harris is saying, we have no problem. However, if Harris is trying to get an ought from an is, and say we are scientifically justified in forcing people into behaving a certain way, I do not agree.

            In response to 3: Of course you’ll never understand my thinking. You don’t read what I write.

            You seem to be doubtful that moral values have anything to do with promoting the well-being of individuals.

            Actually, I seem to espouse exactly the view that I already explained to you, when I wrote: “I’m just pointing out the disconnect between the human moral intuition and what contributes to well-being. Yes, there is overlap between the two, but there is also much difference.”

            Fact: There are things that people find intuitively “moral” that do not promote well-being, and there are things that promote well-being that are considered wrong or neutral by the human moral intuition. The correlation is a weak one.

          2. You have alleged that Harris is pulling a ‘bait and switch’ with the word “moral”. There is a “normal” meaning of the word and there is a meaning invented by Harris. He is going back and forth between these meanings in his argument.

            You haven’t been very explicit about the “normal” meaning but what I’ve gotten from you is this: it entails the notion that “morality” or ‘moral values’ are absolute and transcendent (“conforms to some Moral Code that is out there in the universe (spooky morality)”). By which I mean true for everyone, everywhere, always, and ‘coming from without’ or somehow written into the fabric of the universe. If I am wrong here, correct me. And then there is Harris’ meaning: morality relates to human and animal well-being.

            But these are not two alternative meanings of “moral”. For they concern different aspects of “morality”. On the one hand you have a statement about, essentially, the metaphysical status of moral values, and on the other you have a statement about their function, or what they are concerned with.

            On Harris’ view, moral prescriptions would be provisional and contextual (dependent on our neurobiology, specific situations, etc.), but nonetheless objective because moral questions must have right (and wrong) answers.

            So I fail to see equivocation. What am I missing?

            “Harris is trying to…say we are scientifically justified in forcing people into behaving a certain way.”

            You have also said that he wants to “dictate what others can and can’t do”. You have made these statements without providing any evidence. Do you care to substantiate them? The truth is that all that Harris has in mind is criticism. He no more wants to force people to behave a certain way than he wants to force them to believe the facts of physics. And you accuse me of not reading.

            There are things that people find intuitively “moral” that do not promote well-being, and there are things that promote well-being that are considered wrong or neutral by the human moral intuition. The correlation is a weak one.

            I agree with the first sentence but I would note one thing. It’s not that I think most, if not all, moral intuitions promote well-being, but that, if you scratch the surface, they intend, in one way or another, to promote it. (I suspect you know I thought that, but just to be clear). I don’t think the correlation is anywhere near as weak as you think. In fact I suspect it is quite strong. But even if the overlap is not complete, I don’t quite see how that is a problem for his thesis.

          3. But these are not two alternative meanings of “moral”.

            First of all, this – “morality relates to human and animal well-being” – is not a definition, any more than “gravity relates to planetary motion” is a definition of gravity. Harris defines “moral” or “right” or “morally true” as “increasing well-being.” This is not the same as the sentence, “correct according to objective laws of the universe,” which is the normal meaning of moral (your description of my definition was good, btw). One describes that which increases well-being; the other describes that which conforms to a Moral Law. To give an example of the logical difference between them, the normal definition allows you to make prescriptions based on nothing (e.g. “You must not kill people”). Harris’ definition allows you to make only prescriptions that have an antecedent (e.g. If you want to increase someone’s well-being, you must not kill them [which is simply common sense, and has a lot less oomph than the more absolute statement above]). Therefore, the two definitions are decidedly different.

            What Harris does is use language to obscure this, using words like “relates to” and “depends on” instead of just saying what morality is. This is suspect. It is also suspect that Harris would use the word “morality” at all, when “increasing well-being” is really what he’s talking about, and that is not what most of the English-speaking world understands the word to mean. And look at the argument Harris has with a woman about whether it is “wrong” for women to be forced to wear burqas. Is there any doubt that the only reason they disagree is because the woman is still using the normal definition of the word “moral”? Of course beating women if they refuse to wear a burqa decreases their well-being! Clearly the woman in the conversation doesn’t realize that Harris’ point is as simple as that: she’s stuck on the usual definition of “moral,” and who can blame her? Harris shouldn’t be using the word.

          4. As for the notion of dictating to others what they should do (and not just “what they should do if they want to increase well-being,” but “what they should do”), I must admit that I’ve gone over his statements again and I haven’t found anything of that nature. I sense it’s coming, because why else is Harris using such ambiguous language, as well as a word (“moral”) that really doesn’t belong in this discussion? But I guess it hasn’t happened yet, and so I concede that point. I will keep a sharp lookout for such prescriptions as I read the book.

            The correlation is a weak one.

            I wrote this in response to your false claim about my position. I never said how it related to Harris’.

        2. You should check out the 3rd Beyond Belief meeting if you haven’t already. The morality section in particular. It’s great. thesciencenetwork.org

  15. We agree that moral values (and laws, etc) exist only in people’s heads. And you’re right that people engage in moral reasoning. Their moral judgements are influenced by their pre-existing moral values. But it doesn’t follow that their moral values can be true.

    I don’t think it’s that simple. Maybe I should have never said “in people’s heads,” because I think there’s something bigger and more interesting going on.

    Part of the point of my money-and-banking example is that certain sets of rules are not arbitrary, e.g., the basic rules of monetary exchange, or rather the underlying principles of those rules.

    The basic principles of money are not merely a human construction, IMHO, but rather a discovery of a few principles that could be made by many only roughly similar species elsewhere in the universe—pretty much any intelligent social species whose individuals are largely but not exclusively self-interested. As with the rules of arithmetic, we shouldn’t be surprised if other species come up with those rules to solve the problem of flexible exchange—if they face the problem, we should be surprised if they don’t.

    You mentioned chess, which is an extremely specific, highly historically contingent game. The concept of morality I’m trying to defend is not like that—it’s much, much more basic. Rather than comparing it specifically to chess, it’s more appropriate to compare it to a more general and basic concept, like two-party competitive game. There are basic principles of two-person competitive games that, like money, are not so much invented as discovered—any social species that’s playful is very likely to discover them; they’re extremely unlikely to “discover” the specific and peculiar rules for knight moves and castling in chess.

    In chaos and complexity theory terms, chess is not an attractor, or at best a very tiny one. It’s a specific phenomenon that’s contingent on a lot of fiddly little historical stuff about particular situations in particular places and times. In contrast, two-party competitive game is a much more general kind of phenomenon that can be expected to arise in a variety of circumstances around the universe, for much more basic reasons. It’s a fairly large attractor—an abstract stable “state” that can be reached via a variety of contingent paths, starting from a fairly broad and random range of initial conditions.
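    To make the attractor idea concrete, here’s a minimal sketch (my own illustration in Python; the logistic map and the parameter r = 2.5 are just convenient textbook choices, not anything specific to this discussion). A broad range of random starting points all settle into the same stable state, x* = 1 − 1/r = 0.6: many contingent paths, one attractor.

    ```python
    # Minimal attractor demo: the logistic map x -> r*x*(1-x) with r = 2.5
    # has a single stable fixed point at x* = 1 - 1/r = 0.6. Random initial
    # conditions play the role of "contingent paths"; all end up in one place.
    import random

    def logistic_step(x, r=2.5):
        return r * x * (1 - x)

    random.seed(1)
    for _ in range(5):
        x = start = random.uniform(0.05, 0.95)  # a random initial condition
        for _ in range(200):                    # iterate until it settles
            x = logistic_step(x)
        print(f"start {start:.3f} -> settles at {x:.6f}")
    # Every line prints ~0.600000: a broad basin draining to one attractor.
    ```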

    And in fact, phenomena rather like two-party competitive games do arise independently, in many places, on Earth alone, and even among species we wouldn’t normally call “intelligent”—game theory is applicable to evolutionary biology, etc.
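    To give one standard worked example (again my own sketch; V, C, and the payoff functions are the usual textbook hawk-dove values, not something from Harris or this thread): in the hawk-dove game with resource value V and fighting cost C > V, replicator dynamics push the population toward a hawk fraction of V/C from almost any starting mix. That stable mix is itself a sizable attractor in the sense above.

    ```python
    # Hawk-dove game: V = value of the contested resource, C = cost of an
    # escalated fight. Expected payoffs depend on the current hawk fraction p.
    V, C = 2.0, 4.0

    def hawk_payoff(p):
        # Hawk meets hawk: split V but risk C; hawk meets dove: take all of V.
        return p * (V - C) / 2 + (1 - p) * V

    def dove_payoff(p):
        # Dove meets hawk: get nothing; dove meets dove: share V.
        return (1 - p) * V / 2

    for p in (0.05, 0.50, 0.95):       # three very different initial mixes
        start = p
        for _ in range(10000):         # replicator dynamics, small time step
            p += 0.01 * p * (1 - p) * (hawk_payoff(p) - dove_payoff(p))
        print(f"initial hawk fraction {start:.2f} -> equilibrium {p:.3f}")
    # All three runs converge to V/C = 0.5, the evolutionarily stable mix.
    ```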

    One basic idea I’m pushing is that such attractors are legitimate scientific kinds with a more interesting ontological status than comparatively arbitrary or contingent sets of rules like chess.

    Many things that we speak of in science are attractors like that—e.g., vortices, refraction, replication, life, computation, vision, parasitism, intelligence, arithmetic, etc. They may emerge naturally in a fairly wide variety of concrete circumstances, so long as certain fairly general and abstract regularities are satisfied.

    If the necessary underlying regularities do not exist, or for some random reason certain regularities do not emerge, then the phenomenon doesn’t happen. So, for example, if parts of waves don’t move faster than other parts of waves, you don’t get refraction.

    That sort of thing is generally not simply a matter of opinion. There’s a recognizable “natural kind” of phenomenon with distinctive features, and the absence of certain distinctive features means it’s objectively not that kind of phenomenon.

    Consider two-party competitive games. To even have such a game, you have to have two parties that interact. They also have to do something more or less like taking turns, or having comparable rate limits on their activities. It doesn’t have to be symmetric (e.g., in an asymmetric game one party may be able to make five moves in a row and the other only one at a time), but if one participant can just keep acting indefinitely while the other is blocked from acting, there is no competition, just a strong party beating up a weak one. What you have is a beat-down, not a game at all, and game theory applies only to rule it out as a valid competitive game, because the minimal criteria have not been met.

    Different competitive games may have essentially no specific rules in common, and no specific events that count as the same kind of thing (e.g., a turn). Superficially, they vary wildly in seemingly every way.

    Still, there is a deep constraint on those rules. There has to be at least a weak form of fairness, whether it’s implemented by a specific rule about turn-taking, by some kind of action dependency that creates rate limitations, or by something else.

    Without knowing the specific rules of the game, you know that if one party simply keeps taking turns indefinitely without the other party ever being able to do anything, cheating is going on. Whatever the minimal fairness-enforcing rules are, they’re not all being followed.

    (Oddly, just after I wrote most of the above, I took my feisty little dog to the dog park and watched him play-fighting with a much bigger dog friend of his. The bigger dog tussled with him for about an hour, but always gave him a handicap of some sort—e.g., she’d fight lying down, or standing up but keeping her back feet planted and only using the front part of her body, or using her mouth but not her paws, or vice versa. A lot of dogs behave as if they understand the principles of two-party competitive games, e.g., that if it’s too uneven a match, it’s not much of a game. Some dogs keep it fairly even, while some insist on winning most of the time and others are willing to let that happen, as long as they win occasionally. If one dog insists on winning all the time, though, the other dog will eventually opt out of the game one way or another, e.g., by submissively forfeiting, running away, or getting pissed off and more seriously fighting back. They seem to think that just using superior size and strength to consistently win is cheating, or at least notice that it stops being fun.)

    Some of the underlying issues here are

    1) what it means for something to be objectively real,

    2) what it means for something to be prescriptive or normative, and

    3) what it means for it to be objectively normative.

    Briefly, I think a lot of things are objectively real, but in a different way than most people naively guess. (I don’t think that difference is fatal to the idea of objective morality in a clear and useful sense.)

    So, for example, predator/prey dynamics are real in any sense we normally demand in science. They are not written into the fabric of reality in any spooky way, but there are principles that are fairly strictly enforced by emergent regularities.

    Predator/prey relationships are not just a matter of opinion, or specific to any particular individual or even any phylum. Predation is a fairly big attractor, because it’s a fairly easily reachable and stable state that can be found by numerous evolutionary paths, all over the “tree of life,” and all through the history of life.

    Given that predation works in certain ways and not others, and works better in some ways than others, we can justifiably say that it’s an objective fact whether something is a predator.

    More interestingly, we can also say that it’s an objective fact whether something is a good predator.

    For example, something is a good predator if it’s good at killing things and eating them, without being harmed in the process.

    Something that’s unwilling or unable to kill and eat enough things is a bad predator, objectively speaking.

    There are better and worse predators, and such (nonmoral) “normative” talk isn’t the least bit spooky—we don’t have to appeal to any magical beings or spooky weirdness, just regular old science. The laws of predator/prey dynamics are not written into the structure of the universe in any obvious sense, but they do emerge in scientifically comprehensible ways from the normal workings of reality.

    I think one of the things Harris is (or should be) arguing is that we can talk about morality in a similar way. Morality is a real, non-spooky emergent phenomenon that is scientifically explicable, and which implies a set of norms. A good moral agent will do certain things and not others, and a bad one will do the others.

    This “normative” talk of good and bad moral agents isn’t necessarily any more subjective or spooky than talking about good and bad predators.

    To make the argument work, though, you need an argument that the scientific concepts of good and bad moral agency map onto the scientifically naive, folk psychological concepts of good and bad agency, and do so in a way that most people would care about.

    The fact that most people have some incorrect preconceptions about spooky immaterial moral weirdness isn’t necessarily fatal, either.

    Whether we consider two different concepts to refer to the same thing is a very subtle issue in general, but let me give an example where we can make such an identification between partly-wrong naive concepts and a scientific account.

    Many people have spooky ideas about what life is. Many people—most, I suspect—are vitalists. They do not realize that there is no life force or vital essence whatsoever, and being alive is entirely a matter of being a certain kind of machinery functioning in certain ways.

    Suppose that people who believed in a spooky life essence said that if things don’t have that essence, they’re not alive.

    That would be weird. The word “life,” like most scientific terms, isn’t something with a classical definition in terms of necessary and sufficient conditions, such that if something doesn’t fit that preconceived definition, it doesn’t count.

    The word “life,” like most scientific terms, refers to an actual observed phenomenon in the world, whatever it actually turns out to be. We have prototypical examples of life, and when we come to understand what they actually have in common, we realize that’s just what life is.

    I take Harris to be making a similar claim about morality. Like predation or life, it’s a natural phenomenon that we have examples of, and when we come to understand how it really works, preconceptions of spooky magic just turn out to be mistaken.

    The argument about morality is more difficult, however, because Harris does want to preserve the intuition that (real, scientifically explicable) morality is something that we will mostly still care about in a certain way. (I.e., that our mistaken preconceptions aren’t what we’re mostly attached to when we have moral feelings, and that our native concepts of morality are flexible enough to adapt to the scientific reality.)

    I think he’s largely right, but that’s a difficult argument and this is way too long already, so I’ll stop for now.

  16. Oh, what the heck, here’s another chunk, where the rubber starts to meet the road.

    Richard:

    Let’s say a particular person believes that murder is wrong, and this leads him to believe that X is wrong, where X is a specific murder. It doesn’t follow that X _is_ wrong. And that remains true even if most people share his belief that murder is wrong.

    Right. I agree, and I don’t think that undermines the points I’m actually making. I’m not claiming that we will converge to complete agreement on the moral values of specific acts in most cases, such as how justified a particular killing is. People may often disagree about whether the justification is “sufficient” in particular cases.

    But in the case of murder, you can bet that there’s a constraint somewhere in any moral system, such that there is a difference between an unjustifiable killing (murder) and a justified one. (E.g., death by misfortune, justifiable homicide, or legal execution).

    There’s a deep constraint common to all moral systems (or so I claim) such that there are some standards of justification for harming others. (At least others within a certain universe of moral consideration.) If you don’t have that, you don’t have a moral system, just as you can’t have a competitive game without at least a minimal fairness constraint.

    There’s a minimal fairness constraint in morality, too, I think, such that major harm to others is, by default, wrong. It may be justified in particular cases, but there has to be some justification. You have to argue that the victim was a serious threat of some sort, or that they somehow aren’t within the universe of moral consideration, or that it’s acceptable collateral damage and the end justifies the means in terms of some greater good.

    It’s no accident that you can’t, in any culture, justify wanton killing—you can’t just say “I didn’t like him,” or “I thought it would be fun to watch her die.” You have to rationalize it. (I’m not saying that the rationalizations are always spurious.)

    Sometimes those rationalizations are mostly implicit, indirect, and contorted. E.g., you might have a very stratified society in which the people at the top are allowed to fairly wantonly kill a few of the people near the bottom now and then. Such setups inevitably have their own rationalizations, however—e.g., that the people at the top are better people and the people at the bottom somehow deserve their lot (e.g., for sins in a past life), and that killing the people at the bottom isn’t really all that bad for them because it just sends them to the next place they were going anyway, etc. Or it may be justified in terms of a need for strong leadership and collateral damage—society won’t work without a strong leader, and we have to accept that strong leaders kill weak people now and then, and on the whole we need to be ruled with an iron hand.

    Moral systems do vary a lot in significant ways from culture to culture, but the universal patterns of rationalization tell you that there are deep and important similarities underlying those differences.

    There are only a handful of basic principles of moral justification, shared across cultures. The differences between cultures’ specific moral codes are mostly rationalized by using the very same few principles in combination with a variety of spurious fact claims. (E.g., about God’s wisdom or Karma, or who did what to whom, or what is necessary to avoid social chaos and ensuing widespread harms.)

    Just look at how in-groups vilify out-groups. Their justifications for Othering people are invariably riddled with spurious fact claims—e.g., an international conspiracy of Jews, atheists being atheists because they choose to rebel against God, black people having the Curse of Ham or at least some profound genetic inferiority, God endorsing patriarchy or promising somebody someone else’s land, some group’s progenitor wronging another group’s progenitor, liberals hating America, the Devil tempting people into Sin, demon possession, homosexuals perversely choosing to be perverts, etc. The Other is malicious and/or stupid and/or unable to control her baser impulses, or in thrall to the wrong god, or a bunch of cheese-eating surrender monkeys, or just so disgustingly ugly and stinky that we don’t feel that they’re like us and owed moral consideration.

    If moral schemes were not basically quite similar, and could differ arbitrarily without rationalization in terms of alleged facts, none of that would be necessary, and we wouldn’t see this consistent pattern of made-up shit rationalizing misapplication of universal moral principles. If there wasn’t a universal rule that wantonly harming other humans (at least) is wrong, there wouldn’t be a universal pattern of vilifying and dehumanizing others to rationalize harming them.

    How does this compare with, say, the rules of chess? […] A game played by some other rules would not be “chess” as we normally understand that word. On the other hand, the word “wrong” is not inherently associated with any particular moral code. Nor do people generally take “wrong” to mean “wrong by the moral code of most people”. So it doesn’t follow that X is wrong just because murder is against the moral code of most people.

    Right. Murder—unjustifiable homicide—is wrong because it follows pretty straightforwardly from a basic feature of any moral system: some constraint on causing harm to others without good reason. (It’s hard not to count homicide as a serious harm, all other things being equal, because being alive is instrumental in benefiting from whatever else you might consider good.)

    What counts as murder—which homicides are unjustifiable—differs between moral codes, but mainly due to explicit or implicit appeal to alleged facts.

    Another interesting feature of moral systems is that there is a recognition that people may make moral mistakes—that they may think something is wrong when in fact it’s not, because they believe something that is untrue. (E.g., that the Koran is reliably divinely inspired, and that the God in question is a moral authority.) Everybody knows that you can be confused about morality, and that it’s important to get it right—to reason correctly from true facts. Everybody at least understands that other people’s moral errors may rest on mistakes of fact, and that rationality is pretty relevant—that’s why people typically argue over the alleged facts when they argue about morality.

    I don’t think that commonsense view is wrong. People do make moral mistakes in very much that way, and arguing about the relevant facts is important.

    This is quite unlike chess in that the specific rules of a particular game don’t need any justification except in the sense that the game turns out to be fun to play. And if you like playing a different game (say, checkers or Halo 3 or touch football), that’s okay; you’re not obliged to play chess, because it’s just one game among many, and none is terribly important or obligatory. It’s okay for the rules of chess to be contingent and arbitrary in many respects.

    Morality is different because it regulates all social interactions, and you just can’t opt out. (Well, you can, if you’re more or less sociopathic, but others will likely interfere in a way they wouldn’t if you opted out of playing chess and chose to watch TV instead.) It can’t be optional because if people choose different systems arbitrarily, they affect each other by doing things like fighting over property, killing each other, etc.

    The choice of moral schemes is also highly constrained, which is why people have to work so hard to rationalize their major differences and vilify the people who differ.

    The case of the law, which you mentioned, is similar to that of chess. Even if a specific set of laws is not mentioned explicitly, it will normally be implied by the context, e.g. the laws of whatever jurisdiction the conversants are located in.

    Hmmm. I’d say that basic morality has jurisdiction everywhere, if people are talking about morality at all. If I say, for example, that “female genital mutilation is wrong,” I’m not just saying I don’t like it, or that where I’m located, it is generally agreed to be wrong, or that the large majority of all people agrees with me. I’m saying something much stronger and less arbitrary than that.

    I’m saying that by basic constitutive principles of morality, and given the facts of the actual world, it’s generally wrong in the situations where people actually do it.

    I’m implying several things, roughly like this:

    1) There are basic principles of morality that we’d generally agree on, given sufficient rational reflection in light of actual facts, and that they are not simply arbitrary—it’s not just a matter of consensus, but of why that consensus is possible. (This is a presupposition of normal moral talk. I don’t think it’s wrong, although it is weird.)

    2) Like wanton killing, wanton female genital mutilation is the kind of thing we would agree was wrong, on rational reflection, given those not-merely-consensus principles of morality and basic facts about females, genitals, mutilation, etc. (We’d agree that FGM is bad, all other things being equal.)

    3) In the cases where FGM typically occurs in the real world, there are no actual facts that rationally justify overriding that default judgment—people’s judgments of FGM being a good or even okay thing are objectively mistaken in one way or another.

    (For example, people who have their daughters’ clitorises removed are not generally people who think it’s fine or fun to wantonly harm their daughters in a way that reduces their potential to be happy—they’re not psychopaths. They think it’s justified harm, which is actually better for the daughter and/or her family and/or society in the long run. Their thinking so is generally based on factual errors, e.g., belief in a certain kind of God who is a moral authority, and believing that the God prescribes certain things about sex, women, and society which somehow end up making FGM a good idea.)

    The key idea here is that when normal people say that “X is wrong,” they’re not merely expressing an emotion, or appealing to a merely arbitrary principle of moral judgment, or appealing to a current consensus.

    They’re saying that anybody who disagrees is making a moral mistake of some sort—they’re mistaken about the constitutive principles of morality, or mistaken about some relevant facts, or have made an erroneous inference in reconciling more specific moral principles with facts and basic moral principles, or have simply made an error in applying the more specific moral principles to an actual case.

    That’s similar to calling someone a criminal, in that it has fairly clear propositional content, whether or not it’s used to imply an attitude as well. A psychopath could rightly acknowledge that cutting off women’s genitalia is wrong, but still giggle and say it’s a lot of fun, for him, and it doesn’t make him feel the least bit bad that it’s wrong.

    I’d say that FGM is objectively morally wrong in the sense that it goes against a constitutive principle of morality without a sufficient justification that’s not based on moral mistakes.

    1. Hi Paul,

      Instead of showing how moral claims can be true (let alone actually demonstrating the truth of any moral claims) you keep giving examples of true statements (like “$200 is more than $100”) and asserting that if those statements can be true so can moral claims. That doesn’t follow. I have already taken a couple of your examples and shown how they’re significantly different from moral claims. I don’t see how your latest posts have added anything new, except for giving yet another example:

      “For example, something is a good predator if it’s good at killing things and eating them, without being harmed in the process.”

      We can say that something is a good X, if X implies some goal against which we can measure. A predator has the goal of eating and surviving, so we can say (very roughly) that something is a good predator if it’s good at eating and surviving. This is similar to my chess example, because chess has a goal (winning the game) and we can say an opening is good if it’s conducive to that goal.

      The whole point of moral claims is that they are supposed to be true _regardless_ of an agent’s goals. Murder is supposed to be wrong even if it’s conducive to one’s goals. So there doesn’t seem to be any way to apply the same criterion to morality.

      I don’t really want to spend any more time on this, so I’ll make this my last post.

      1. Richard,

        It seems to me that you’re missing my point.

        You seem to want something unreasonable from moral statements being true.

        The whole point of moral claims is that they are supposed to be true _regardless_ of an agent’s goals.

        Sure. But that is not unlike the case of being a good predator.

        If a predator decides for some reason to become a vegetarian, it’s a bad predator.

        Whether its goal is to be a predator or not, it’s a bad predator. If it manages to pursue other goals it prefers, it’s still a bad predator.

        Murder is supposed to be wrong even if it’s conducive to one’s goals.

        Right. So if somebody chooses to be a murderer, and eat their victims, they’re a bad moral agent. They may be a good predator, and a fine young cannibal, but still a (morally) bad person, i.e., a bad moral agent.

        So there doesn’t seem to be any way to apply the same criterion to morality.

        You seem to want norms about morality to work very differently from other kinds of norms.

        Consider the norms of say, rationality. They’re not just arbitrary—if you want to be rational, there are inferences you can draw and inferences you can’t.

        If you don’t care about rationality, and proceed to draw invalid inferences and fail to draw valid ones, it doesn’t matter whether that’s consistent with your goals—you’re objectively irrational.

        Or take Nick B.’s example of medicine. Good doctoring requires a positive intent toward the patient’s health. If you go around killing perfectly healthy patients, you’re a bad doctor.

        You may have your reasons for doing so—e.g., killing a dictator you know will commit genocide if you don’t, or just killing off a competitor for personal gain.

        Doing bad medicine may be consistent with your goals, but it’s still bad medicine. It violates norms of medicine that are not arbitrary—they’re part of what it means to do medicine and do it well.

        Notice that in general, norms are not spooky or magical, and there are no spooky principles at work.

        Notice also that violating a norm is only that—violating a norm, which you may or may not care about. That doesn’t mean that it’s not true that you’re violating a norm, or that norms are subjective or arbitrary.

        If you don’t want to do good medicine, you can do bad medicine, until maybe somebody who cares manages to stop you.

        If you don’t want to be rational, you can be irrational, until maybe somebody locks you away.

        Similarly, if you don’t want to be moral, you can be immoral, until maybe somebody who cares about morality stops you.

        In all these cases, and with the “predator” case, there’s a non-arbitrary logic to the norms of the particular domain. You don’t have to care about them, but they’re not just made up and “subjective.”

        They reflect constitutive principles of the domain in question, so it’s not just a matter of opinion whether somebody is being irrational, or doing bad medicine, or (I claim) being immoral.

        It seems to me that there’s no reason to say the issues aren’t objective, in clear cases. There are clearly invalid inferences, clearly bad medicine, and (I claim) clearly immoral acts, irrespective of your actual goals.

        My question is this: why do you expect morality to be different in that regard?

        That doesn’t just go against the scientific account of morality, but against common sense understandings of morality.

        Most people know that something being wrong doesn’t automatically make it undesirable, given your goals, if you don’t care whether it’s wrong.

        Most people understand the idea of bad people who just don’t care enough about morality, e.g., sociopaths.

        Any theory that didn’t allow for people to do immoral things for nonmoral reasons would be unrealistic.

        You seem to want much more than objective truth to moral claims—you seem to want some kind of magical argument that will necessarily motivate immoral people to be moral, irrespective of their goals.

        That’s unreasonable, and seems incoherent—how on earth could you be rationally motivated to do something that doesn’t follow from your goals?

        I’m sketching a theory of how you can go from is to ought in a certain sense—by understanding what a domain of discourse is, you can recognize the norms implicit in that domain.

        I don’t pretend that by doing that, I can make you care about that domain and its norms.

        You may say that if I can’t, they’re not objective norms of morality.

        If that’s what you mean, answer me this:

        Why not say the same kind of thing about rationality? If you’re not a rational person, and just don’t care to be, I can’t argue you into being rational, can I?

        Does that mean that there are no objective norms of rationality? Is it impossible to objectively say that somebody is irrational?

        I don’t think so.

    2. Whoo, that’s a lot of text!

      You seem to be arguing that there is a set of core moral principles that define the human moral intuition. I don’t think anyone was arguing otherwise. I, for one, was not.

      But to call that intuition “objective” is quite another matter. Intuitions about what is right and wrong are entirely in our heads. As I wrote before, had the circumstances of our evolution been different, we would very likely have evolved a different moral intuition, in which the moral values we assign to various scenarios would have been at least slightly different. Therefore, it is objective to say “most humans feel it is wrong to harm others without reason.” It is not objective to say “harming others without reason is wrong,” unless your definition of “wrong” is simply “not in keeping with the prescriptions of the human moral intuition.” In which case, the two sentences mean exactly the same thing, only the latter is more misleading.

      The key idea here is that when normal people say that “X is wrong,” they’re not merely expressing an emotion…

      That they are having an emotional reaction is exactly what current studies on the human moral intuition have found! For example, http://www.princeton.edu/pr/pwb/01/1022/

      There are basic principles of morality that we’d generally agree on, given sufficient rational reflection in light of actual facts, and that they are not simply arbitrary—it’s not just a matter of consensus

      No, we agree on them because we’re the same species and have, by and large, the same moral intuition program built into our brains. Most disagreements are caused by modifications to that program that are written by our environment as we age. Remember, “rational reflection” cannot get you from an is to an ought – ever. No matter how much bad comes from the wanton murder of other humans, there is no argument that can get you from that premise to the conclusion, “you must not do it.” And yet, as a species we almost entirely agree on that conclusion because a negative emotional reaction to undeserved murder is programmed into our brains. That is the objective fact here. It’s not that murder is objectively wrong, it’s that humans generally feel that it is.

      That said…
      I’d say that FGM is objectively morally wrong in the sense that it goes against a constitutive principle of morality without a sufficient justification that’s not based on moral mistakes.

      I agree with you regarding FGM given your definition of “wrong”. Basically what you’re saying is that a normal human moral intuition, untarnished by false beliefs or cultural propaganda, would disapprove of FGM. That may be objectively true. And I think understanding that better may take us somewhere as a society, presuming we can get around the obstacle of dealing with cultural values that are “wrong.” (For example, members of a society in which a man is shamed if his daughter commits adultery, and in which shame is a really bad thing, would naturally feel justified in lethal retribution to punish the incident. I think dealing with such phenomena would be a matter of showing people that there is another way, a way that works better and can still feel right, as opposed to the current cultural practices that feel right but also result in otherwise needless death.)

      1. Tim Martin,

        I think we need to get clearer on three different questions, and on the fact that they are very different questions:

        1. Is there a coherent “natural kind” (objectively identifiable category of phenomena) of morality, such that some things are moral systems, or moral judgments, and other things simply are not?

        2. Does this allow us to decide, by the basic constitutive principles of morality, that some things are “better” in moral terms than some other things, even if it does NOT allow us to specify a particular moral system as the objectively right one, or give a total ordering of what is morally better than what in every case, or even most cases?

        3. Does doing this count as identifying “objective” moral principles, if not everybody cares about those things? (E.g., sociopaths.)

        I say yes to all three. You seem to be saying no to both 1 and 2, and maybe to 3 independently of 1 and 2.

        One of your objections seems to be based on an assumption that what counts as moral is unconstrained or not usefully constrained, by definition—it’s a purely human activity involving arbitrary preferences, and humans are allowed to prefer anything over anything else and call it their “morality.”

        I think that’s just false. I think that people are referring to something in particular when they talk about “morality,” even if there’s also considerable variation in the details, and that it’s qualitatively different from other kinds of preference schemes in a way that can be identified scientifically and objectively.

        You seem to think that for something to be “objective” or have an “objective” truth value, it can’t be dependent on regularities in specifically human-like psychology, or only interesting to humans because of peculiarities of human interests.

        You also seem to conflate the idea of something being of distinctively human interest with the idea that it’s merely a matter of human consensus. Those are very different things, and that’s what several of my examples are designed to demonstrate. I don’t think I’ve done a good enough job making the significance of the examples clear.

        I think both of those assumptions are clearly false, as can be shown with a simple example:

        Michael Jordan is a better athlete than me.

        That’s an objectively true statement.

        The idea of athleticism is quite variable between humans, and of interest only because humans are certain kinds of animals, yet some things are clearly more athletic than some other things.

        For example, some people’s favorite “athletic” sport may be baseball, and others’ may be basketball, or even pub darts or non-competitive snowshoeing. They may have rather different ideas of what makes somebody a great athlete—speed and strength, stamina, fine motor coordination, real-time strategic thinking, the ability to recognize what others are thinking and outwit them, being highly competitive and not choking under pressure, etc. Some of those distinctions may be very important to some people, but not apply to others’ prototypical sports at all.

        And yet, it is true by any standard that Michael Jordan is a better athlete than me. He’s stronger, faster, defter, and strategically quicker and smarter than me. Name a sport, and he’ll be better than me at it if he tries.

        That is not a matter of opinion or arbitrary preferences. It’s an objective fact.

        It doesn’t matter that the concept of athleticism is rather vague, and that there may be irreconcilable differences between various people’s ideal of “better athlete,” with a lot of fuzz around the edges and quite a few gray areas. The concept is clear enough that some statements about who’s just “better” (athletically) are objectively true or false.

        It doesn’t really matter that the concept of athleticism only makes sense for certain kinds of animals. For example, a sufficiently intelligent individual of a non-mobile asocial species (say, an intelligent sponge from a distant planet) could understand the concept, whether it was capable of being athletic or not, and irrespective of whether it could care about the concept in anything remotely like a human way. (It might, for example, abhor the ideas of moving around and of competing for no practical end, but still recognize that Michael Jordan is much better at those things than me. It might just be morbidly curious about organisms that would do such disgusting or scary things.)

        This is also not just a matter of consensus or majority preferences. Athleticism is a distinctive enough kind of thing that it has its own internal logic, even if we only recognize or care about the category for odd reasons having to do with our psychological makeup and cultural history.

        So, for example, if most humans were not interested in gratuitous competition or gratuitous exertion, but some were, and only a small percentage actually participated, we could still have very much the same concepts of sports and athleticism. They’d just be of minority interest, like stamp collecting or recreational computer programming. A majority that abhorred gratuitous exertion and competition could recognize what athletics is well enough to recognize that Michael Jordan is a better athlete than me.

        I claim that all those observations also apply to morality.

        For example, suppose it turned out (very surprisingly) that only 10 percent of people were authentically moral, and the other 90 percent were actually sociopaths with no moral sentiments whatsoever, who were just very good at faking it.

        Notice that that’s not an incoherent idea—it’s a pretty clear idea, because we have definite enough intuitions about what kinds of things count as morality and what kinds of things don’t, even if we can’t articulate them, and it’s clearly not a matter of majority agreement. Faking it for personal gain, like a sociopath, just doesn’t count, any more than throwing a game for a payoff under the table counts as being a good sportsman.

        Like athleticism, morality being the kind of thing it is does not depend on it being universal, popular, or even typical. It is a certain kind of naturally occurring phenomenon with its own internal logic, and some things clearly count and many others clearly don’t. And given the kind of thing it is, it has certain natural norms, whether or not a particular person values them, and even if only a minority does.

        I hope it’s clearer now why I think many people are reading way too much into the word “objective” when it’s applied particularly to morality. They want it to mean more than it does when applied to anything else, such as athleticism.

        1. I completely agree with you regarding athleticism, because if I ask you what athleticism is, or what it means to be a better athlete than someone, you can tell me. You can say “he’s stronger, faster, defter, and strategically quicker and smarter than me. Name a sport, and he’ll be better than me at it if he tries.” And I can say “yes that’s true,” and thus we agree.

          But what is morality? What does it mean for something to be moral versus not?

          Sam Harris would answer that something is moral if it increases well-being. To which I would say, “fine, that’s objective. Now why are you using the word ‘moral’ to describe ‘well-being’?”
