Sam Harris and the Moral Landscape Challenge

June 13, 2014 • 6:57 am

I’ve now read (and blurbed) Sam’s new book, Waking Up: A Guide to Spirituality Without Religion, which is a provocative synthesis of neuroscience, spirituality, and Sam’s own adventures in meditation and drug-taking. I recommend it, though I told Sam that I thought he’d get pushback from that segment of the atheist community (not including me) who cringe when they hear the word “spiritual.”  The book will be out September 9, and you can get the Kindle version for about 12 bucks.

In the meantime, Sam has published the winner of his “Moral Landscape Challenge,” and responded to the winner’s essay. Here’s the original challenge:

It has been nearly three years since The Moral Landscape was first published in English, and in that time it has been attacked by readers and nonreaders alike. Many seem to have judged from the resulting cacophony that the book’s central thesis was easily refuted. However, I have yet to encounter a substantial criticism that I feel was not adequately answered in the book itself (and in subsequent talks).

So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000, and I will publicly recant my view.

When I saw this,  I thought that nobody would ever see that twenty grand or a public recantation, for Sam’s a man who holds firmly to his views. But, given the criticism of his book, I looked forward to seeing who produced the “best response.”

My own take was that although Sam’s case for objective moral values—those values that maximize “well-being”—wasn’t completely persuasive, his form of consequentialism did align almost perfectly with what most people think of as “moral acts.” My main problem was how one could adjudicate well-being when it conflicted across different domains (e.g., personal happiness versus societal harmony), as well as the possibility that what Sam considered moral would sometimes conflict with what we instinctively feel is moral (e.g., Sam’s version would have all of us give away a lot of our money to poorer people, but neither he nor the rest of us—and that includes Peter Singer—do that). The “is-doesn’t-denote-ought” issue didn’t concern me so much, since one can simply impose on the whole scheme a Rawlsian “veil of ignorance” (that is, a group of entities who don’t yet live on Earth, but who decide what will be moral before they take up a life here—ignorant of whether they’ll be a billionaire, a factory worker, or a poor Indian farmer). To me, that makes Sam’s scheme even more “objective.”

Surprisingly, there were over 400 responses to the challenge, which tells you how seriously people take Sam’s views. The onerous task of judging them fell to the diligent and estimable Russell Blackford, who selected as the winner an essay by philosopher Ryan Born. You can find it here.

As expected, Born takes Harris to task for deriving “ought” from “is”—that is, he rejects well-being as a “scientifically derivable” and objective criterion for morality. This critique was shared by many entrants; Blackford judged Born’s version the most persuasive.

I won’t go into all the tos and fros, but here’s the heart of Born’s critique:

Neither of your analogies invalidates the Value Problem. First, your analogy between epistemic axioms and moral axioms fails. The former merely motivate scientific inquiry and frame its development, whereas the latter predetermine your science of morality’s most basic findings. Epistemic axioms direct science to favor theories that are logically consistent, empirically supported, and so on, but they do not dictate which theories those will be. Meanwhile, your two moral axioms have already declared that (i) the only thing of intrinsic value is well-being, and (ii) the correct moral theory is consequentialist and, seemingly, some version of utilitarianism—rather than, say, virtue ethics, a non-consequentialist candidate for a naturalized moral framework. Further, both (i) and (ii) resist the sort of self-justification attributed above to science’s epistemic axioms; that is, neither is any more self-affirming than the value of health and the goal of promoting it. You might reply that the non-epistemic axioms of the science of medicine enjoy the sort of self-justification you have in mind for the moral (and likewise non-epistemic) axioms of your science of morality. But then your second analogy, between the science of medicine and your science of morality, fails. The former must presuppose that health is good and ought to be promoted; otherwise, the science of medicine would seem to defy conception. In contrast, a science of morality, insofar as it admits of conception, does not have to presuppose that well-being is the highest good and ought to be maximized. Serious competing theories of value and morality exist. If a science of morality elucidates moral reality, as you suggest, then presumably it must work out, not simply presuppose, the correct theory of moral reality, just as the science of physics must work out the correct theory of physical reality.

Well, I’m not aware of any “serious competing theories of value and morality”. That is, there may be ones that are proposed seriously, but I don’t see them as serious competitors to a kind of consequentialism that might be construed in a Rawlsian fashion. (I’m aware that some other philosophers see moral values as “objective”. Peter Singer told me he is starting to come around to moral objectivity, referring me to Derek Parfit’s book On What Matters. But I simply gave up on it after glancing through its two thick, dense volumes.)

At any rate, I think Sam forked out the 2 grand but not the 20 grand, because his answer to Born, given here, is a rebuttal, “Clarifying the Moral Landscape”. It’s a long response—8 single-spaced pages when printed out—and Sam ably defends the value of his system of ethics over others. A brief excerpt:

Ryan wrote that my “proposed science of morality cannot offer scientific answers to questions of morality and value, because it cannot derive moral judgments solely from scientific descriptions of the world.” But no branch of science can derive its judgments solely from scientific descriptions of the world. We have intuitions of truth and falsity, logical consistency, and causality that are foundational to our thinking about anything. Certain of these intuitions can be used to trump others: We may think, for instance, that our expectations of cause and effect could be routinely violated by reality at large, and that apes like ourselves may simply be unequipped to understand what is really going on in the universe. That is a perfectly cogent idea, even though it seems to make a mockery of most of our other ideas. But the fact is that all forms of scientific inquiry pull themselves up by some intuitive bootstraps. Gödel proved this for arithmetic, and it seems intuitively obvious for other forms of reasoning as well. I invite you to define the concept of “causality” in noncircular terms if you would test this claim. Some intuitions are truly basic to our thinking. I claim that the conviction that the worst possible misery for everyone is bad and should be avoided is among them.

To me the notion of well-being is intuitive, but I’m unconvinced that it’s objective. Nevertheless, I don’t find any other system of morality better than Sam’s. I think these statements, for instance, are accurate:

Ryan also seems to take for granted that the traditional categories of consequentialism, deontology, and virtue ethics are conceptually valid and worth maintaining. However, I believe that partitioning moral philosophy in this way begs the very question at issue—and this is one reason I tend not to identify myself as a “consequentialist.” Everyone knows—or thinks he knows—that consequentialism fails to capture much of what we value. This is true almost by definition, because, as Ryan observes, “serious competing theories of value and morality exist.”

But if the categorical imperative (one of Kant’s foundational contributions to deontology, or rule-based ethics) reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.

And, tellingly, Sam suggests this example, which reminds me of Hitchens’s Gedankenexperiment about a good act that could only be performed by a religious person:

Of course, intentions aren’t the only things that matter, as we can readily see in this case. It is quite possible for a bad person to inadvertently do some good in the world. But the inner and outer consequences of our thoughts and actions seem to account for everything of value here. If you disagree, the burden is on you to come up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.

Readers, do you want to think of one?

In the end, the criterion of maximizing well-being seems to me (an admitted philosophical tyro) the best we can do for a system of morality. But is it objective? I don’t know, nor am I convinced that that issue is important. What’s more important is the empirical issue of how to measure well-being, and the less empirical one of how to trade off different forms of well-being.

An addendum: Sam’s my friend, and I won’t take kindly to criticisms of his acumen or intellectual abilities in the comments. Stick to the issues. We’ve also discussed torture and profiling ad nauseam, so lay off those subjects as well. This post is about morality.

181 thoughts on “Sam Harris and the Moral Landscape Challenge”

  1. Agree entirely with your penultimate paragraph.

    A lot of the criticism seems to be falling into the trap of making the ideal the enemy of the good.

  2. Subscribe.

    This is one of the toughest questions of sentient thought/action. I am torn between what I wish and alternatives.

  3. I’ve pre-ordered Waking Up: A Guide to Spirituality Without Religion and am looking forward to reading it.

    I’ve always found Sam Harris’s analogy to health compelling as a justification for applying evidence-based rational thinking (I won’t use the word science here, as it is too often conflated with what people in white coats do in laboratories) to matters of human well being.

    I have yet to hear an argument that counters Sam Harris’s views on human well being that does not also counter our views on health.

    The ongoing squabbles about ought versus is merely cede the issue of maximizing human well being to those who have the worst possible credentials, namely the religious.

    Time to stop playing masturbatory word games and get down to the business of making the world a better place.

    1. Um, as far as I’ve noticed, most of the complaints about Harris aren’t about his views on well-being. They’re about his views on morality. (Yeah, Sam Harris thinks they’re the same thing. That’s the problem.) If you’re looking for masturbatory word games, Sam Harris’ claim that “ought to x” is synonymous with “if you x, then you will maximize well-being” is a good candidate.

      1. So what are the alternatives to well-being as a goal of ethics? I see “virtue” but how would virtue be defined without involving well-being? Isn’t the point of not raping, to quote a recent example, to maximise the well-being of those not raped?

  4. I read both segments (the Born/Blackford challenge and Sam’s response) when they appeared on Sam’s blog.

    Nice post sir.

    I was shakily in agreement with Sam when I read his book, The Moral Landscape. After reading his response, I am much more comfortably in agreement with him. I also think he made an admirable defense of his ideas and that he did a good job of showing where his critics come up short.

  5. I also pre-ordered his new book. I find Sam (along with you, Jerry) to be one of the clearest-thinking authors I’ve encountered and (also like you) an excellent writer.

  6. I have no problem with the word “spiritual.” See André Comte-Sponville’s “The Little Book of Atheist Spirituality.”

    1. I think the “twitch effect” regarding the word spirituality had perfectly rational beginnings as it is so often used as a weasel-word by nonbelievers who can’t quite admit to atheism publicly. It also gets used far too much as a buzzword to indicate all manner of hippy-hoppy, tree-huggy woo.

      That said, I am keenly interested to read what Sam has written about it. I’ve been skeptical about meditation’s usefulness for anything much beyond relaxing and recharging, which in my mind is no small thing in itself.

        1. I feel the same way. I find the word “spiritual” odious because of its longish association with woo. However, I know Sam is going to convince me it’s okay, and I’ve pre-ordered his book.

          1. If you take all supernatural connotations away, just what the heck can ‘spiritual’ mean? If you mean an appreciation of beauty or grandeur, why not use those kinds of terms and leave the spirits alone? A Guide to Spirituality Without Religion to my ears sounds as silly as A Guide to Ghost-hunting for Skeptics.

          1. I’ve argued that myself. I am interested in reading how Sam Harris addresses this, as I’m sure he does.

          2. IMO spirits belong in mixed drinks or on the rocks. I don’t need them in my beauty and grandeur.

          3. Similarly, I most emphatically believe in the power of the soul…when it’s to be found in music and food. Put on some Aretha, serve up some ribs and black-eyed peas and greens and cornbread, and now we’re talkin!

            b&

          4. I believe in the power of the sole – when it protects my feet from pokey things on the ground and gives me adequate arch support.

          5. I understand Saffron is supposed to grow well here in the Sonoran Desert. I hope so, because that’s probably the only way I’ll ever cook with it….

            b&

          6. As a former religionist, I suspect “spiritual” typically means having (or prone to having) a highly emotional state associated with elevated oxytocin levels.

        2. “Longish association with woo”. Exactly right. But also the word probably reflects a pining for a feeling of awe in the face of powerful mystery, or to put it simply “a longing for woo”.

          1. I have similar concerns. Might not “aesthetic sensibility” serve the same semantic purpose?

  7. My only serious critique of Sam’s take on morality is that I don’t think he properly addresses the importance of autonomy.

    Essentially, his whole thesis boils down to the Golden Rule. And, indeed, the Golden Rule is one of the pillars of civilization, essential to its health and stability.

    However…if it becomes primary, then very bad things can happen — as we saw with Torquemada. If you place the Golden Rule above all else, then, indeed, it is entirely reasonable to torture somebody for weeks or months or even decades here on Earth in order to save that person’s soul from eternal torment in Hell. You’d want the same for yourself, just as you’d want the doctor to inflict momentary incredible agony upon you to set a broken bone so you might heal and recover fully.

    The “maximize wellbeing” approach can equally reasonably lead to the various hypothetical scenarios where, for example, one person is cannibalized in order to feed many others. If the benefit to the many outweighs the detriment of the one, wellbeing has been maximized.

    In contrast, if the first principle adopted is that you should never do unto others as they don’t wish to be done unto — with a caveat permitting minimal such intervention in order to prevent them from violating this rule — then these problems never manifest in the first place.

    Or, in other terms, the question is whether or not one should value freedom for all over some measure of prosperity. Even if you could sacrifice freedom in exchange for prosperity, would the price be worth it?

    Cheers,

    b&

    1. But why be so totalitarian about freedom? There might be small things you are not allowed to do because they harm others, yet that have no overall effect on your freedom.

      Take the rich paying more taxes to increase everyone’s well-being. If a person has 10 billion dollars and is taxed 9 billion, is it onerous to ask this person to “subsist” on a billion dollars?

      1. It’s more complex than that, though, isn’t it? Taxing him $9 billion doesn’t necessarily improve the well being of all. It will certainly open up avenues of corruption, ennui, nepotism, and cronyism within the taxing entity. Perhaps the $9 billion would be best spent starting up that commercial space flight company the earner has dreamed of for decades but is now powerless to achieve. Perhaps not, if he is a disrespectful, greedy scumbag.

        For me, morality boils down to respect. The Golden Rule shouldn’t be ‘don’t do unto others what you would not have done unto you’, or however it’s supposed to be worded, but should instead be ‘Respect others as you yourself want to be respected’, with the added caveat of ‘those who have relinquished respect due to heinous, immoral, or dangerous actions will not be treated with respect.’ I certainly can’t articulate it completely enough to cover all bases. All I know for sure is that having that tenet as a central life theme can take a person far. I’ve seen it time and again.

        1. Some people don’t care if they are respected and don’t care to respect others (many in the second case). I don’t see respect as any possible solution.

          1. Some people don’t know what’s good for them.
            (Sam Harris covers this in his book)

        2. Do you think misuse of tax money is the rule in western societies? Despite your negative view of government, I think we have achieved much with taxes funding NIH and NSF. We just need to divert the money now hemorrhaging into the military. You could also give heavy tax breaks if indeed the money will be invested in a program that benefits society.

          Regardless, this doesn’t change the point that it is unnecessary for society for an individual to have 9 billion dollars.

    2. It is also a version of the Golden Rule not to do unto others what you would not have done unto you. This prohibitive version is better than the prescriptive one (do to others what you want done to you) just because it is not an open invitation for psychotics or masochists. The problem with your version is that it fails when the other person doesn’t know what’s good for him/her or can’t communicate his/her preferences – exactly the kinds of individuals whom boundary moral questions often affect, e.g. children, the vegetative, or those just unable to communicate due to time constraints.

    3. Even that formulation is problematic.

      What about a two-year-old who doesn’t want her parents to brush her teeth?

      What about the hoarder or person with some other self-destructive psychopathy who doesn’t want his family to stage an intervention?

      1. Oops. Entire comment should’ve been preceded by this quote:

        “In contrast, if the first principle adopted is that you should never do unto others as they don’t wish to be done unto — with a caveat permitting minimal such intervention in order to prevent them from violating this rule…”

      2. Obviously, it comes with a presumption of competence on the part of the individual. We have well-defined standards of determining competence; your toddler example is of somebody who’s not competent. Somebody who’s suicidally depressed as a result of receiving bad medication would be another clear-cut example.

        Those sorts of situations will always require judgement. Even children should have the ultimate decision whether to undergo or continue chemotherapy and radiation treatment, for example, especially in cases where the odds aren’t good even with treatment or the expected benefits are marginal.

        The default position should be to assume competence and thus grant autonomy, with exceptions being well-defined and…well…exceptional. (Even if stubborn aversion to toothbrushing is common amongst toddlers, it’s a rather exceptional phase in an individual’s life.)

        Cheers,

        b&

        1. “…with exceptions being well-defined…”

          Probably easier said than done. Of course, implementing any ethical system that attempts to be good and just is easier said than done.

    4. As usual, Ben, you nail it. It’s all about the after-life, or lack thereof. Sam Harris is way smarter than I am, but I kept saying as I read The Moral Landscape, “What about the after-life?” over and over.

      If someone believes that the epistemic axioms come from mundane science, then great, but most people in western society give credence to a form of revealed wisdom and an after-life, and thus subscribe to an arbitrary value system. This disrupts the notion of a scientifically based moral system.

      I’d beat my kid every day and make him wear a hair shirt if I thought it meant his eternal soul would be saved. No amount of science would change that if I believed Leviticus or the Koran came directly from G-d.

      Thanks, Jerry, for your concise presentation of the arguments. I know I couldn’t wade through 8 pages of a single-spaced argument.

      1. Was that a joke or are you really unwilling to read eight pages? I think you pretty much disqualify yourself from any serious discussion when you declare eight pages as too much to be worth the effort.

          1. Addendum: OK, I’ve read Sam’s post and it doesn’t change my agreement with Ben Goren and my comment above.

            You are correct, JBlilie, it is clearly written, as I had expected having read 3 of Sam Harris’ books.

            Longer response: Harris says, “But the fact is that all forms of scientific inquiry pull themselves up by some intuitive bootstraps.”

            If those “intuitive bootstraps” include an omnipotent, omniscient sky wizard that makes up rules that don’t conform to natural law, then all bets are off on trying to make value judgments. I might agree with Harris’ moral system based on empirical observation, but if 40% of the world’s population eschews the concept, what’s the point?

          2. Further comment. Harris says, “I don’t believe that any sane person is concerned with abstract principles and virtues—such as justice and loyalty—independent of the ways they affect our lives.” Again, what about how they affect our “after-lives”?

            When a parent mutilates their daughter’s genitalia or denies her an education, it is done ostensibly for her own good, to ensure she gets into heaven. A pro-life Catholic might force his daughter or wife to die from pregnancy complications rather than opt for a medically-indicated abortion, because it’s “God’s will”, and an abortion would ensure an eternal hell for the woman. The “intuitive bootstraps” he is pulling on are different from someone else’s.

            OK, maybe Harris is implying that those individuals who act in accordance with their eternal soul are not “sane persons”, but then it negates a whole lotta folks (judging from the popularity of books and movies that have come out lately about the reality of heaven, and the popularity of churches and preachers). I guess I’m not as sanguine about building a universally accepted moral system.

          3. …which all again goes to demonstrate that the problem isn’t a failure to maximize wellbeing, but a failure to respect personal autonomy.

            We see it as a necessary evil to administer vaccines to children, even over their crying protestations. Those who engage in ritualistic genital mutilation see their actions in similar terms. But vaccination protects society as a whole and has negligible repercussions; mutilation only “protects” the victim.

            b&

          4. “…mutilation only “protects” the victim.”

            Is that true? I always assumed that female circumcision was ostensibly meant to remove the pleasure of female orgasm, which, if left unchecked, would lead to female promiscuity and a breakdown of their social structure. In effect, circumcision is an inoculation, or am I wrong?

          5. …and by doing what is in the best interest of the community, the circumcised girl is acting morally, or so the argument would go.

          6. Leaving ethics up to the religious, as it largely has been for most of history in practice if not necessarily in academics, just doesn’t work for me.

            That people use bootstraps that can easily be shown to have no accord with reality is precisely why science should be brought to bear on these issues more. Weeding out bootstraps that don’t comport well, or at all, with reality is job number one, and one that the tools of science are very well suited to.

            Getting everyone to play nicely together to determine morals that will apply to all borders on fantasy no matter what tools or boot straps you could use. It seems so often people view things as absolutes. In reality absolutes don’t occur and they don’t have to for things to get better. Figuring out what works best, as best as is possible with the tools at hand, and working in that direction is better than not, and all that is really possible.

      2. desertviews,

        Sam has indeed talked to the issue of an “after-life” in terms of his moral theory.
        He’s done so fairly often.

        The first point is that Sam would say that IF there is an afterlife…DEPENDING on what it is and whatever consequences are attached to it…that will surely figure into any moral calculus. The problem is that there are so many versions of the afterlife, some of which alter depending on what you do on earth, some of which don’t, that you’d have to take each theory one at a time.

        But that gets to the second point deeply relevant to Sam’s thesis:

        Sam’s whole point is that we are talking about facts in the empirical domain of inquiry. Anyone adhering to his thesis would have to be able to give good empirical evidence for an afterlife and the REALITY of its specific consequences before it ought to be taken seriously in our moral consideration.

        As it happens…there is no good evidence for an afterlife, so on Sam’s thesis we don’t have to account for it in the moral landscape.

        The fact that a whole lot of people BELIEVE in an afterlife is simply to say that a whole lot of people believe wrong things about reality.
        And Sam’s thesis is that we OUGHT to be grounding our moral reasoning in reality as our best science describes it. That’s what Sam is arguing: let’s stop basing morality on fantasy, and base it on reality.

        That a lot of people believe in an afterlife no more undermines Sam’s thesis than the beliefs of Young Earth Creationists undermine actual scientific theories of geology and biology.

        Vaal

        1. Vaal, thanks for your reply. I’m just trying to understand how Harris or anyone can deal with someone who would say, “The Bible says it, I believe it, that settles it.”

          The only time I hear or read Harris discuss the after-life is to dismiss it. Fine. I get it. But how do you construct a universal moral system that includes people who do not subscribe to an empirical view of existence?

          For most individuals in the USA, the “intuitive bootstraps” are not based on reason and empirical evidence; they are based on revelation. They intuit that G-d “causes” all the effects we experience.

          I appreciate your answer, but arguing that 60% of the population is simply wrong seems unsatisfying to me, especially if the point of the exercise is to prescribe a universal value system.

          1. It’s intended to be an ethical system based on empirical facts. Are you insisting that that is an unacceptable foundation? You’d have to if you’re going to ditch it because there’s a lot of disagreement – stubborn, thoughtless, perverse disagreement in this case – over what the facts are.

            You’d get that with _any_ consequentialist system. Feed it inaccurate data about what’s the case, and you get nightmare scenarios as the right thing to do. You can get the same results with any practical guide to health, driving, cooking, etc., yet we still insist there that the proof is in the pudding, and the fact that the dish would come out awful given these instructions if you were cooking in a vacuum doesn’t have us throw out the cookbook and adopt virtue cooking or deontological medicine.

            I would think – I would think I would have a huge amount of company thinking – that where religions make people do bad things is where they make people believe incorrect things. That’s all it takes to keep the fact that religions believe in an afterlife, and weigh consequences to it, from being an effective objection here.

          2. Respectfully – your concern is one that’s often raised by people who, for lack of a better word, lack the imagination to see how differently the world and its people can be organized, to realize how much it already has changed, or to see past one or two generations.

            It turns out that the beliefs of societies really do change. 1,000 years ago everyone believed in some form of life-dictating superstition. They believed in astrology, or witchcraft, or gods of the ocean, etc. Today, astrology and witchcraft are basically defunct, and we have entire countries that are almost entirely atheistic. Huge portions of Europe are increasingly ejecting gods and superstitions. Non-believers in the US, while still a small group, are also expanding rapidly.

            Your criticism generally seems to be this: Most people have heaven or hell included in their well-being calculus, and so arguing that their equation is wrong isn’t useful.

            But it is useful. It actually matters that people get their equation right. And that means something deeper than just believing they have it right. If huge groups of people believed that witchcraft was part of their well-being, it would actually matter that we convince them to reject that variable. And to think that this can’t happen or isn’t a useful enterprise is to simply lack imagination. Sure, it may not be useful to a stubborn, single person. But the kinds of changes we are talking about are global and civilization-based. Arguing that certain beliefs are wrong takes generations, and each generation that’s brought up in the light of refutations of bad ideas really does change. This has already happened on every front. So it seems silly to believe that it can’t happen with morality and with whether or not well-being is really playing out on the eternal landscape of heaven or hell.

          3. Jeff Engel and jefscot,

            I know you both are correct and that Sam Harris has addressed this issue in his book The Moral Landscape, opining that god-belief in some should not stop reasonable people from instituting a value system based on empirical observation.

            Thank you for your thoughtful responses. Having lived for two decades in a small Midwestern community, I’m not sanguine about expunging the deep-seated superstitions from such communities…and I realize this is my own (possibly mistaken) opinion.

            I tend to agree with most of what Harris says, although I think the task of having even reasonable individuals agree on moral issues is more difficult than he thinks. Maybe it’s useful as a thought experiment. Fine.

            His basic utilitarian argument of maximizing what he calls the Good Life for the greatest number of people is problematic. One person’s view of the basic human necessities that constitute a Good Life may differ from another’s. One may view an SUV that burns fossil fuels and can haul his brood of 7 kids around as required for a Good Life, and if that means we support despotic regimes in the Middle East that keep their populations under control by means of Wahhabism, then so be it, because Bedouins are not part of his moral community. My Good Life is more important than the Good Life of millions of poor Saudi peasants. Who decides?

            While Harris may opine that members of the moral community need be autonomous conscious individuals, another may feel that 8-cell human embryos, while not conscious or independent, have the *potential* for such, and thus should be accorded full person-hood rights. Who decides?

            I realize that I may be overly critical of Harris’ well-written thesis, and he certainly has stimulated my own perspective, but such problems with utilitarianism have been pointed out elsewhere over the last couple of centuries.

        2. “…let’s stop basing morality on fantasy, and base it on reality.”

          I haven’t read Sam’s book, so perhaps I shouldn’t even be commenting, but based on what I *have* seen and read, this seems like a crucial point, and one that the critics I’ve seen ignore.

          At the very least, science can only help. If one wants to talk about right and wrong, one has to talk about what actions, events, situations, or ideas about things in the world are right or wrong, and that means you’d better get a handle on reality. What’s the best way to do that? Science!

          If science were to demonstrate conclusively that sexual orientation is biologically determined and absolutely not a choice, it would remove the primary objection homophobes raise against homosexuality. To me, possibilities like this make it look like science is eminently capable of shaping morality.

    6. Yup.

      [Sorry. Got a case of hang over still hanging over me. It’s summer and apparently it should be celebrated. :-/]

  8. Sam’s version would have all of us give away most of our money to poorer people,

    This fails on consequentialist grounds. It wouldn’t make any difference. As much as I despise right-wing rationalizations, I have to admit that they have a point here.
    The best way to save the poor is comprehensive sex (and everything else) education, free contraceptives, and free abortion.
    All three of these things are considered immoral by some, so it just takes us back to the is/ought problem. Who decides?
    Patricia Churchland has a very good book on the evolution of morality (I forget the name), but she shies away from considering other evolved behaviours that are considered moral by some but immoral by others.

    1. It also isn’t in line with human behaviour. There is a reason that communist systems aren’t very successful and inevitably tumble into corruption as people try to skirt them.

  9. An electron will never make normative claims. There is no evidence for it and Sam knows it. Nevertheless, Sam argues quite well that science is a method to develop moral theory and that the descriptive and the prescriptive come to the same thing…assuming “the criterion of maximizing well-being”.

    Science is here to stay (I hope) and it develops morality. It is the best game in town. That is top-down.

    On the other hand there is no evidence that nature cares at all about our well being. That is bottom-up. And I do not think Sam even attempted to convince anyone that a piece of dust 8.2 billion light years from us cares at all if someone dies a miserable death at the hands of a tyrannical theocratic geo-based lunatic.

      1. I am not sure that that will always be true.

        A microwave, in principle, has no concern for my life. However, it has been engineered to serve me, and it protects me from being irradiated and electrocuted. Google, among others, is making cars, which I think we will someday all use, that will functionally protect us while transporting us to our destination.

        Engineering controls increasingly account for a greater share of how we actually practice morality. Medicine is also a good example. It does not care and is certainly not conscious, but it is designed to improve our well being.

        These are good examples of how nature does or can play a critical role in shaping our morality. But Sam falls short for most of us who wanted a true solution to the “derive an ought from an is” problem (http://en.wikipedia.org/wiki/Is–ought_problem).

        1. “However, it has been engineered to serve me and it protects me from being irradiated and electrocuted.”

          Yes. Engineered because conscious beings cared about other conscious beings and themselves.

          My favorite summary of engineering (I’ve been a working engineer designing things on which people’s lives literally depend for 30 years now):

          An engineer can do well with $1 what any fool can do poorly with $2.

          1. Or better…physicists like to joke:

            If the boss wants one soon:

            a physicist can get it done in soon * pi.

            an engineer can get it done in soon * 2.

            and Scotty will get it done in 30 minutes or less

        2. Thinking about it, it is not particularly significant to not have a solution for the is/ought “problem”, because what we are talking about is rational reasons to learn to be good. Thinking like that, we see there must be a large number of reasons, and thus likely more than one solution.

    1. Kevin,

      I’m confused by your post.

      “On the other hand there is no evidence that nature cares at all about our well being. That is bottom-up. And I do not think Sam even attempted to convince anyone that a piece of dust 8.2 billion light years from us cares at all if someone dies a miserable death at the hands of a tyrannical theocratic geo-based lunatic.”

      I can’t figure out what you think that has to do with Sam’s thesis.

      Sam isn’t arguing that “nature” cares about our well being; he is arguing that only certain entities in nature – conscious creatures who can be aware of their suffering – care about their well being.
      So it seems irrelevant in this context to mention any bit of nature being uncaring.

      Cheers,

      Vaal

  10. If we look at morality from an evolutionary perspective, it is probably a byproduct of altruism and of what allowed humans to live in large groups. There might be some objective rules about large groups of humans living together, but in the end these would have to be adapted to different situations and problems, and we would have to decide what is the best course of action for each. I’m with Sam in that science and objectivity are the best way to decide on a best approach, just like medicine, but the overall goal would be entirely subjective.

    I entirely agree with Sam’s utilitarianism. Arguments like the utility monster are ridiculous because simple rules can prevent this: you cannot decrease the well-being of many for a few, or decreasing well-being should carry a cost much greater than increasing it, etc.

    The problem without a solution is – what do you do when a large segment of the population does not share your moral values and may not care about the well-being of the many in the first place (see republicans)?

    PS: Did anyone else find the last two paragraphs of the response way off the mark? They declare philosophy, out of whole cloth, the only way to discuss morality and ethics. Didn’t seem like a solid argument to me.

    PS2: I really like Sam, but he is making too much out of altered mental states. Mild forms can lead to increased creativity and merriment, but that’s about it. We also lead such stressful lives that stopping for a moment to meditate would probably benefit a lot of people. Having said that, I look forward to the book, to learn something new and have my mind changed.

  11. The best book I’ve read so far this year was Joshua Greene’s Moral Tribes, and one of the things Greene does in that book seems to me to parallel Harris’ argument, in that he defends utilitarianism as a meta-morality to be used when judging the merits of different group-centric moral systems. But to me, Greene effectively softens some of the more absolute conclusions that would seem to follow from The Moral Landscape, and he explains in a footnote that he’s not convinced that Harris has made his case. I think Greene recognizes the reader’s intuitions about morality and understands that if his conclusions are not in sync with them, he is obligated to explain why. In fact, I thought Greene might have been a good candidate to respond to the challenge.

    Bottom line: even if the scientifically calculated greatest well-being for all would be best achieved by all of us giving, say, 90% of our incomes to charity, Greene’s response is “we can’t all be superheroes responsible for solving all the world’s problems.” The best strategy is probably to aspire toward that “greatest well-being state” but temper our actions with what we instinctively feel to be moral, such as spending more on our own children.

    1. “. . . even if the scientifically calculated greatest well-being for all would be best achieved by all of us giving, say, 90% of our incomes to charity, . . .”

      I’ve always had problems with model scenarios like that. If the model does not correspond to reality, conclusions based on it won’t be relevant. To make this one worth considering, it first has to be established that the probability of the greatest well-being for all being achieved by everyone giving 90% of their incomes to charity is remotely significant. After all, we are talking about using the tools of science to investigate these issues.

      At first blush it sure does not seem to be remotely possible. At the least it seems clear that a society in which that were possible would be a very different one than any currently on planet earth, and that in that society that would indeed be a good path to follow.

        1. You may have noticed that one of the few industrial sectors showing growth in recent years is new medical charities, mostly built around specific types of cancer that the figurehead either survived or died of. It is an industry like many others, selling a kind of satisfaction to customers (donors), supporting a bureaucratic infrastructure, offering marketing deals to other businesses (sale of mailing lists etc.), and generating profits for a small number of owners/investors. Whether any funds end up supporting medical research or subsidising treatments is anyone’s guess.

  12. I’m looking forward to reading Harris’ new book, so thanks for this incisive review.

    As you mentioned, the difficulty comes in the actual practice of ethics, weighing all the competing considerations.

    For instance, will Americans be willing to give up their standard of living in order to assure the well-being of everyone?

    Also, which parts of “flourishing” are more important? Would it be better to be poor and malnourished but emotionally positive versus well-to-do and well-fed but dissatisfied? Etc.

    What does one do when differing nations have different goals, different values, different worldviews?

    What will help the world community’s “flourishing” might oppose or damage our own nation’s flourishing.

  13. I agree that ethics should be objective, and true.

    I disagree that the utilitarian principle is the best available. Sadly, it seems to lack explanatory power on well-known, settled moral questions. Virtue ethics, especially Stoicism, works a lot better. I suspect the reason we favour utilitarianism is that it’s naive, and everyone knows something about it.

    A better way to get objectivity is to gather data on what people consider vicious and virtuous, especially with regard to the validity of that data.

    The is/ought problem has been solved, by Searle. And after that, I do believe moral philosophers tested consequentialism, deontology, and Aristotelian virtue ethics in the 60s, 70s, and 80s, before starting on their own theory. As Lawrence Becker found out, it was basically Stoicism reinvented.

    Being outside the field, it’s hard for me to confirm, and philosophy being such a shambles, it’s hard to get a straight answer. The main problem is that 70% are antiscientific, subscribing without question to Kantian rationalism. Becker’s New Stoicism has a bibliography, though, that should aid further inquiries.

    Incidentally, Euthyphro is a good Platonic dialogue for secular ethicists. Socrates basically disproves gods-based ethics.

    1. Yeah … but the trouble with any culturally derived set of moral axioms is: The shifting moral zeitgeist.

      I’m sure the original Stoics had a rule about the moral way to treat one’s slaves.

      1. The “zeitgeist” can be counteracted by looking at all times and places at once, that is, not ignoring data. Another way is to construct experiments that isolate the subjects from society.

        Also Roman slavery was a lot different from what it became in the West in later years. Slavery was, in some cases, in fact a way to achieve social standing, so it wasn’t uncommon for people to sell themselves.

        Furthermore, while the Stoics didn’t have access to non-slave-based societies, the idea that slaves should be freed is originally from Zeno’s time. And even if there are Stoic rules of conduct towards slaves, this is not so much a bug as a feature. For while we have prohibited ownership of people, people still lose their freedom and have to work for little to no pay. We just call them by another name, today.

        Don’t write them off because they lived and worked in Roman times. Many of their findings are radically at odds with prevailing Roman thinking, and their intent was to improve society, not just live in the times.

        Also, a mark of generality, objectivity, and truth is independent reinvention. Not the only step, not even the most common, but one, nonetheless.

    2. Stephan Brun,

      The is/ought problem has been solved, by Searle.

      That’s quite an optimistic assessment given that Searle’s “solution” is controversial among other philosophers.

      I don’t find his solution particularly persuasive myself (his theory of “promise/obligation/institutional facts” seems to be based on presumed values, which sort of begs the question, among other things).

      I do find other solutions a bit more persuasive though 🙂

      Vaal

      1. That’s quite an optimistic assessment given that Searle’s “solution” is controversial among other philosophers.

        Given that the majority of philosophers consider rationalism uncontroversial, that doesn’t bother me much.

        I do find other solutions a bit more persuasive though[.]

        Interesting. Have a list of those?

    3. What worries me about Harris’s espousal of utilitarianism is that it seems to be assumed, admittedly with some supporting evidence, but seemingly without attempts at showing it false. The latter bit is, to my mind, crucial to the scientific endeavour, and so it appears that despite his protestations to the contrary, he is in fact not using the scientific method.

      What I would like to know is: Why did empirical moral philosophers reject consequentialism and deontology in favour of virtue ethics, and why should we choose consequentialism over either of the other two? Relativism I share his disdain for ….

  14. How can you say “Well, I’m not aware of any “serious competing theories of value and morality”? That is a remarkable statement and does your side no good.

    “I am not aware of any serious competing theories of speciation except ID” is an analogous statement and would say more about the interlocutor than about science.

    1. Sorry, but I spoke the truth, and I don’t give a rat’s patootie if what I don’t know does my side “no good.” The “serious competing theories” come down to a form of consequentialism.

      Further, your analogy to ID is bogus, for ID is not a serious competing theory of speciation, because it’s not a serious scientific theory at all. Further, you appear to know little about it, since ID doesn’t explicitly deal with the origin of species. Show me where ID talks in detail about how reproductive isolation arises between populations.

      Your thinking that the ID statement is similar does your side no good.

      1. I’ll leave aside the issue of whether all ethical theories are consequentialist…

        While ID is clearly not a serious competing theory of speciation, if we use the definition of science advocated by Harris in The Moral Landscape (and elsewhere), ID does in fact become a scientific theory! Harris may be right that calling his theory scientific or not is a matter of semantics, but that choice has consequences that suggest semantics =/= unimportant.

  15. “If you disagree, the burden is on you to come up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.”

    There’s the rub. It’s always about consequences, but the calculation of potential versus actual consequences drives the argument. Case in point – drone strikes, or the T word.

  16. I’ll echo Steve Oberski. I’ve found all the “hubbub” over The Moral Landscape instructive. The Is/Ought issue obviously is really dear to the hearts of many, to the extent that they can’t see past it.

    I have never heard convincing arguments that refute Sam’s arguments for using science to study and inform our morality. To me this seems like a “no brainer” and I have been a bit surprised at how much criticism from skeptics / rationalists / atheists Sam has received.

    Much of the criticism seems to focus on the idea of Well Being being an objective basis from which to start building a moral framework. Though that may remain an interesting philosophical question (Is/Ought, Sub/Ob), I’ve yet to see a convincing argument for a better metric.

    And just what is the most accurate way to think of Objective / Subjective? It seems to me a fallacy to suppose digital, discrete values are accurate or especially useful in real-world applications. I think a better way to think about it is as a continuum from Objective to Subjective.

    I have always been curious about the seeming phobia of the science community regarding morality. It is like a Pavlovian reaction conditioned by all the cliche “science is bad” memes expressed in literature and movies.

    1. History may be informing the objections. Science is a relatively young thing. Morality is much older. Everyone has some innate ability to contribute to a discussion on morals but science requires a bit of training to understand. And in a historical sense, science has been used (by scientists and non-scientists) to justify actions that most of us recognize to be completely immoral. I think that fact has made many people gun-shy when it comes to Sam’s argument.

      (FWIW… I pretty much agree with Sam’s argument. Science has been misused in moral decision making in the past, but that doesn’t mean that it isn’t the way forward for informing moral decisions. It just has to be done right, to the extent that we can get it right.)

      1. Science may be young, relatively speaking, but as a method for arriving at moral solutions it will soon surpass all other methods for improving moral systems. This does not suggest that philosophy or political science are irrelevant. On the contrary, those fields will only be enhanced by incorporating research based on how we can maximize wellbeing.

    2. For me, “is / ought” is a red herring and a non-sequitur. Indeed, it only makes sense if you assume some sort of absolute moral authority as is typically embodied by the gods.

      The relevant question is, “Given what you want, what is it that you should do to achieve your goals?” And, once you realize that your best chances of accomplishing almost anything involve enlisting the help of many other people (aka, “society”), the rest of the pieces fall into place pretty naturally and obviously. Add in evolutionary desires for survival, and you’re pretty much all set.

      Cheers,

      b&

      1. I think there were at least five instances when I started typing “the is/ought issue is a red herring” when I was working on the above comment.

          1. I’m not sure about the “tele” part, but I know a few people who would agree that I am “pathetic.” Personally I don’t think kids should talk to their dad that way but . . ., what are you gonna do?

          2. Ok, sorry. I figured Ben and I have had enough interaction here that he would know that was a joke and there was no insult intended. I guess it’s not obvious, though.

      2. The is/ought distinction keeps being misrepresented and misunderstood, including by Harris (which made many of his arguments seem silly). The point of Hume’s argument is that we cannot reason our way to what we should want; we intuit or “feel” it first, after which we can apply reason.

        So your statement… “Given what you want, what is it that you should do to achieve your goals?” is exactly in line with what Hume was saying. He then explored what moral intuitions generally underlie people’s moral judgments and “found” that it was (what we would call) utility, which is pretty darn close to well-being.

        Hume was not a moral nihilist or skeptic and made that clear in his writings. He was basically a utilitarian. People should check out his book: An Enquiry Concerning the Principles of Morals… It’s Enlightening 😉

        1. “The point of Hume’s argument is that we cannot reason to what we should want, we intuit or “feel” it first, after which we can apply reason… Hume was not a moral nihilist or skeptic and made that clear in his writings.”

          But this is a form of naturalistic fallacy (what comes to us naturally is good), and offers no solution to the question of mutually incompatible wishes (Joe wants to kill everybody, Jill wants to stay alive – who’s right and who’s wrong?). It also succumbs to its own version of the Euthyphro Dilemma: if good and bad are just what we want to do, then that makes ethics a question of arbitrary whims, and a desire to kill others cannot be condemned coherently; if intuitive wants appeal to reasons why they are moral, then they are superfluous and those reasons are sufficient. And if we can’t reason to what we want, then what grounds can there be for taking our emotions seriously, when they deny themselves a reason to be treated unquestioningly?

          The is/ought distinction, as many anti-realists treat it, leads either to some form of uncritical mystic dualism which treats ethics as living in a completely different world from reality, or to moral nihilism. That’s because it effectively turns moral statements into either mysterious powers that just popped into human history, irreducibly complex, or meaningless noise with no connection to reality. Intentionally or not, I think Hume and this line of thinking have caused more confusion than clarification.

          1. Nice response. I agree with you that it does not offer a solution to mutually incompatible desires. I’m not sure it has to. It is simply an observation, or statement of fact. First we feel, and then we apply reason. We can’t use reason to determine what we should feel.

            It does not matter what such an understanding would lead to as far as behavior is concerned. The question is whether the statement is true or not about moral judgments.

            If you can find a moral judgment that reduces to a purely analytical/empirical observation that would be the only way to nix Hume’s point.

            I disagree that he was making a naturalistic fallacy. I don’t think he was saying that makes it good, just recognizing that that is where we get our concepts of good. He argued that an interest in utility was at the core of (most) people’s moral intuitions. I would disagree with that assessment, which is why I agree that he largely failed to deal with moral incompatibility.

            I’m not sure about other anti-realists, you may be right, but I personally don’t believe that reduces moral statements to merely noise with no connection to reality. A moral judgment is just as real as the person making it. The difference with realist concepts, is that for anti-realists moral statements are reduced to understanding the nature of individuals and not the universe as a whole.

            Sam Harris is a moral realist and opposed to Hume’s is/ought distinction, as was Kant. I would argue that both have caused more confusion and less clarification than Hume regarding moral judgments. Perhaps one thing we could both agree on is that none of these guys found the key to understanding what is “good.”

          2. “Nice response. I agree with you that it does not offer a solution to mutually incompatible desires. I’m not sure it has to. It is simply an observation, or statement of fact.”

            If it informs your anti-realist stance on what good and bad are, then you cannot treat it as a neutral descriptive ethics, and then claim it supports a metaethical position, without indicating how it deals with conflicting values. The is/ought division prevents a priori any connection to facts about the world, so incompatible values remain unsolved for no more cogent reason than common origin. In any case, this rests on the dubious premise that emotions make no claims about the world. Fear informs me that certain things are dangerous, even if they include harmless spiders, and happiness informs me that I am in a secure and pleasant situation, even if I don’t know something bad is about to happen. Unless you think passions have a non-evolutionary origin, they are designed things that can make mistakes.

            “First we feel, and then we apply reason. We can’t use reason to determine what we should feel.”

            We do it all the time, even outside ethics. We reason that scoffing our faces with candy is bad, even though the stuff is delicious and we have urges to do it, because we reason that it will prove detrimental in the long run. We reason that it’s OK when someone dies horribly in a film, despite how overwhelmingly realistic it looks, and how strongly we react to the sight, because no lives are actually at stake. We also reason that creationists are lying when they say that evolution makes us immoral, by going beyond atheist and scientist stereotypes and looking up statistics that show that countries which accept evolution have, say, lower crime rates than those that don’t.

            It’s the same in ethics; we reason that we should feel more upset when we learn about catastrophes occurring in other parts of the world, even though we might not feel a twinge of discomfort upon reading about it in the newspaper. We reason that punishment is best viewed as a deterrent to dissuade criminals, even though our primate brains are crying out for blood in spiteful revenge. We reason that a psychopath should have been a normal person instead of a psychopath, and so we should temper our hatred, even as we loathe the unethical actions they perform to us personally.

            Pinker and Singer argue in their books that the declining trends of violence in many parts of the world are best explained, in part, by the wider use of reason, which should not be possible if emotions were all that was needed to settle ethical issues. Most of the secular humanist morals we abide by are based on explicit premises about human nature, not after-the-fact rationalizations for gut feelings, and had to be defended in debate before they became mainstream. If the is/ought distinction were valid, there should be noise, not trends, since an ought immune from any objective reality would cycle like fads on a whim, produce hundreds of factions rather than converge on a coherent established system, and would not need any real rational defence.

            “I disagree that he was making a naturalistic fallacy. I don’t think he was saying that makes it good, just recognizing that is where we get our concepts of good.”

            Hume did not just point out that reason is the slave of the passions, but he added that it ought to be. In any case, the position that good and bad are not merely derived from emotions, but cannot be reasoned into, is no better than saying they are derived from magic, because you’re still stuck with the Euthyphro dilemma, only picking the “it’s arbitrary” option rather than the “it’s independently reasoned” one.

            You cannot be consistent if you – on the one hand – point out a psychological fact in a descriptive ethics (“recognizing where we get our concepts of good”), and on the other hand use that to justify a meta-ethics of anti-realism, without at some point making the further claim that goodness IS gut feeling or passion and reason can stop the bus here. And the problem with that view is that it either obscures and mystifies ethics, preventing any critical analysis and therefore calling itself into question, or it renders it arbitrary, disabling oneself from calling out unethical behaviour by denying itself a robust criterion from which to measure it.

            Suppose a dictator of a small country is motivated by strong passions of hatred and disgust to decimate a rival ethnic group for their “crimes”. If your feelings of revulsion at his actions have no substantial reason behind them, how can criticism of the regime be taken as anything other than a competing “moral fad”? You cannot say the dictator is wrong, he cannot say you are wrong, and nothing of consequence occurs, because the deaths of millions of people carry no weight in themselves, only whatever weight the lens in your eye assigns them, and the judgement is therefore no more substantial than a preference for one flavour of ice cream over another. It would be as if 1+1 equalled anything you felt like, so “1+1=” would end up meaning nothing, because you could replace it with “2+2=” and nothing would change. The form of anti-realism you suggest, because it just says our emotions work in mysterious ways, takes the ethics out of ethics. Hence it has its own version of the Euthyphro Dilemma.

            “I’m not sure about other anti-realists, you may be right, but I personally don’t believe that reduces moral statements to merely noise with no connection to reality. A moral judgment is just as real as the person making it. The difference with realist concepts, is that for anti-realists moral statements are reduced to understanding the nature of individuals and not the universe as a whole.”

            This is incorrect for two reasons. The first is that you – I presume unintentionally – bait and switch. You don’t seem to appreciate the distinction between an opinion on something and the something itself, so when you point to the subjective nature of ethics, you inadvertently suggest that people’s attitudes towards ethics IS ethics, because you use the fact that people have opinions as proof that ethics is nothing more than faith-based opinion that cannot be justified in principle. This doesn’t even concede the possibility that some people’s opinions might just be mistaken, because you’ve assumed a priori that they can’t be. You’ve given up the rational enquiry.

            Let me explain: You say that a moral judgement is just as real as the person making it. In a banal sense, any judgement a person makes is real. It’s a fact about their mind at a particular point in time. But the issue is whether the concept of goodness has any real basis behind it. Far from clarifying the issue of what goodness and badness are, you’re just invoking more mystery by saying our passions produce our morals, without recourse to any reason or facts of their own. Yet, people have contradictory values, have different impulses at different times, and can’t defend these impulses with reason. The mere fact that people can hold contradictory views doesn’t translate into those views being automatically valid just by existing, and by giving up reason so easily, you give up all the apparatus that goes with it, including validity and possibility of error.

            Even if it is true that people think with their gut, your anti-realist stance goes further and claims that there’s no objective standard by which to parse these disparate products of the mind. In which case, that’s as good as saying anything the mind churns up is good by fiat, in defiance of all the contradictions and arbitrariness. And to say something has contradictions and arbitrariness is to say it’s not worth taking seriously, because that violates basic rules of intellectual engagement. In the end, anti-realism is about more than pointing out people are all over the place: it claims there’s no more substance to ethics than whim.

            The second is that it mischaracterises realism. The difference between realism and anti-realism is not a question of the nature of individuals versus the nature of the universe, which is a rather evasive way of putting it. It’s a question of whether good and bad have certain real-world properties, and that one can have incorrect ideas – and therefore incorrect morals – about them. Pointing out that people get their morals from intuition cuts no ice: a realist wants intuition to cough up its logic and run the risk of being incorrect. That doesn’t mean a realist wants the laws of chemistry to contain ethical judgements, any more than a biologist wants the laws of physics to describe quarks using ethology, so it doesn’t make sense to say realists view “moral statements” as being “reduced” to “understanding the nature of the universe as a whole.”

            In short, I don’t necessarily disagree that people use their emotions to make moral judgements. What I disagree with is the notion that reason and fact have nothing to do with it, as if emotions just conjured shoulds, goodness, badness, right, wrong, and the whole caboodle miraculously out of thin air, could never be wrong because they are inscrutable, and provide a sound basis for ethics while simultaneously avoiding any responsibility when it comes to the contradictions and caprice of at least some ethical systems. Even intellectually, that’s no better than chalking up the mind to the workings of God.

          3. Wow. There is just too much material to respond to in this format. I will condense things to three points.

            1) You told me what I believe and you are wrong. I do not use meta-ethics or ethics in the way you suggest at all. If you want to debate my actual position you can come to my site, where I will be producing articles on ethics in greater detail over time.

            2)”We do it all the time, even outside ethics. We reason that scoffing our faces with candy is bad, even though the stuff is delicious and we have urges to do it, because we reason that it will prove detrimental in the long run.”

            That statement shows reason being used to support an unreasoned desire. Break it down into arguments and you will find you need a conditional in there somewhere, which must be satisfied by an innate or intuited motivation.

            3)”We also reason that creationists are lying when they say that evolution makes us immoral, by going beyond atheist and scientist stereotypes and looking up statistics that show that countries which accept evolution have, say, lower crime rates than those that don’t.”

            That is not an ought, that is an is statement. As such there is no problem with it being reached by reason alone.

      3. Ben Goren,

        “For me, “is / ought” is a red herring and a non-sequitur. Indeed, it only makes sense if you assume some sort of absolute moral authority as is typically embodied by the gods. “

        I disagree strongly if by that you mean that it’s a trivial issue. Hume’s observation is bang on, important, and useful in identifying lots of bad moral arguments that people have assumed made sense.

        And it remains so whether a God exists or not.

        We all recognize that there is a difference between:

        1. The child IS being held captive by a sadistic killer.

        and

        2. The child OUGHT to be held captive by a sadistic killer.

        That’s an obvious example where everyone would agree that the “ought” does not derive directly from the “is.” (In fact, someone claiming it did in that case would be branded a moral monster).

        What Hume noted is just how subtly ubiquitous this strange non-sequitur seems to appear not just in such obvious examples, but it seems to be underneath most…maybe all…the moral statements people tend to make, even concerning “non-controversial” value statements.

        Ask someone where they got their “ought” from and they’ll tend to appeal to some “fact” but without actually showing how the two are bridged, and Hume simply said anyone attempting this “Is to Ought” owes us an explanation for how they are deriving the ought. And he’s right.

        It’s not that it’s in principle impossible to do, only that we can’t get away with this move without showing our work. I find it one of the most powerful questions that very quickly unearths the bad foundations of many different moral claims and theories, including theistic morality. (E.g. Theists so often move from “is” Statements like “God commanded X,” “God Is The Creator” etc to “ought” statements “therefore we ought to do as God tells us.” All non-sequiturs because they do not bridge is to ought).

        As it happens you have given one short-form version of an answer to the is/ought question: that our goals/desires help us derive ought from is. I personally agree that seems to be on the right track (and so do a number of philosophers). But it’s still far from easy – there’s a lot to account for and defend when making such a claim. E.g. Whether “ought” is centered on our individual goals, or whether “ought” derives only from desires/goals shared with other people, and since there are differing goals which goals “ought” to take precedence…which gets into some trickier territory in keeping things coherent.

        Vaal

        1. What Hume noted is just how subtly ubiquitous this strange non-sequitur seems to appear not just in such obvious examples, but it seems to be underneath most…maybe all…the moral statements people tend to make, even concerning “non-controversial” value statements.

          That may well be Hume’s original intention with the phrase, which is much the same point I try to make: that it’s not even relevant to the discussion of morality. But, if that was Hume’s intention, then that just makes it all the more ironic that the way the phrase actually gets used is to obsess over how to get to “ought” from some ethereal “is” regardless.

          E.g. Whether “ought” is centered on our individual goals, or whether “ought” derives only from desires/goals shared with other people, and since there are differing goals which goals “ought” to take precedence…which gets into some trickier territory in keeping things coherent.

          A game theorist would tell you that, in practice, it must ultimately rest upon individual goals, but that — and here’s the important part — satisfying the goals of the group is almost invariably a prerequisite to the most effective method of achieving individual goals. Or: you should want to help society be healthy and prosperous, because you can then leverage that same healthy and prosperous society to much more effectively fulfill virtually any goal you might have for yourself.

          And, of course: this is statistics and game theory. Just as evolution doesn’t guarantee ideally optimal solutions but only trends increasingly towards more optimal solutions, the same happens with respect to morality. You can load the dice in your favor, but you can’t guarantee they’ll roll the way you want them to. Will you let the perfect be the enemy of the good and place all your bets on rolling boxcars several times in a row, or will you play the odds and bet on seven coming up more often than anything else? Or, more to the point, will you try to make money at gambling by being the most skilled card shark you can…or will you practically guarantee your gambling profits by opening a casino?
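          (As an aside, the dice arithmetic behind that analogy is easy to check. Here is a minimal sketch, mine rather than part of the original comment, assuming two fair six-sided dice: it enumerates the 36 equally likely rolls and compares a bet on seven with a bet on boxcars.)

          ```python
          # Illustrative sketch only: enumerate all 36 equally likely rolls of
          # two fair dice and compare the probability of a seven with boxcars.
          from itertools import product
          from collections import Counter

          counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

          p_seven = counts[7] / 36     # 6/36, about 0.167
          p_boxcars = counts[12] / 36  # 1/36, about 0.028

          print(f"P(seven)   = {p_seven:.3f}")
          print(f"P(boxcars) = {p_boxcars:.3f}")
          ```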

          Cheers,

          b&

      4. That summarizes my own stand I think.

        Though I had not made such a firm connection between theology and “is/ought” (through absolute morality), merely observed that there was a correlation between usage and apologetics. That is very useful!

        1. Yeah, there’s almost no move that theists make that doesn’t involve this lurking is/ought non-sequitur.

          As we know a standard ploy is to ground morality in God’s “nature.” When we ask “why ought we do as God does? (or God suggests)?” the reply is to appeal to God’s nature: “God’s nature IS The Good.”

          But hold on, they have just again simply provided another “is” statement without showing how they derive “ought” from it.
          They can’t special plead just for God’s nature.

          And if by “the good” they included “what one ought to do” then they are just begging the question. When asked to justify why we ought to do as God does, they are answering “because we ought to do as God does.” It’s just vicious circularity and question-begging.

          Then it can be pointed out that the theist is actually smuggling in the moral equation that Ben (and myself and others) appeal to.
          Why do they need a Personal Being at the center of morality, rather than just, say, a magical, eternal “rock” or “elementary particle?” It’s because persons have certain attributes: they have desires, goals, and rationalize about what actions will achieve those goals. You are going to HAVE to appeal to actions arising from God’s goals and desires to make any sense at all, and for him to actually be a model for OUR behavior.

          But then, once that has been admitted, the theist can’t special plead that this logic only pertains to God. If value, “ought” and “good” arise from the connection between goals, desires and what actions will achieve them, then that very equation applies to beings like humans. It’s not the “magical” parts of God that cause morality to arise; it’s actually the goal/desire-fulfilling traits God shares with us humans. So long as beings like us exist, morality will exist and God is superfluous.

          The Theist as usual gets things exactly the wrong way around. It’s not that humans have to have the traits of God in order to be moral, it’s that God has to have the traits of humans to be moral.

          1. Of course, the one-word summary of your post is, “Euthyphro.”

            There’s an even more pressing problem for theists, though. Especially considering that Jesus is the Good Shepherd and we like sheep, and considering the principal source of protein for shepherds and the primary predator of sheep (a primate, not a canine)…well, even if you grant everything the theists may claim about their gods, it still doesn’t tell you whether or not the gods actually have our best interests at heart, even if the gods themselves think they do.

            Ultimately, only you are even theoretically capable of determining what’s in your own best interests. Others may well be able to offer you guidance and assistance, but the ultimate decision ever remains yours and yours alone. Even if you do choose to trust another, it’s your responsibility for the decision to place that trust, and it’s your responsibility for the consequences of that original decision of trust.

            Cheers,

            b&

  17. We are saddled with “spirituality” for a while. It may be naturalized at some point to mean something more than originally intended, much as “wife”, “husband” and “spouse” no longer refer only to relationships between the genders. The word is part of our heritage, but that heritage does not limit it. In the meantime, it’s less cumbersome than “a valuing of the central intangibles essential to well-being.”

  18. Spirituality without religion is like saying that disbelief in god is a religion – can’t have it both ways.

    1. It all depends on the particulars of the definition of “Spirituality” you are working from. There are tons of words from our past that we still use today even though we no longer subscribe to the beliefs that originally inspired the word. I am pretty sure that when someone like Harris uses the word he means the mental / emotional states associated with the word, but not the supernatural or dualistic aspects.

      1. It all depends on the particulars of the definition of “Spirituality” you are working from.

        Yes. There’s a very good book by Antonio Damasio called ‘Self Comes to Mind’.
        In it he takes the words ’emotion’ and ‘feeling’, which I think most of us use interchangeably, and gives them distinct definitions. Under the influence of environmental triggers our brains release neuro-chemicals that affect how the brain works. The immediate effect is emotional. What he defines as feelings are the thoughts that the brain generates under the influence of the emotions. The important thing is that emotions are physiologically distinct from the ideas (feelings) that we associate with them and can exist without those ideas.
        Spirituality is one of those emotions, one of those neuro-chemical cocktails, and is available to us even if we don’t believe in any of the things that most people associate with it.
        One person explained to me that the aim of Zen meditation is to achieve a state of no thoughts at all, just pure emotion, which Damasio would understand perfectly.

      2. Mental/emotional states are just that. They don’t have anything to do with imaginary “spiritualism.”

        1. You’re the one limiting spiritualism to the imaginary. You are being intentionally obtuse to communicate your disdain for any concept of spiritualism. I get that. You’re welcome to it. I don’t have much use for the word myself. I’m merely pointing out that in reality, since other people don’t share your opinion regarding the meaning of the label “spiritualism,” your analogy was false.

  19. Threads like this remind me why Watchmen is my favorite book/movie. The question of who decides what the greater good is and what sacrifices to life and liberty are justifiable in the service of an “objective” morality remains a fascinating knot to try to untie.

    In Watchmen, the notion of masked vigilantes is harshly criticised because everyone has very different ideas about what morality is and isn’t, and allowing people the freedom to act on their own private moral calculations inevitably results in chaos as these different moral views come into conflict with one another.

    Personally, I find Sam’s main argument persuasive because I’m in favour of the pragmatic consequences it entails.

  20. As far as I am concerned, Born is correct, but in a way that doesn’t matter. One needs to put values in to get values out – he’s right about that.

    However, that’s a problem with any justification for ethics whatsoever. As a philosophical naturalist of sorts, a materialist, etc. I do start with the assumption that at least some humans are valuable. One has to agree on this or a similar starting point, or agree further downstream on specifics. But one cannot avoid it; this is Hume’s problem.

    Does this entail consequentialism? Yes, I think so, but that’s tricky. In my view it (long story short) entails a non-utilitarian consequentialism (see, for example, Rawls, for a discussion of *that* split) which can borrow the idea of “virtue” and “vice” and other matters in the virtue ethics tradition as consequentialistically interpreted shorthand for what “works” and doesn’t.

    My new problem here, if I ever get to it, is seeing if Kitcher is right that ethical sentences are non-propositional.

    Interestingly enough there’s a paper from years ago by Railton that tries to show Kant (!) is a consequentialist, really. I wish I could remember its title – I heard it at a UBC colloquium ~14 years ago.

  21. As a contestant (1 of the 400+), I was not impressed with Harris’s response for reasons that are much too long for this format.

    What I am curious to know is how you (Prof. Ceiling Cat) and others felt about his “clarification” that the moral landscape theory is descriptive rather than prescriptive in nature?

    Did/do you feel that moral claims (X is wrong) are simply about telling directions (X is not the right path to Y) with no moral imperative (X is wrong so you really should not do X)?

    It seems hard to believe most people would agree that their feelings and claims about the Taliban and Nazis are simply statements that those guys are taking a wrong direction… or that they simply exhibit “unflattering” dispositions… without any further meaning that they should change.

  22. To answer your query…
    “Readers, do you want to think of one?”

    If it were scientifically proven that a false/irrational belief in fairies who 1) heal the sick and give justice to victims of crimes in some after life, and 2) do not advocate violence against others or block scientific progress on any other topic, actually results in improved feelings of happiness, health, and long life, would you think it was wrong to maintain doubt about them or challenge that belief in others?

    If you would choose truth or honesty above proven greater health, happiness and longer life then you are choosing something as good or better without regard to consequence (beyond perhaps an aesthetic feeling).

    I definitely count myself as an “atheist” with regard to those fairies, regardless of a known placebo effect that might help me otherwise.

    1. I think your example simply fails to take into account the full range of (possible) consequences. Opting for honesty in the face of comforting and superficially innocuous delusion may entail subtle or beneath-the-surface desirable consequences, or consequences that will pay dividends over time.

      You also have to take into account the subtle, beneath-the-surface, or long-term damage that might be done by espousing or allowing the delusion.

      Faith is a good example. Some people claim they need it in order to cope. We could say “alright, fine, doesn’t seem harmful”. But down the road you wind up with children dying of curable diseases because of it.

      1. Your point is taken, however in my hypothetical I was trying to avoid any other possible consequences.

        We could adjust the hypothetical to say that the fairies only work when you have tried/are already trying all possible conventional means… in other words, add prayer on top of the best scientific methods. Then there is no chance of reliance.

        Or it can be made generic…

        If it were found that a false belief in X led to greater happiness, health, and longevity, and with no negative consequences possible from that belief, would it be wrong to maintain disbelief and encourage disbelief in others?

        If you don’t think it is wrong, then you value truth/knowledge over practical consequences.

        1. In defining your hypothetical as entailing no negative consequences, you’re still appealing to consequence.

          If we say “go ahead and believe”, it would be because we’ve determined that nothing bad will result, which is a type of consequence.

          If we say “no, let’s deal with reality”, it would be because our experience teaches us that honesty is the best policy. We may not know what unforeseen ills could result from the belief.

          The value we place on truth is born of witnessing, over time, the consequences of being honest or dishonest. Valuing truth (or anything) is not something you can do without reference to consequences.

          1. I think you are missing the nature of a hypothetical. I grant you that a person’s initial valuing of truth could come from experience, learning that it leads to better consequences. The point is that people can then come to hold that value independent of outcomes.

            But since this particular hypothetical is causing problems let me shift to another.

            Say there is a major disaster and humanity is pushed into a serious bottle-neck for survival. It may be that survivors are faced with the options of (take your pick) cannibalism, killing children, or forcing women to get pregnant in order for humanity to survive. It is valid for a person to choose to do none of those even if it means humans will come to an end. They can legitimately believe that a world which requires such acts is not a world worth saving. That is choosing a value over a consequence.

          2. No, this is still consequentialism.

            In choosing “to do none of those”, the person is considering the consequence of having to live in a world where cannibalism and rape prevail, and also considering the consequence of having to live with him- or herself after committing one of those deeds. If the person chooses not to do those things, it would be because s/he prefers the consequence of a declining human population to the consequence of becoming a cannibalistic murderer or living among other cannibalistic murderers.

            The only way you can ever get to a value is by looking at its effect(s) in the world, and those effects may be primarily psychological (eg, not being able to live with oneself after opting for cannibalism), but they are effects nonetheless.

          3. Ok, we actually agree for the most part. If you lump esthetic consequences (psychological as you put it) in with direct, practical consequences then everything falls under the umbrella of a concern for consequences. I would agree with that.

            However, that move does not solve the problem of differing systems entirely. It just makes different ethical systems a subset of consequentialism. So value ethics would tend to put a priority on esthetic or “psychological” consequences, over the practical consequences prioritized by systems like utilitarianism (or Harris’s theory).

            I’m fine with that reset, but I think it is not necessary. I’d argue that when a person values purity (not eating another human, for example), even if that value is based on not being able to live with oneself afterward, it means that the set of results stemming from that value is of greater importance to them than all other types of consequences. The “pure” consequentialist would not necessarily set one group of results higher than the other based solely on the method used to achieve it. In fact, in theory, they can’t. The traditional consequentialist is supposed to prioritize maximizing one goal, being blind to the route. I.e. if you think eating another human is wrong when it is the only way to save mankind, then you are being immoral, because the latter is much more important (overall) compared to any discomfort you would feel.

          4. As a clarification, my first sentence was not supposed to be an insult. It should have read “the hypothetical” not “a hypothetical”.

            You had a nice response, but I thought it missed the point. Specifically your second and third paragraph. Choosing to believe WOULD be based on consequences, but the nature of the hypothetical precludes the “let’s deal with reality” being based on consequences because the known results here are definitively negative. That might not be realistic, but it doesn’t have to be.

            Anyway we can use the other hypotheticals. I just realized I might have come off as dismissive and condescending and that was not my intention!

  23. People who use the “is/ought” argument never seem interested in knowing what an “ought” is. They treat it as though it’s just an irreducible thing, not to be questioned. But I’ve always found that it makes sense if you cut it down to, say, hypothetical alternative futures comparable by the presence of good, bad, and neutral to varying degrees in each one. Then “ought” just collapses to the question of the existence of a good or bad.

    And the most compelling examples of those are conscious experiences that, no matter what you believe, will be bad, such as pain, suffering, and distress. Pain is still unpleasant, even if you somehow lie to yourself that it isn’t. And, of course, since conscious experiences don’t happen in a vacuum but occur in a real world, any sentient creature has to pay attention to the details of its environment.

    I don’t know what philosophy that is, but I think a test of moral objectivity would be that it had to be true regardless of the content of one’s beliefs. Basic sentient experiences seem as good a candidate as any, in time and space.

    1. Sam makes the same point:

      “To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery. “You shouldn’t lie” (prescriptive) is synonymous with “Lying needlessly complicates people’s lives, destroys reputations, and undermines trust” (descriptive). “We should defend democracy from totalitarianism” (prescriptive) is another way of saying “Democracy is far more conducive to human flourishing than the alternatives are” (descriptive). In my view, moralizing notions like “should” and “ought” are just ways of indicating that certain experiences and states of being are better than others.”

      1. And it is precisely by making the “better” claim in there that there are values in, therefore values out. See my earlier reply, however, because I agree with those who think Harris is missing this, and hence the philosophers who criticize him as missing it are correct. But the latter are also incorrect, because it doesn’t matter, for basically a generalization of the reasons in the _Euthyphro_.

  24. Psychology has been spending the past few decades developing theories of human flourishing; obviously they’re not complete or fully justified at the moment, but the various theories exist.

    The theory I’m most familiar with (due to 1st year of college) is humanistic psychology, predominantly from the work of Abraham Maslow and Carl Rogers.

    Maslow created the Hierarchy of Needs which is a simplified vision of human wellbeing. I think it makes intuitive sense, but as a scientific concept it’s kind of lacking. However, he made many testable predictions arising from this theory so that left lots of room for refinement.

    Rogers developed his theory of self-actualization, and essentially tried to define eudaimonia/maximal human flourishing in terms broad enough to include the values and aims and drives of people from all backgrounds.

    This type of work is not science in the same way that physics or biology or neuroscience is, but once the philosophical musings give rise to testable predictions then we will have a genuine science of morality on our hands. First we would need a source of funding for this type of research, however…

  25. I thought the original challenge was about whether Sam Harris can derive morals from science, not whether his ethical system is the best? A lot of this post argues for the latter, but it does not change the fact that the former has to be answered in the negative.

    Indeed, reading between the lines, Harris pretty much concedes that, because he argues that while he has to presuppose values, everybody else also has to do the same. Yes, but the point was that he has to do it. Guess we are all agreed then?

  26. As far as I can tell, there are three concerns in figuring out how to live one’s life “morally” (or how to live with regard to anything, really). And I still can’t see how Harris’ book is supposed to help with that.

    First, the three concerns are:
    1. Figure out what you want. This includes all of your urges, needs, long-term goals, things you desire to make yourself happy, and things you desire to make others happy.
    2. Figure out what you or others have to do to get what you want. When it comes to social matters, you might ask yourself questions like “how should I comport myself in order to have the types of interactions I want to have with other people?” When it comes to societal matters, you might ask “what kind of government is conducive to creating the kind of society I want to live in?”
    3. Figure out how to motivate yourself to do the actions prescribed in #2. This is seldom mentioned, but this is the entire problem of akrasia. It is entirely commonplace for a person to say “I should do X” and “I want to do X” and “I’m going to do X”, and yet still not do X. The best intentions mean nothing if you can’t get your behavior to match them.

    So what is The Moral Landscape meant to help us with? The major thrust of the book seems to be “Did you know that some actions harm people? Did you know that we can measure harm using science?” I think both of those things are fairly obvious (if someone provides an operational definition of X, you can usually measure X). But at the end of the day, Harris said relatively little about what he wants, or how to get it. And the subject of akrasia has been left completely alone, despite the fact that it is such a large factor in shaping human behavior.

    Ultimately, it seems like Harris is hung up on coming up with something “objective” so that he can win arguments with people who don’t share his moral desires. I honestly can’t see what the point of the book was beyond that. But if anyone has some better ideas, please share.

    1. Harris has spoken on this concern many times. Part of the thrust for clarifying that science is and will be the domain for morality is getting people to realize that it’s not in a Deity. Of course out of this has spilled arguments among secular, humanist, and philosophical communities – but in the end it was about getting people to question the turning towards certain books or divine figures for authority. If you don’t think this is critically important, then you’re unfamiliar with the data about what percentage of our country (and others) believes in rigid interpretations of the Bible. Or in other words – millions of people, whether they know it by name or not, really do believe in divine command theory.

    2. Since you asked:

      I think the book’s use reverts back to the only relevant use of philosophy, albeit I called it “flimflam” elsewhere here (as it is always tempting to generalize from how it mostly behaves re knowledge), as per Krauss. It is a useful basis for ethics re jurisprudence. Since I, like Jerry, have a problem seeing that the claimed empiricism is present in reality, I think Harris’s claim is philosophic.

      As per the rest I would agree, except of course that “akrasia” doesn’t seem to exist except for philosophers. People have innate moral reactions, so there isn’t any basic motivation problem. The ethics problem that de facto remains is then to construct a simple jurisprudence rule system that covers innate reactions and intentional deliberation both.

      [But as I will note elsewhere here, I now have concerns that his book is counterproductive re today’s jurisprudence.]

    3. Sam explicitly wrote, in the rebuttal under discussion, that it wasn’t his intention to come up with an exhaustively prescriptive moral system.

      At a minimum, I think the utility of his project is in proposing that it is legitimate to try to wed morality and empiricism, rather than leaving it to armchair philosophers or, worse, theists.

    4. Sorry for the slow reply; I’ve had zero time to think about this since last week.

      jefscott and musical beef: If Harris’ primary concern is getting people to realize that morality is not in a Deity, then I would think that all the talk about “objectivity” isn’t necessary. Rather, one could underline the fact that there is no version of morality that does all the work you want it to. For example, if a theist does certain things because “god said so,” then they aren’t doing what’s “right”; they’re simply following orders.

      I also think there are better ways to impress upon people that morality isn’t some otherworldly thing existing in a metaphysical realm. The best thing I’ve seen on this is Section 2.2 here: http://raikoth.net/consequentialism.html

      Torbjorn: “People have innate moral reactions, so there isn’t any basic motivation problem.”
      I don’t think that’s true at all. It may be true in the moment, like when you see an innocent person under attack and you want to save them, but it isn’t true for longer-term concerns. Say I have a long-term goal to decrease the frequency of female genital mutilation in the world. That’s a big subject to think about. How should I best go about doing this? Donate money to human rights organizations, write articles against FGM, contact my politicians? And then once I’ve decided what to do, how do I get myself to actually write that article, rather than slumping on the couch after work and watching TV? Connecting our long-term goals to our in-the-moment motivations is essential to getting anything done.

  27. Let me try my hand at “an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.”

    Consider any two moral scenarios (actions), both of which create the same amount of well-being.

    However, one of the scenarios (actions) requires that you fulfil a promise that you made to someone.

    Further, you can only attempt one of these moral actions, not both.

    It would seem that there is no consequentialist reason to attempt one of the actions over the other, since they both provide the same amount of well-being.

    You might as well flip a coin.

    But doesn’t the fact that you made a promise oblige you to attempt to fulfil that promise over and above attempting the alternative action (scenario) available to you?

    If you answer ‘no’ then you would have to think that the sole value of fulfilling a promise can be cashed out in terms of its consequences.

    If you answer ‘yes’ then you would have to think that at least part of the value of fulfilling a promise cannot be fully accounted for by its consequences, but by the simple fact that you made the promise.

    1. Fulfilling the promise increases the well being of the person to whom you made the promise. If it doesn’t then you were foolish to make the promise, thereby decreasing your well being.

        Yes, fulfilling the promise will increase the recipient’s well-being, but that has already been accounted for when I said that both scenarios create the same amount of well-being.

        1. “The same amount” – what are the SI units for well-being, I wonder?*

          Fulfilling a promise does nothing, directly, for the person it was made to. Zero, in whatever units you choose. It’ll make you feel better about yourself, and through increasing trust it may create opportunities to do more good to others at a later time.

          * This is fundamental: there aren’t any, it’s entirely arbitrary and personal. Sorry, all attempts to exactly titrate well-being are doomed. IMO

  28. I would propose the following is an action that is made without reference to consequences for the one making the decision or for anyone else. That action is the act by an atheist to let go into the final moment of his/her existence, i.e., the moment separating life from death… to accept it fully, to embrace it for it is the end. It is self-affirmation, life affirmation even one might say death affirmation but without benefit or consequences to anyone even the one doing this action.

    1. I agree with the concept of your argument. Further back in the thread I had proposed something similar though a bit sharper in detail.

      If it were proven that a false belief in (wholly benign) mythological entities could lead to greater happiness, health, and longer life, would you choose to believe in it or not? More important would it be “wrong” to try to spread your doubt to others?

      If one chooses the truth (atheism toward the entities) as more important, despite all the harm that will come, one is not basing moral choice on consequences.

  29. I think Sam did an outstanding job of replying to Born’s challenge, particularly in terms of clarifying his position.

    That said, I think a lot of people, even those who essentially agree with Sam, intuit or recognize that one of his thesis’ greatest strengths also seems to be its greatest weakness: its inexactness.

    Sam’s “well being of conscious creatures” casts such a vague, wide net that he can argue it can cover practically anything you can bring him, including subsuming conclusions of all sorts of other value theories (e.g. deontology, virtue ethics, even Divine Morality!). But that Sam can re-cast all of these as showing a concern “somehow” for the well-being of conscious creatures indicates just how vague and elastic his concept is to begin with. There seems to be a lot of “but THAT TOO would be seen in the context of affecting our well being” but very little delineation of his concept to suggest how to resolve many of the dilemmas in moral theories. (Because you can re-cast almost anything in another moral theory as ‘being pertinent to the consequences to the well being of conscious creatures’.)

    To ever get to normative use for us, Sam will have to finally zoom back in to some way of actually answering many of the thorny questions moral philosophers, and people in general, actually argue about.

    Sam often admits this problem of the vagueness of his thesis…that his main motivation is to at least get people to see and admit that moral questions are in principle empirical questions. But there are statements by Sam that suggest he is not fully cognizant of some problems in his thesis. For instance, Sam has claimed that to ask “Why would the worst misery for all conscious beings be ‘bad’?” is to ask a nonsensical…even STUPID…question.

    And yet it IS an intelligible question and one at the basis of much moral disagreement.
    We can agree it is bad but answering WHY it is bad is the problem that Sam seems to be skipping. Sam sometimes seems to equate “happiness” to “well being,” but “happiness theory” (the idea that happiness is what we seek, or of ultimate value) has a number of problems (it fails to explain certain elements of human experience). But then Sam may want to say “Ok, then if happiness theory does not capture all of well being…then whatever other theory that also captures part of well-being can be part of the equation.” But then, what has “well being” actually described? It seems that it’s open to what OTHER moral theories have argued about what constitutes value, and it hasn’t settled much. And the very fact that we STILL have to muddle through these questions of what value theories may be delivering pieces of the “well-being” puzzle shows both that “well being” doesn’t seem to have solved a lot, AND that to ask “WHY exactly is the worst possible misery for everyone a bad thing?” IS an intelligible question that we need to answer in order to sort this out.

    Further, I think certain value theories get deeper and more specific than Sam’s. I’ve gone on a bit about one of them before: Desire Utilitarianism (and similar theories), in which “desire fulfillment” is identified as the source from which value arises, not mere “well-being.”
    Hence to answer Sam’s question: the reason the worst possible misery for everyone would be “bad” would be because this means everyone’s desires would be thwarted, and desire-thwarting is “bad.”

    Sam can’t say “well, if it is the case that desire-thwarting is bad, then that is just subsumed in my concept of well-being, since we’d say that part of well-being entails fulfilling desires. No problem.”

    But it would be a problem, because if the desire-theory is correct, it is more fundamental than Sam’s theory: it would identify how value arises, and how “ought” arises from facts, in a way that Sam’s theory has failed to pinpoint. (And all that isn’t to get into defending Desire theory – it’s just an example that Sam’s question isn’t in principle unanswerable).

    But I still think Sam’s reply was excellent.

    Vaal

    1. I’m still not sure why people are saying Harris’s reply was “excellent”.

      You have just pointed out some key concerns with his theory that were addressed (if nowhere else) in the entry I made to his contest, and in a full response to his book at my site (to which he had been invited).

      In short, he wrote a 238-page book advancing a moral theory. He then offered a contest in which challengers had to limit their responses to 1,000 words, promising that it would extend into a dialogue. He took one essay, which according to the judge’s declaration focused primarily on one issue (Blackford listed other subjects brought up in other essays). Harris then produced a 4,000+ word response to that single essay, which, while claiming to address other criticisms as well, did not touch the ones made in actual entries (such as your stated concern)! And, it should be added, with no chance for further dialogue by anyone!

      Imagine that an ID theorist had done the same thing (on the topic of evolution), my guess is most scientists and atheists would view this (intentional or not) as unfair and far from “excellent”.

      To my mind this was shooting fish in a barrel. Almost any half-competent author would look “good” given this lopsided handling of criticism. I would not have entered the contest if I had known this is how we would be treated.

      As a contestant, I am planning a response to his “clarification” and trying to figure out a way to give a platform for the rest of the 400+ who may have had an argument he did not touch. There are still many problems with his theory, and only through dialogue can they be resolved.

      1. You have some powerful numbers there, brandholm. While I was just saying (above) that there are no SI units for well-being and you can’t titrate moral decisions, I reckon that page numbers may be as good a measure as any other.

  30. My observations on the two main issues:

    Idea #1:

    I’ve now read (and blurbed) Sam’s new book, Waking Up: A Guide to Spirituality Without Religion, which is a provocative synthesis of neuroscience, spirituality, and Sam’s own adventures in meditation and drug-taking. I recommend it, though I told Sam that I thought he’d get pushback from that segment of the atheist community (not including me) who cringe when they hear the word “spiritual.”

    Of course I cringe, but I do so for good reasons.

    If “spiritual” is a feeling akin to scientific “awe”, many will experience it and some will not. That doesn’t mean the experience is useful in itself. Say, experiencing panic may or may not be useful. In modern societies mostly not.

    Likewise meditation and drug-taking may or may not be useful. Say, looking at visual hallucinations may or may not reveal visual mechanisms in the hands of a thorough and clever experimenter. Most drug experiences will not be useful, and most laymen will not contribute, and that is that. I don’t think I have heard of meditation having any verified neuroscience effects at all, by the way, akin to experiences like daydreaming or idle fishing/knitting.

    Harris’s idea strikes me as Botton’s idea of churches for everyone. Who needs it, why would we need it, who would at the current state of knowledge want to waste time with it?

    If Harris can fix those outlying points I won’t have any good reason to cringe anymore.

    Idea #2:

    my case for a scientific understanding of morality

    I don’t see any such case, nor do I see how analyzing the flimflam of philosophy would make Harris’s idea empirical. As Jerry says, one can’t measure (“adjudicate”) well-being at this time. That is because how humans fare socially is complex and so usually left undefined. The idea won’t be useful. (In that way. But see below.)

    The idea won’t be scientific in fact. At best it is a self-refuting hope to be objective. I’m tempted to say that it is framed as a deepity ironically being based in misunderstanding science, but at this point I really need to read the book first. :-/

    So, given that it isn’t clear that there is an objective morality* at all, is it helpful to know that this is the best we can do for a system of morality? Not that I can see.

    Summing up, leaving it as a philosophical system (since we can’t ground it in observation) to be used as a basis for jurisdictional ethics seems the obvious (but perhaps arguable) outcome. It may not be objective, but it sure seems to aim at being impartial. (However one measures and defines that. =D) And maybe it is then the best at what it does.

    *I still think there is an objective morality to be had in the form of observations of innate and learned moral reactions, reactions to social situations involving moral behavior. It is evolutionary psychology, at a guess.

    The question, then, is whether there is a mapping between such reactions and social measures, such as “what is best for the society” (well-being, perhaps). I doubt it. The evolution part, and some of the cultural part in long-lived societies, would be geared for differential reproduction. Famously, that doesn’t care for the well-being of others outside of kin selection; it cares about the well-being of individual “selfish” genes.

    1. Since I posted this I have read Ben Goren’s initial response. I am now worrying that perhaps Harris’s utilitarianism is counterproductive re individual rights and freedoms. So I’ll have to strike my “obvious” outcome, it was highly arguable after all.

      Okay, that kills off Harris’s last two books for me.

    2. “Botton’s idea of churches for everyone”

      I wish people wouldn’t parrot this line since it’s the creation of sensationalist media rather than what de Botton actually said and later clarified. Basically de Botton thinks it’d be good if more contemporary architecture took some visual inspiration from cathedrals and the like and the media misinterpreted this as, variously, “de Botton thinks we need atheist churches” (merely wrong) and “de Botton plans to BUILD atheist churches” (egregiously wrong).

  31. How about that? Sam writes, at the very close of his rebuttal:

    “There is only what IS (which includes all that is possible). If you can’t find your oughts here, I can’t see any other place to look for them.”

    Three-and-a-half years ago I wrote this, one of my first comments here at WEIT:

    “Whether or not scientific investigation will ever elucidate how we derive an “ought” from an “is” (and you must admit, there must be at least a few “oughts” – don’t we spend an awful lot of time bemoaning the regular violation of certain “oughts” by the religious right?), don’t we have to concede that inasmuch as “oughts” exist, they must have derived from an “is”. All there is is “is”. As Sean Carroll writes (defining physicalism): “All that really exists are physical things.” “

    *polishes fingernails on chest*

  32. Jerry Coyne says that the number of responses to the Challenge tells us “just how seriously people take Sam’s views.” That’s not true. I don’t take Sam’s views seriously, and I wouldn’t assume that most, let alone all, of the respondents did so because they take his views seriously. What they presumably take seriously is the opportunity to get published on his blog, earn $2,000 and possibly, just possibly, change his mind. What I take seriously is the fact that so many people take Sam Harris seriously. I wrote my essay because I think his views are not worth taking seriously, and I think there is a serious problem with the way so many people follow him.

    By the way, my essay was not entered into the competition, because I didn’t learn of the competition until after the deadline. But I wrote one anyway:

    http://specterofreason.blogspot.com/2014/02/the-moral-landscape-challenge.html

    I think it gets to the heart of the matter better than any other response to Sam I’ve seen. I think it deserves a response, but I’m not holding my breath.

    1. I dunno; it takes a lot of work to write a good response, and $2,000 wouldn’t motivate me to do so. Sam Harris has held contests before inviting people to refute his ideas. He wants to learn from what other people say, and crowd-sourcing in this way is, IMO, pretty effective. You may not agree with Sam Harris, but the fact that he listens to the masses and is open to learning from them says a lot about his character.

      1. I wrote mine in a day. You tell me if you think it’s good. And $2,000 is a big commission for an amateur writing competition. You call this “crowd-sourcing” and “listening to the masses.” I’m not convinced. Sam read only one of the more than 400 essays; Russell did the vast majority of the work. But now Sam is in a position to say that he has responded to the best that is out there. I beg to differ. He hasn’t responded to me, and I’m sure there are other objections that are at least as good as, if not better than, mine.

        1. He did update his essay Lying in the same manner (see here), for which he crowd-sourced reader input. IIRC, he updated that essay quite a bit, and people were awarded the prizes he mentioned.

          1. My point was, as it was in the original response, that despite what you might think of his work, it speaks to his character that he listens to what people say and addresses or updates his work. You had replied that he did no such thing, so this reply is not beside the point.

          2. I didn’t say he doesn’t ever listen to his critics. I’m talking about this situation. I don’t think “crowd sourcing” and “listening to the masses” are accurate descriptions of what he has done here. In any case, I don’t think that’s what he necessarily should be doing anyway. He should be engaging with the literature, not playing games with his followers.

          3. The idea that he shouldn’t be engaging with his readers is nonsensical and silly. Sam has stated many times that his primary goal is this: to change the minds of people who believe that morality comes from a supernatural authority or certain books. Now, as we know, that enterprise has spawned a nuanced debate about whether it’s “science” or “philosophy” that determines value statements, but this discussion is secondary. He has repeatedly stated that his efforts are truly meant to change religious people’s minds, even though it is a daunting task and a hard-fought battle with every individual.

            So I would guess that you are unfamiliar with Sam having stated this, in person, in front of crowds on many occasions (try YouTube), because otherwise it’s not entirely coherent to suggest that he shouldn’t be spending time engaging with readers. That is how you change religious people’s minds. Getting academia, which is more likely to be secular anyway, to agree that it’s “science” instead of “philosophy” is not.

          4. I did not say he shouldn’t engage with his readers. My point is that if he wants to find the best arguments against him and further his understanding, then he should engage the literature. He should try publishing in peer-reviewed journals. Consider the double standard here: somebody claims to have no need for elaborate engagement with a field of specialists and claims that the vast majority of said specialists are simply wrong. If that field were physics, chemistry, or biology, you would laugh them out of the room. But if it’s philosophy, you defend their character, because they “crowd-source” amongst their followers.

    2. Wait… Other people were only motivated by the hope of winning $2,000, but you wrote a response without any possibility of a cash reward?

      Sounds to me that you take his views seriously even if you don’t like them.

      1. Actually, part of me did hope that my essay would still be considered for some monetary reward–I even emailed Russell just to see if there was a chance–but mainly, I hoped (and still hope) my essay would help people see through the bad arguments that pass for “informed” philosophy in places such as this.

        1. Then why not assume that other people who responded were motivated by a similar desire to make what they think is a good case to convince others of their view?

          1. What they presumably take seriously is the opportunity to get published on his blog, earn $2,000 and possibly, just possibly, change his mind.

            That sentence says “The other respondents were in it mostly for the money and blog-publicity.” Perhaps you didn’t intend it that way, but that’s how it reads.

          2. I did intend it that way. But you’re not thinking carefully enough about what that means. Do you think somebody would want their arguments published and celebrated if they didn’t think they had something of value to say? It goes without saying that anybody who entered the contest wants to change people’s minds. They want to be heard. That’s OBVIOUS. What is not obvious is that the majority of participants actually respect Sam’s views. Maybe they do, maybe they don’t. The mere fact that 400+ people entered the contest is not sufficient evidence to draw any conclusions. That is my point: Coyne drew an illogical conclusion. That’s all.

          3. When you demean the arguments of others by saying they are simply motivated by money (compared to your own presumably noble motives), you poison the well. The same goes for telling others that they aren’t thinking carefully enough. Such comments provoke the kind of response that the roolz prohibit. I’ll disengage now.

          4. You see what you want to see. I did not say “they’re only in it for the money,” and if you insist on such mischaracterization, it is just as well that you disengage. My point was plain as day. Instead of addressing the actual point–that Coyne’s conclusion was illogical–you insist on misreading me. Fine, disengage. No loss.

  33. He says, “To say we “should” follow some of these paths and avoid others is just a way of saying that some lead to happiness and others to misery.”

    People, think about what the English language would be like if “ought” and “should” meant what Sam Harris thinks they mean. Imagine this conversation:

    Carol: We should do x, y and z.
    Lucy: Why?
    Carol: Well, if we do x, y and z, it will lead to happiness. It will maximize well-being.
    Lucy: Oh, you’re right. Okay!

    That’s a simple, common-sensical conversation, right? Nothing wrong with it.

    Now imagine a world where people spoke the way Sam Harris thinks they do:

    Carol: We should do x, y and z.
    Lucy: Why?
    Carol: Well, if we do x, y and z, it will lead to happiness. It will maximize well-being.
    Lucy: I know what “should” means, Carol. Geez. Why are you lecturing me on the definition of “should”? I’m not a child. I’m asking for a REASON!

    See, if Sam Harris were right, then you could never appeal to the maximization of well-being as a REASON for doing anything. You could never say you should x BECAUSE IT MAXIMIZES WELL-BEING. You’d have to give some other reason. But what sort of reason could you give?

    Hey, Sam. Why should we maximize well-being? Oh wait. That question doesn’t make any sense in Harris-speak. And yet, for the rest of us, it is a perfectly natural question.

    1. Well, no. That assumes that a tautology is automatically known, which doesn’t necessarily hold true for any particular state of knowledge. I can talk about water all day without once bringing up that it’s H2O, yet our best current science shows that water and H2O are the same thing. Bring that up with Thales, and he’d insist that water was the basal element that made up everything in the universe, so it could not itself be made of anything more basic.

      In any case, you don’t seem to be distinguishing between ethics and metaethics. Harris isn’t proposing a method to achieve goodness (ethics, principally normative), but is trying to pin down a metaethics that explains what goodness is. Turning to that and asking why it’s good presupposes that you have some other tangible metaethical theory in mind by which you judge this one, yet you neglect to put your cards on the table and show us what that theory is. You have not provided grounds for thinking that the question is meaningful, and you certainly can’t claim that being able to pose it makes it so. I can ask what’s north of the North Pole, but that doesn’t mean I have a good reason for asking, regardless of whether it’s natural to ask or not.

      Linguistics is hardly sure ground for rebutting others’ ideas. If it was, physics would never have progressed beyond the medieval impetus theory that came before Newton.

      If I had to complain, Harris’ concept of well-being seems needlessly vague, since he doesn’t do enough to point to examples in the world that would clarify it and thus prevent it from being applied so loosely as to be practically unhelpful. For instance, what of people who feel bad about feeling good about doing bad (recursive emotions), or people who fear something despite that something being harmless (arbitrary emotions)? Does well-being extend only to oneself (solipsism), to anything sentient, or to anything at all, sentient or not, and how do we prioritize these in moral decision-making? He should also be more comprehensive when he tries to show how rival metaethical theories both fit into his framework and don’t necessarily undermine it. Lastly, he doesn’t do nearly enough to strengthen the consilience between the mind sciences and ethics that his book’s subtitle calls for, nor does he disentangle the various meanings of objective and subjective that would prevent further confusion on the issue.

      1. If you want my best argument, read my essay. But I’ll defend my comments here as well, as time permits.

        I was not assuming that a tautology under any description is automatically known. You must have misunderstood something I said.

        By the way, the difference between ethics and metaethics is obvious. I have no idea why you think I’m denying it. But you seem to be saying that the question, “Why should I maximize the well-being of all conscious creatures?” is, in fact, meaningless. I think the question is asking for a reason which could be used to justify the maximization of the well-being of all conscious creatures. You don’t need a metaethical or ethical theory to see that. A metaethical theory could tell you about the properties of a valid justification. An ethical theory could tell you whether or not the reason given was sufficient. But the meaning of the question is the same regardless.

        You say, “Linguistics is hardly sure ground for rebutting others’ ideas.” Yeah, you’re right. Language is irrelevant. We should ignore the meanings of our words. When somebody proposes a use of words that goes against the way they are commonly understood, we should not point that out. We should not accuse them of changing the subject. Indeed. And by the way, tell that to Sam Harris next time he accuses Dennett of changing the subject because he doesn’t like the way Dennett uses the phrase “free will.”

        Sam Harris is not talking about morality. He’s talking about something else. He’s changing the subject. If you don’t want me to point that out, then tough crap. I’m pointing it out. You don’t need to comment if you don’t find it interesting.

        1. “I was not assuming that a tautology under any description is automatically known. You must have misunderstood something I said.”

          You seemed to be saying that, if Harris’ point about “good=maximizing well-being” is correct, it would be obvious to anyone that they were synonyms (hence the “I know…” bit in your second example). My point was that this isn’t necessarily true, and that “x=y” does not mean people who are familiar with x will pin down y. In context, I said it’s incorrect to suggest that the way we currently use language to talk about “shoulds” and “good and bad” poses any strong rebuttal to, say, Harris’ claim that a “should” boils down to a question over well-being. That logic would also lead us to abandon counterintuitive physics ideas, such as Newton’s law that objects keep moving until something interferes with them, rather than the commonsense view suggested in our use of language that things have an impetus that dissipates naturally, tending towards rest.

          “By the way, the difference between ethics and metaethics is obvious. I have no idea why you think I’m denying it.”

          I never said you were. I said you haven’t clarified what kind of question you’re asking (ethical/normative or metaethical), which makes it hard to tell why you think your challenge is a substantial one.

          “But you seem to be saying that the question, “Why should I maximize the well-being of all conscious creatures?” is, in fact, meaningless… the meaning of the question is the same regardless.”

          Any particular ethics, principally the normative kind (as I indicated in the post you replied to), presumes a metaethical theory to begin with; otherwise it’s empty. Your question is not the same regardless of which meaning you pick. If you were asking on normative grounds, then you are committing, however loosely, to a rival metaethical theory. But you can’t challenge a metaethical theory by asking for a norm, because the norms are supposed to derive from the metaethics; you’re supposed to challenge it in terms of another, rival metaethical theory, an equal. The challenge otherwise makes no sense. It’s like trying to rebut a definition of life (a foundation of biology) by asking how it’s supposed to capture life forms (a method of proceeding that relies on distinguishing life from non-life).

          If you were, in fact, asking on metaethical grounds (challenging Harris’ claim to have solved the issue of what goodness is, principally), then you need to do better than claim that it’s possible to doubt it, which is effectively all you’re doing. Harris’ well-being criterion can be rebutted by the usual forms of rebuttal: internal and external contradictions, lack of positive proof, failing to point at anything tangible in the real world, and so on. But in that case, it’s not clear what you mean by asking for a reason. Are you looking for a real-world explanation of how goodness and badness arise from otherwise morally neutral physics, or for an appeal to your self-interest? By analogy, if someone says water is H2O, we don’t make any progress by pointing out that people who talk about water don’t mention H2O, and may not in fact know that this is what water is. Without clarification, the question is meaningless.

          “You say, “Linguistics is hardly sure ground for rebutting others’ ideas.” Yeah, you’re right. Language is irrelevant. We should ignore the meanings of our words. When somebody proposes a use of words that goes against the way they are commonly understood, we should not point that out.”

          Except your point wasn’t doing that (at no point do you even explain what “should” is supposed to mean), so your mischaracterization of my quotation’s meaning is simply enthusiastic incorrectness. You were criticizing Harris’ metaethical theory with a thought experiment that treated it like a normative claim, which isn’t solid as a metaethical critique, all while presupposing that the way we use language is a valid critique of the “goodness” = “well-being” idea. My counterexample was aimed at pointing out that this standard doesn’t work, because language use isn’t totally reliable. I never said it was irrelevant, so frankly you’re either misinterpreting me or lying.

          I don’t agree that Harris is correct when he says Dennett is trying to change the subject – as far as I can see, their disagreement is largely a question of word choice – and I don’t agree that you are correct when you claim Harris is changing the subject, though for different reasons.

          “Sam Harris is not talking about morality. He’s talking about something else. He’s changing the subject.”

          His metaethics is too rough and incomplete, granted, but I hardly think trying to pin down what good and bad are is somehow a distraction. Metaethics and morality are hardly disparate subjects. Why else would Harris spend a chunk of his book trying (key word) to explain how it gets around Moore’s argument?

          On a final but strictly unrelated note, the tone of your closing paragraph makes quite a few unnecessary assumptions about my motives, not least of all that I posted because I don’t “want” you to “point out” something I disagree with (implying bias), and that my post is therefore “unnecessary” because I’m not really “interested”. It won’t kill you to consider that I might simply disagree with your view. Personal indictments don’t give your posts any authority and are totally uncalled for, and I don’t appreciate being talked down to in that manner, especially by someone trying to discuss a subject as uncertain as ethics. May I ask that you refrain from them in future replies?

        1. hahaha… welcome to the club. I’m an anti-realist (though I believe that is a realistic view of moral judgments) and sort of a virtue ethicist.

          I was also a contestant. I did not take his theory seriously, but I was concerned that others might, due to his popularity. So I wrote a full response to his whole book before the contest was ever mentioned.

          I entered his contest due to the tempting mix of money, fame, and the guarantee that there’d be a full debate with the winner. 1,000 words was way too little for a full review of the errors in his system and his attacks on relativism.

          Harris came off cheap to my eye. He clearly did not read everything submitted, yet he acted as if he was responding to everyone.

          Farther back in this thread I noted that if this had been done by an ID theorist (against evolutionary biologists), that person would not have gotten the same pat on the back Harris is getting.

          By the way, I will look at your essay. My essay and full response are on my site (the link is in my name). I am preparing a response to Harris’s clarification, and am trying to figure out a way to gather the arguments he did not address from the rest of us contestants. Put ’em in one place.

          1. Well, Sam never said he was going to read any but the winning essay. But he did say he would publish Russell’s critique of his response to the winner and he said we should expect an extended debate with the winner. So he didn’t follow through exactly as planned.

            I look forward to reading your essay. The main points of mine are as follows:

            1) The “worst possible misery for everyone” thought experiment is just an intuition pump that makes people feel better about assuming that “moral goodness” is just a matter of maximizing well-being (or avoiding suffering). It is not actually an argument and, as an intuition pump, its impact is limited.

            2) Harris does not have a plausible argument for treating moral “ought”-statements as statements about the maximization of well-being. His best argument is one from evolutionary biology, but this goes against everything we know about evolution.

            3) Harris’ argument that morality must be grounded in conscious experience is not persuasive. Evolutionary theory tells us that all of our judgments, including our moral judgments, can be based on the interests of our genes, and those interests do not necessarily have anything to do with our conscious experiences. Moral judgments are not necessarily reasoned according to an interest in conscious experiences.

            4) A careful look at the use of language shows that Harris is wrong to take moral “ought”-statements as synonymous with statements about the well-being of conscious creatures. Specifically, we can conjoin clauses about maximizing well-being, like this: “If we x, we will maximize well-being, and if we y instead of x, we will maximize well-being.” However, we cannot similarly conjoin moral “ought”-statements, like this: “We ought to x and we ought to y instead of x.” (See the sketch after this list.)

            5) My conclusion is that Harris has not posed a coherent challenge to the is/ought distinction and he has not provided a coherent argument against moral relativism. He has, at best, changed the subject.
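
            A rough way to formalize point 4, offered only as a sketch (the notation is mine, not Harris’s or the essay’s): let $W$ stand for “well-being is maximized,” let $O(p)$ stand for “we ought to bring about $p$,” and let $x$ and $y$ be mutually exclusive courses of action.

            $(x \to W) \land (y \to W)$ is perfectly satisfiable: both conditionals can be true at once.
            $O(x) \land O(y)$ is not, on the standard deontic assumptions that oughts agglomerate ($O(p) \land O(q) \to O(p \land q)$) and that nothing impossible is obligatory ($\lnot O(\bot)$).

            The two kinds of statement behave differently under conjunction, which is the asymmetry that point 4 trades on.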

          2. You have already linked to your essay and touted it several times, no doubt to draw more attention to it than it got on your blog. I said above that you’ve posted enough on this thread, and I meant it. I suggest you post on your own site and not here if you want to reprise your arguments.
