How should we be moral?: Three papers and a good book

March 11, 2013 • 5:52 am

Here’s a reading assignment for those of you interested in morality.  It consists of three papers, all of them free (download links at bottom), and a book. These papers, which form a natural unit, have had a strong impact on my thinking about not just morality, but theology as well.

All three papers are eminently accessible to the layperson: they are clear, very well written, and incisive.  The Greene paper is a bit long (and includes rebuttals after it), but all are essential reading for those pondering the current arguments about the nature of morality, where it comes from (both cognitively and evolutionarily), and whether morality can in any sense be objective.

I still think that there is no way that morality, or moral laws, can be “truths” in any scientific sense, for from the outset they all presuppose some system of value. But putting that aside, the references below will at least make you think about whether we should trust or follow our moral instincts.

One thing I’d like to say first is that many accommodationists, most notably National Institutes of Health director Francis Collins, have argued that the existence of a “moral law,” that is, the intuitive feelings we have about morality (such as those involving matters like “trolley and footbridge problems”), cannot be explained by evolution or social agreement, and thus must have been instilled in us by God.  I disagree, of course, and think with Greene and others that intuitive morality is most likely a product of our evolution in small social groups. That is, to a large degree morality comprises hardwired feelings and behaviors that evolved via kin or individual selection to enable individuals to thrive in small ancestral groups. If you want to get an idea of how our instinctive morality leads us to pass very different judgments about scenarios that don’t differ much, read about those trolley and footbridge dilemmas in Judith Jarvis Thomson’s nice book.

But on to the papers.

Here are their main points:

  • The Greene paper is mainly about the two great types of morality: deontology (à la Kant: “moral rules and rights” that should be followed even if their net effect on “well-being” is negative), and consequentialism (e.g., utilitarianism), in which something is “moral” if it has certain overall consequences for society. These usually include the maximization of things like well-being or happiness.
  • Greene makes the case that deontological feelings of morality embody our intuitive moral judgments, and are largely the product of evolution. The reason they are intuitive rather than reasoned is that we had to make such judgments quickly in the ancestral environment, and evolution would favor mental “rules of thumb”. We simply didn’t have time to weigh the consequences of our actions.
  • Both Greene and Singer make the point that the ancestral environment is no longer the environment in which we live, and hence our intuitive judgments about what’s moral may no longer be optimal. (I’ve recently made this point as well, not realizing—since I’m a philosophical beginner—that others had dealt with this in extenso.) One example cited by both Greene and Singer involves the trolley/footbridge problems.  We intuitively feel that switching a runaway trolley about to kill five people onto another track on which one person stands is morally fine: it saves five lives at the expense of one. But throwing a fat man standing beside you on a footbridge onto the track to stop the train, which achieves the same end (the premise is that you’re too thin to stop the train by jumping onto the tracks yourself), is instinctively seen as immoral. Yet the consequences are the same by any reasonable judgment. This is a difference between deontology and consequentialism.

Singer makes the point that while it may not be immoral to throw the fat guy onto the tracks, it may be unwise to publicize that act: there is a difference between acting morally and making that act public, for the latter may have consequences you don’t want. But why is there a difference between how we feel about the footbridge and trolley problems? Greene argues that our moral revulsion at deep-sixing the fat guy exists because our moral sentiments evolved when we were close up to others: we lived in small social groups. Trolleys didn’t exist on the savanna, and in such cases, where the recipients of our actions are remote, we don’t have an instinctive reaction.  And in cases where we don’t act or feel instinctively, we can ponder the consequences—and that’s consequentialism. (Both Greene and Singer are, of course, consequentialists.)

In today’s society, Greene, Singer, and Haidt feel that consequentialism is a better foundation for morality than is deontology, since the former involves reasoned rather than instinctive judgments. (None of these men, at least when they wrote their papers, argued that consequentialism is an objective system of morality—they simply claimed it has better social results.)

  • Haidt adduces a lot of evidence that, when making moral judgments, many people act deontologically. One sign is that they favor retributive punishment rather than punishment that deters others, rehabilitates the offender, or sequesters bad people from society. Greene, for instance, cites this study:

“In one study Baron and Ritov (1993) presented people with hypothetical corporate liability cases in which corporations could be required to pay fines. In one set of cases a corporation that manufactures vaccines is being sued because a child died as a result of taking one of its flu vaccines. Subjects were given multiple versions of this case. In one version, it was stipulated that a fine would have a positive deterrent effect. That is, a fine would make the company produce a safer vaccine. In a different version, it was stipulated that a fine would have a “perverse” effect. Instead of causing the firm to make a safer vaccine available, a fine would cause the company to stop making this kind of vaccine altogether, a bad result given that the vaccine in question does more good than harm and that no other firm is capable of making such a vaccine. Subjects indicated whether they thought a punitive fine was appropriate in either of these cases and whether the fine should differ between these two cases. A majority of subjects said that the fine should not differ at all.”

Retributive punishment is deontological, not consequentialist. Those who favor retribution don’t care about its consequences for society: they have an innate feeling that punishing someone who did wrong is a rule that should be obeyed—regardless of the social consequences.

Greene and Singer give other examples of things that have no inimical effect on society but are nevertheless rejected by intuition as immoral. Three examples are a man who masturbates with a grocery-store chicken before cooking and eating it, a woman who cleans her toilet with an American flag, and a man who reneges on a promise to his dying mother to visit her grave every week. Such judgments are instinctive—deontological and not consequentialist. They stem from an innate outrage that something is wrong.  Yet their consequences for society are nil.

  • Why do we make such moral judgments about situations that have no negative consequences, and which we’d probably retract were we to think about them?  All the authors think that instinctive judgments are largely a product of evolution.  But of course these judgments must then be justified. When pressed, people who think about the chicken-masturbation or grave-visitation scenarios think up reasons—often not convincing—why these behaviors are immoral.  All three authors suggest that such reasons are examples of confabulation: making up stuff post facto to rationalize your instinctive feelings. In this way, then, deontology can be seen as a poorly grounded form of morality—one that rests on instincts that evolved in situations that may no longer obtain. Far better, the authors agree, to be a consequentialist, for that involves some use of reason, reason that takes into account modern social conditions.
  • Haidt’s paper, with its cute title, is about how many of our judgments are driven by emotion rather than reason, and he gives many examples from his own work. The point of the title is that in cases of moral judgment we are often making the emotional dog (instinctive morality) wag the rational tail (our reasoned judgment), and that is not good.  Haidt has a nice analogy about confabulating reasons post facto for our instinctive judgments, and our inability to persuade people to abandon their confabulations:

“If moral reasoning is generally a post-hoc construction intended to justify automatic moral intuitions, then our moral life is plagued by two illusions. The first illusion can be called the “wag-the-dog” illusion: we believe that our own moral judgment (the dog) is driven by our own moral reasoning (the tail). The second illusion can be called the “wag-the-other-dog’s-tail” illusion: in a moral argument, we expect the successful rebuttal of an opponent’s arguments to change the opponent’s mind. Such a belief is like thinking that forcing a dog’s tail to wag by moving it with your hand should make the dog happy.”

I have to agree with these authors’ analyses: I, too, am a consequentialist—largely along the lines of Sam Harris, though Sam and I differ on whether such morality is objective (he thinks it is; I don’t). I feel it’s simply the best way to behave if we want a harmonious society, and I favor abandoning—I’ll find no seconders here!—the term “moral action” altogether.

As for theology, well, I doubt that any of us think that instinctive moral judgments are evidence for God. Much recent work of anthropologists and primatologists, most famously Frans de Waal, shows that the rudiments of human moral behavior can be seen in our close primate relatives.

But I also realized that theologians engage in the same kind of confabulation that Greene and others impute to moral deontologists. Theologians often begin with an ingrained religious belief—ingrained not by evolution but by their parents and peers.  They then engage in a kind of sophisticated confabulation—called theology—to justify their innate beliefs. That’s theological deontology, also called “apologetics.” The more theology I read, the more convinced I become that theologians are simply educated grown-ups engaged in rationalizing childish (or child-like) beliefs.

I really do recommend setting aside some time and reading the papers below, and at least the two “trolley problem” chapters in Thomson’s book. I hope I’ve represented them fairly. If you must read only one, it should be Greene’s, but they really form a triad that should be read together. Then read Thomson’s book to learn about the trolley problem and many other interesting moral issues. I guarantee that (a) you’ll enjoy them, and (b) they’ll make you think, even if you wind up rejecting their premises.

________________

Greene, J. D. 2007. The secret joke of Kant’s soul. pp. 35-79 in W. Sinnott-Armstrong, ed. Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. MIT Press, Cambridge, MA.

Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psych. Rev. 108:814-834.

Singer, P. 2005. Ethics and intuitions. J. Ethics 9:331-352. (Available via utilitarian.net.)

Thomson, J. J. 1986. Rights, Restitution, and Risk: Essays in Moral Theory. Harvard University Press, Cambridge, MA.

120 thoughts on “How should we be moral?: Three papers and a good book”

  1. Glad to see these recommendations; these papers are very important and worth reading.

    Just a few notes:

    1. I think Singer at least would accept that ethical facts are objective in the philosophical sense: invariant by observer and by observers’ attitudes and situations. If suffering is bad, it’s bad even if people think it’s good. (That doesn’t mean the beliefs are invariant, nor that the psychological sources are; just the facts themselves.)

    2. A good test case for consequentialism is whether you think we should “punish” innocent people when we can get away with it and we know it will be a successful deterrent. That’s a place where I, at least, part company with consequentialism. Another good test is the “minority” objection: it might create more overall happiness to, say, ban atheism, if most of a country were very sensitive theists, but again, I don’t think we should.

    3. Just because an attitude is instinctive doesn’t mean it’s deontological, even if, in practice, our instinctive particular-case judgments tend to be deontological. Presumably, the consequentialist still needs to rely on something like instinct when forming the judgment, e.g., that suffering is bad or that happiness is good.

    1. A quick complaint on 2, which is an analysis of a gut emotional reaction but then also a claim about “more overall happiness” given the ensconced characters of a given society. On the last, you have essentialized identities and a societal structure, and given the immediate “happiness” values of these people, you say you can imagine “that society” justifying the outlawing of atheism (we could also justify their roasting and cooking of first-born children under some specific societal system). But you are also imagining a people who have zero ability for reflective judgment about why their desires are the way they are, or about whether certain social institutions and social beliefs should change. Our ability to rethink our selves, to reflect on and change our baseline “happiness” values in the first place, has to be on the table (for me), and I think that complicates those analyses.

      If there were some isolated society or isolated world with, to us, a very strange set of social institutions and beliefs, it may be that, given their immediate desires and societal structures, outlawing atheism would make for the “best” future for those people (that statement may be invalidated by the type of brains/minds and societies that our genetic structures limit human behaviors to; e.g., there may be an insatiable appetite to continually question and reflect on how our world hangs together, and so the possibility of atheism will always creep back up). But there is reason to say that if such a society deduced what the outlawing of atheism would mean for their selves and society 100 and 1000 years from then, and saw that it fit with other immediate perceived “goods,” then such a law would be “good” for their society. That is all our morality can be. The intuition that punishing the innocent is absolutely unacceptable comes from a gut reaction, perhaps a very good reaction given our society and selves today (and probably going back to our ancestral social groups), but it is still a reaction that does not represent some morally unacceptable position that rests out in the world somewhere.

      1. Let me tie that together better: We could imagine different intelligent beings (with different evolved gut moral feelings) and vastly different societal systems where outlawing atheism and punishing the innocent may make for a more robust society and happier selves, and that society should not reject their analysis because Homo sapiens from planet earth insist that punishing the innocent violates some given moral order to the world, which does not exist. There may be some reason why, for any genetically (materially) based intelligent beings in any complex societal structure in which punishment helps maintain good social order (which in turn allows other desires to be fulfilled), punishing innocent people would undermine the capacity for punishment to work. But your reaction that such a possibility is unacceptable does not come from analyzing why the punishment of innocents will never make sense within any social order or any system of intelligent beings; instead it flows from immediate intuition, upon which you then claim that consequentialism is invalidated.

        I accept non-realism about morality, so I too reject the idea that consequentialism helps us arrive at the correct rules for society, but I do believe it is the best and only means of analysis we have to rely on. Reflecting on why we have the intuitions that we have, why deontology is so appealing for example, allows us to understand the structure of human beings and our social institutions and beliefs, but a consequentialist and counterfactual analysis of what the future holds is the only thing that seems to make sense when deciding what we as a society and selves should be working towards.

        1. Lyndon,

          Thanks for your reply.

          Indeed, the consequentialist can bite the bullet about “punishing” the innocent; they must affirm that:

          -if a society could get away with, say, a well-hidden program of randomly selecting one innocent person per year, framing them for a horrible crime, and publicly “punishing” them, the society should do it. And

          -if there’s a majority of 99% and a minority of 1%, and the majority hates the minority, and would gain lots of pleasure from denying that minority rights (more than the suffering the minority feels), that society should discriminate against the minority.

          But the more these examples are multiplied, the more we should start to ask whether the arguments for consequentialism are good enough to outweigh these counterintuitive consequences, especially when presumably, arguments for consequentialism ultimately rest on intuitions anyway.

          You mention what we as a society “should” be working on. Since you’re an anti-realist about ethics, what does that “should” mean? Or are you just saying, “… if there were objective ethical facts, then …” etc.? (Just to be clear: You don’t think we should, e.g., give women and racial minorities equal rights, correct?)

          1. It doesn’t mean anything. In the same way, there is not a correct answer to whether we “should” stay in bed all day or commit suicide or do something productive. I am happy to accept some kind of “core morality” or “will to live” emotional stimulating impulse, whether for any individual or for society as a whole. Having been pushed by such an impulse, I argue we “should” build the best societies and selves we can, but that is loosely speaking; I mean nothing normative by it, nor do I mean to hang anything on the word “should”.

            So, if you will, shall we try to come to grips with the best understanding of human beings and the discourses we have surrounding that understanding (our use of phrases such as “morality” and “should”) and now push forward to more robust societies? Given what my self is, I say that is the direction that will give my self and our selves the most pleasure, and it beats the other impulses, the ones that lead to the conclusions of shriveling up and dying or of nuclear extermination. But no, no “right” or “wrong” answer or behavior will be the result of any decision we take.

            On your last point, there is no “should” about institutional practices and social treatment of minorities. Having accepted the impulses above and choosing robust societies over nuclear extermination, our best calculation is that our society can be more robust when we see beyond things such as empty in-group-biases or might makes right impulses, and treat all parties more equally (yes, on some levels for many of us this equal treatment may satisfy emotions as well). But minorities no more “deserve” equal treatment than all men in Athens “deserved” equal treatment, despite the empty rhetoric of things like our Declaration of Independence. Given certain structures of our selves, including post-enlightenment values and core evolutionary emotional structures, we may have strong emotional impulses to see all beings treated as equal as can be, and given new media and reflective practices we understand and expand what that equality means. This both satisfies impulses of liberal ideals ensconced in us (liberals) and, hopefully, leads to better societies as all parties feel a more vested and coherent interest in that society. But we are not approaching a more “just” society, given what most people mean by that.

      2. Lyndon, lots of real human societies can and do ban atheism. So I’m not buying the idea that only some alien or very strange set of circumstances would ever lead to a ban being considered consequentially good by reasonably normal humans. You either have to defend that by going “no true Scotsman” and claiming the millions if not billions of humans living under those systems don’t count as normal, or you admit that the consequentialist calculus of normal humans does occasionally lead to the belief that minorities should give up some (religious) freedom because their not having it is seen as increasing the overall happiness of the collective society.

        1. I don’t know. That assumes that the effect of banning atheism helps contribute to the overall, or per capita, well being of the society. Perhaps in the short term it makes even a large percentage of the population feel righteous about their religion. But, I think it could be argued convincingly that the long term effects contribute to a type of society that is not conducive to providing for the well being of its population.

          I can’t think of a single society, extant or from the past, that banned atheism and would also generally be considered to have been better at providing a better environment for the well being of its population than modern societies do.

          Of course, all that depends on what well being means.

    2. Yes, that’s an important clarification you make in point 3. Deontological prescriptions might be reasoned.

      I think what seems unattractive about deontology has nothing to do with outdated instincts, but rather that it deals in inflexible prescriptions. Context doesn’t seem to bear on them. Black-and-white thinking like this can certainly lead to awful places.

      And as other commenters have already pointed out, even appeals to consequences must rest on assumptions.

      1. musical beef,

        Yes, that seems to be a big problem with “absolute” deontology, but I would guess that most deontologists today are willing to make exceptions for “moral catastrophe” cases. (I guess you might think they’re being inconsistent, but I would say that strictly speaking, as long as they still believe in constraints, they’re still deontologists.)

  2. http://www.huffingtonpost.com/mobileweb/sam-harris/a-response-to-critics_b_815742.html

    That’s a link to a long article by Sam Harris in which he responds to critics of “The Moral Landscape”. Scroll down to a section headed “The Value Problem” and you’ll find what I think is a fuller explication of Jerry’s objection to Harris’s claim of moral objectivity written by Russell Blackford, followed by Sam’s brief response, which can be boiled down to “It’s like anything else.” Yes, we ultimately must presuppose the value of well-being, but you also have to presuppose the value of health, the value of empiricism, the value of logic, in order to get anywhere in medicine, science generally, or even reason itself. I don’t understand why this is uniquely a problem for morality, and I’ve never seen anyone address Harris’s response.

    Jerry, you said “I feel it’s simply the best way to behave if we want a harmonious society”. Ok, regardless of how you feel about it, it either is or isn’t the best way to behave if we want a harmonious society. It may be that we just don’t know whether it’s the best way, and/or we just can’t prove it either way. And we could endlessly criticize and refine what we mean by “harmonious”, of course. Regardless of our inability to ascertain what the best way to behave is, or our inability to achieve consensus, there is a best way to behave (or perhaps multiple, equivalent bests). That’s all it means for morality to be objective: that there are better and worse ways to behave if we want a harmonious society.

    I haven’t read the papers yet, so forgive me if this was addressed, but one thing that the trolley problem often ignores is the psychological consequences to the actor. Whether or not it’s rational, if pulling a switch will make you feel like a hero and pushing a fat man will make you feel like a murderer, that’s a salient difference in consequences that any consequentialist analysis must consider.

    1. … there is a best way to behave …

      Only if you first define a way of evaluating “best”. And that definition will be entirely subjective.

      That’s all it means for morality to be objective: that there are better and worse ways to behave if we want a harmonious society.

      Everything depends on that “if”. You are right that once you define a value function, then one can make objective statements in relation to it. But there is no getting round the fact that the value function is entirely subjective — it is human opinion.

      Evolution has programmed us with morals as a social glue. And evolution has programmed us to think that our morals have objective validity, because they will be more effective if we think of them that way.

      Yet that feeling of an objective basis to our morality is a complete delusion; it is the biggest red herring in philosophy.

      1. But why is the value problem only a problem for morality? Why isn’t there a value problem in science, which presupposes that there is value to be had in crafting theories that can explain and predict observed phenomena? Why isn’t there a value problem in reason, which presupposes the desirability of deducing true conclusions from true premises?

        1. Why isn’t there a value problem in science, which presupposes that there is value to be had in crafting theories that can explain and predict observed phenomena?

          Science is just a tool. *IF* you want theories that can explain and predict observed phenomena then science is the tool to use. But science does not presuppose that we do want that, whether we want that is up to us.

          Why isn’t there a value problem in reason, which presupposes the desirability of deducing true conclusions from true premises?

          Ditto. Whether we want true conclusions is up to us, it is not a supposition of reason.

          1. I’m just not seeing the difference. Consider two brief exchanges:

            A1: Authoritarian governance is immoral because it can be shown that it is not conducive to human flourishing.

            B1: If you presuppose that morality is concerned with human flourishing, yes, but that’s just your opinion. You can’t say it’s objectively immoral.

            A2: That argument is invalid because it relies upon the fallacy “post hoc ergo propter hoc”, so we cannot say that the conclusion is true.

            B2: If you presuppose that reason is concerned with deriving true conclusions from true premises, yes, but that’s just your opinion. You can’t say it’s objectively invalid.

            Most people think that B1 is making a very powerful point, but no one would take B2 seriously. I can’t see any reason to treat them differently from one another.

          2. Objective: Science/reason produces results that are true (as best we can discern).

            Subjective value: We should pursue results that are true.

            Objective: Action X makes people unhappy and society disharmonious.

            Subjective value: We should pursue making people happy and society harmonious.

            Science tells you the two objective statements. Science presupposes neither of the two subjective values. If you want to prescribe morals, however, you do need to adopt subjective values.

          3. You seem to be making my point. If you want to pursue science, you must accept the subjective value of empiricism, among other things. This does not render science subjective. Science remains entirely objective (meaning that propositions of science can be true or false, and are made so by facts about the world) despite the fact that it requires us to accept the value of empiricism, which is a subjective judgment. Likewise, morality is objective (meaning that moral propositions can be true or false, and are made so by facts about the world), despite the fact that it requires us to accept the value of well-being, which is a subjective judgment.

          4. If you want to pursue science, you must accept the subjective value of empiricism …

            Science is objective; our desire to pursue science is subjective.

            The difference between scientific claims and moral claims is this: truth is objective, values are not.

    2. “Yes, we ultimately must presuppose the value of well-being, but you also have to presuppose the value of health, the value of empiricism, the value of logic, in order to get anywhere in medicine, science generally, or even reason itself. I don’t understand why this is uniquely a problem for morality, and I’ve never seen anyone address Harris’s response.”

      Me neither, although I’ve been looking. It’s frustrating.

    3. Best line in Harris’s piece: “Admittedly, there is something arresting about being called a scientific fraud and ‘egotistical’ by Chopra. This is rather like being branded an exhibitionist by Lady Gaga.”

    4. ventzone,

      I think you’re just ignoring the is-ought distinction. It may be (objectively) true that an act accomplishes or serves a goal (“human flourishing” or whatever it may be). But that doesn’t mean the goal ought to be accomplished. There’s an organization called the Voluntary Human Extinction Movement, whose goal is the extinction of the human species. That certainly doesn’t seem consistent with “human flourishing.” But why does that mean it’s (objectively) immoral?

      1. Yes, I am ignoring the is-ought distinction. I’m also pointing out that everyone ignores the is-ought distinction except w/r/t morality.

        I’m not a scientist, so please keep that in mind if the example I use is particularly ludicrous.

        In what sense did Einstein improve upon the work of Newton in physics? As I understand it (which is “minimally”), Newton’s Laws of Motion are basically correct for most practical purposes, but Einstein was able to make refinements to account for complications Newton was unaware of. That’s basically a rough approximation of loosely accurate, I hope.

        So why ought I to care about Einstein’s refinements to Newton’s Laws? Newton’s Laws are elegant, simple, easy to learn, and easy to remember. Einstein is complicated and weird and so much harder to wrap my head around. Who needs that headache? I’ll stick with Newton.

        And that’s my prerogative, but I’m not doing physics anymore. I can’t stick with an obsolete theory just because I prefer it. It has been superseded by a better theory, and I’m compelled to accept that better theory on the basis of the evidence (or else to produce evidence which might refute it). Why? Because one theory of motion is better than another to the extent that it has better explanatory and predictive power. But that’s only true if I accept the assumption that theories of physics ought to have explanatory and predictive power w/r/t observations. But physics can’t justify that assumption. Either you accept it, or you’re not talking about physics.

        Same thing with morality. Morality can’t justify the assumption that it is concerned with the experiences of conscious beings, but if you don’t accept that assumption, you’re not talking about morality.

      2. I’m also pointing out that everyone ignores the is-ought distinction except w/r/t morality.

        I don’t know what you think it means to talk about “oughts” except w/r/t morality. What do you think morality is, if not a matter of what one ought to do?

        Morality can’t justify the assumption that it is concerned with the experiences of conscious beings, but if you don’t accept that assumption, you’re not talking about morality.

        Then morality is by definition subjective and there is no such thing as (objective) moral facts. There are facts about means and ends (in order to accomplish goal X, take action Y) but those are logical or empirical facts, not moral ones.

        1. Why ought I to accept Einstein’s refinements of Newton’s Laws? Because they are better able to explain and predict the movement of objects in the world. But I can’t derive an ought from an is, which means that knowing Einstein is better than Newton (in the sense I described) can’t be sufficient to justify replacing Newton with Einstein. It’s just a matter of opinion that theories in physics are supposed to explain and predict observable reality. Just because Einstein’s theory is better than Newton’s doesn’t mean that we ought to prefer it, because you can’t derive an ought from an is.

          I’m playing a word game, but that’s all this ought/is palaver is anyway. It’s just a word game. It’s true that you can’t derive an ought from an is, but derivation is not the only tool in our toolbox. You can’t derive an ought from an is, but you can infer an ought from an is just by making a simple, uncontroversial assumption. Assuming that we’re interested in theories that can better explain and predict observable phenomena, we ought to modify Newton’s Laws in light of Einstein, but physics cannot justify that assumption. Assuming we’re interested in preventing preventable deaths, doctors ought to encourage exercise and a good diet, but medicine can’t justify that assumption. Assuming we’re interested in maximizing the well-being of conscious creatures, we ought to establish and protect legal equality regardless of gender, race, religion, creed, nationality, sexual orientation, gender identity, and so forth, but morality can’t justify that assumption.

          These examples all look equivalent to me. I don’t understand why people seem to only consider one of them to be an argument against the objectivity of the whole endeavor.

          1. Why ought I to accept Einstein’s refinements of Newton’s Laws?

            There is no objective reason whatsoever why you “ought” to accept Einstein’s revisions of Newton.

            It is, however, objectively true that Einstein’s version is a more accurate model of reality.

            Truth is objective; morals and “oughts” are not, they are subjective. Thus “X is true” is objective. “Y is moral” is subjective.

          2. The reason one ought to accept Einstein is that Einstein more accurately models reality, and physics presupposes the value of accurately modeling reality. This presupposition is not objectively justified, and cannot be derived from any set of objective facts. This does not render physics itself subjective.

            In precisely the same way, morality presupposes the value of conscious well-being. This presupposition is not objectively justified and cannot be derived from any set of objective facts. This does not render morality itself subjective.

            Incidentally, I’m trying to challenge the basis for what I see as an illusory distinction. Simply restating that distinction isn’t helpful. Moral facts are just a particular kind of fact. Moral truth is a subset of truth.

          3. The distinction is real.

            Physics, for instance, discovers facts about the universe that retain their truth even in the absence of sentient agents with desires and goals. These facts simply are.

            How could “moral truths” exist in a universe populated only with rocks and gas?

          4. … physics presupposes the value of accurately modeling reality.

            No, physics does not presuppose any *value* in modeling reality. All physics does is model reality.

            Physics is just a tool, like a hammer. If you want a nail driven in then a hammer is useful. But the *hammer* does not value the nail being driven.

            Objective: “Hammers are good for driving in nails”.

            Subjective: “It is good that the nail is driven in”.

        2. Why ought I to accept Einstein’s refinements of Newton’s Laws?

          I don’t think you “ought” to. It’s not immoral to reject Einstein. It’s just irrational to do so if accepting Einsteinian physics would serve your goals. Can’t you see the distinction?

          1. Of course, Gary, but that distinction is not at issue. I’m not saying that being wrong in physics is immoral, but I am saying that being wrong in physics is just like being wrong in morality. In physics, what makes you wrong is the objective reality of objects in motion. If I claim that acceleration due to gravity is 10m/s/s, I’m objectively wrong, and what makes me objectively wrong is the fact that objects do not fall at 10m/s/s.

            In morality, what makes you wrong is the objective reality of the well-being of conscious creatures. If I claim that slavery is morally good, I’m objectively wrong, and I’m wrong because slavery is inimical to the well-being of conscious creatures.

            In both cases, the claim of objectivity rests on an unsupportable assumption. In physics, the assumption is that we’re concerned about how objects in the world actually move. In morality, the assumption is that we’re concerned with how conscious creatures actually thrive or fail to.

            So the distinction we really need to isolate, if it exists, is the distinction that says the assumption in physics does not render physics subjective, but the assumption in morality does render morality subjective. What is that distinction?

          2. The Einstein analogy doesn’t get your point across very well, imo. It’s helpful to take a more common class of ‘ought’ statements, e.g.:

            P1) My car is low on gas
            C1) Therefore I ought to fill it up with gas.

            This is the sort of reasoning Hume felt was unjustified… there is a missing premise.

            P1) My car is low on gas
            P2) I want my car to be in working order (axiom)
            C1) Therefore I ought to fill it up with gas.

            So we DO get an ought from an is all the time, we just have to be clear about all the premises. The tricky part is the claim that there are hypothetical imperatives (like the one above) and categorical imperatives (like morals). I agree that this distinction is extremely suspect, and probably an unwelcome carry-over from religious thinking.

            So for example

            P1) Murder causes harm
            P2) I do not wish to cause harm (axiom)
            C1) Therefore, I ought not murder

            Seems fine to me. It’s a simple model of morality, and one can certainly question whether that axiom is sufficient, but there is nothing any more subjective about such a moral axiom than the axiom of car maintenance from the previous example. Most of the atheist objections to this sort of moral reasoning appear to be based on unjustified restrictions: that moral rules must be inherently binding on all people, or must fall in line with our moral intuitions, etc.

            Another good example would be an axiom of game theory: agents seek to maximise their payoffs. No-one objects to the idea of objective game-theoretic truths simply because one might not WANT to maximise one’s payoff.
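
            The game-theory point can be made concrete with a toy calculation. This is only an illustrative sketch (the actions and payoff numbers are invented): given the axiom “agents seek to maximise their payoffs,” which action is best becomes a matter of arithmetic, whether or not any particular agent wants to maximise anything.

```python
# Toy illustration: under the axiom "agents maximise payoffs",
# the best response is an objective arithmetic fact.
# The actions and payoff numbers below are invented for illustration.

payoffs = {"cooperate": 3, "defect": 5}  # my payoff for each available action

# The payoff-maximising action, given the axiom.
best_response = max(payoffs, key=payoffs.get)

print(best_response)  # prints "defect": the payoff-maximising action here
```

            One can decline the axiom, of course, but then one is no longer doing game theory – exactly parallel to declining the background assumptions of physics or morality discussed above.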

            Unfortunately, I think that “morality” like “free will” or the “self” are so associated with terrible religious thinking that many of us atheists simply can’t see a way clear to talking about such concepts in ways that let us make useful models of human behaviour.

          3. Thank you, PascalsGhost. The only point I want to emphasize is that these hidden axioms, so to speak, are lurking just about everywhere.

          4. Kinda messed part of that up. Meant to say:

            “[…]there is nothing any more subjective about the moral ‘ought’ derived from that axiom, than the ‘ought’ of putting gas in my car in the previous example.”

          5. So we DO get an ought from an is all the time, we just have to be clear about all the premises.

            No we don’t. The “ought” in your car-and-gas example isn’t a moral “ought.” It’s just a colloquial way of expressing the fact that in order to put your car in working order you need to fill it with gas. Failing to fill it with gas may be irrational or counterproductive to your goals, but it’s not morally wrong. You don’t have a moral obligation to put gas in your car.

          6. Ugh. I have already said in my post that I haven’t heard a good justification for people separating “moral oughts” from “other oughts”, so simply asserting it without justification is not very compelling.

          9. So for example
            P1) Murder causes harm
            P2) I do not wish to cause harm (axiom)
            C1) Therefore, I ought not murder
            Seems fine to me. It’s a simple model of morality, and one can certainly question whether that axiom is sufficient, but there is nothing any more subjective about such a moral axiom than the axiom of car maintenance from the previous example.

            It’s obviously a false model of morality, since we do not consider the morality of murder to depend on whether the murderer wishes to cause harm.

          8. “we do not consider the morality of murder to depend on whether the murderer wishes to cause harm”

            I didn’t say that we do?

          9. Of course, Gary, but that distinction is not at issue.

            If you see the distinction between reason and morality you should be able to understand that “Einstein is more accurate than Newton” does not mean “one ought to accept Einstein.” You can’t get an “ought” from an “is.”

            In morality, what makes you wrong is the objective reality of the well-being of conscious creatures. If I claim that slavery is morally good, I’m objectively wrong, and I’m wrong because slavery is inimical to the well-being of conscious creatures.

            No, the claim that slavery is morally good is not objectively wrong. There is no objective moral wrong. The belief that one ought not to act in ways that are inimical to the well-being of conscious creatures is a subjective preference, not an objective fact. You can’t demonstrate the immorality of slavery any more than someone else could demonstrate the immorality of abortion or gay sex.

  3. It is interesting that none of those last three examples are illegal. So it appears that we have, in some respects, moved socially away from enforcing our own deontological moral feelings. Maybe not as far as consequentialists would like, but still, the trend is there.

    Though with the last example (person promises dying mother), I think it’s perfectly reasonable to conclude that such a person may be untrustworthy. They just intentionally lied to an important person in their life because they expect to get away with it and don’t want to upset her. A bystander could reasonably expect them to behave in a similar manner in similar circumstances in the future – i.e., lie outright to me if they think they can get away with it and if telling the truth is socially awkward. So some of the gut-level reaction to seeing that lie as wrong may be perfectly reasonably based on what it says about the liar to us, the living, even if this lie has no impact on the dead.

    1. Yes, it might show a “bad” (for our purposes) disposition of character. Once you throw in virtue-ethics concepts like good and bad dispositions of character, you get another level of complication. But maybe also a richer and more plausible theory.

    2. The problem with these moral hypotheticals is that they are so extremely divorced from reality. The dying mother example simply shows that the “liar” was dealing expeditiously with a raging narcissist.

  4. Three examples are a man who masturbates with a grocery-store chicken before cooking and eating it

    I’m intrigued, but I need a bit more information on this before I can decide if it is moral or immoral…

    1. Yet their consequences for society are nil.

      Not entirely. He could start an epidemic of antibiotic resistant salmonella.

  5. We intuitively feel that switching a runaway trolley about to kill five people onto another track on which one person stands is morally fine

    I’m surprised to hear this is the normal reaction to this thought experiment. I’ve always taken the position that one should do no harm, and switching the track makes you the agent of one’s death whereas doing nothing makes one an outside observer to a tragedy.

    in such cases, where the recipients of our actions are remote, we don’t have an instinctive reaction.

    This makes sense in light of the lack of guilt felt by snipers or those who engage in aerial bombardment. It’s much easier to snuff someone out from 20,000 feet than to club them upside the head with a baseball bat.

    1. Yes, it does seem well established that this is the most common intuitive reaction to the thought experiment. When I was teaching these trolley cases to first-year university students, that was always borne out, but there are much more rigorous studies of it than my personal experience. It also seems to be a cross-cultural thing.

    2. “I’ve always taken the position that one should do no harm, and switching the track makes you the agent of one’s death whereas doing nothing makes one an outside observer to a tragedy.”

      It is just as easy to reason that your response is immoral as it is any other response, if one decided they wanted to. Doing nothing does not make you just an outside observer. You have made a decision to do nothing in a situation where some action by you may save lives, because you want to avoid feeling, or being perceived by others as being, immoral. There is no way to avoid these scenarios. They are specifically designed so that no matter what you do, you are screwed.

      These types of scenarios may be interesting for studying human moral judgement in a general sense, but there are just too many possible variables, many of which cannot even be identified, let alone constrained or accounted for. You would have to pick apart each individual’s brain to have a chance of understanding why they responded the way they did, and of evaluating whether or not their response is useful data for studying moral problems – or whether or not you understand the data well enough to form usefully accurate conclusions regarding moral problems.

      1. Which is why morality is subjective. There is no way you will ever convince me that engaging in an action that directly causes the death of someone else, even if it was done to save the lives of more people, is a moral or right action. There is no way to quantify the value of a human life. A person’s life is of infinite worth to him.

        Everyone does what seems best to them, and people almost always put their own interests above those of others. We have laws to regulate behaviors, but what is legal is not what is right, because there is no “right”; there are merely preferences.

        Dostoevsky argued that without God, all things are permissible. He’s right. All things are permissible, but not all things are permitted.

        1. There is no way you will ever convince me that engaging in an action that directly causes the death of someone else, even if it was done to save the lives of more people, is a moral or right action.

          So you believe that killing someone even in self-defense is immoral? That seems to be a very uncommon belief.

          There is no way to quantify the value of a human life. A person’s life is of infinite worth to him.

          I’m not sure how you think society could function without quantifying the value of a human life, e.g. how much money to spend fighting a deadly disease, or how much money to award in a wrongful death lawsuit.

          1. So you believe that killing someone even in self-defense is immoral? That seems to be a very uncommon belief.

            Yes, I believe it is immoral to kill someone in self-defense; however, that does not mean I wouldn’t do it. Self-preservation and the preservation of the ones I love is paramount to me, but I don’t believe the taking of the life of someone else to protect them or myself is the moral action; it’s the popular action.

            Let’s change the thought experiment a bit. Let’s say your foot was caught in the tracks but you could reach the switch. Would you redirect the trolley to save yourself at the expense of five others? I would say that most people would answer that they would sacrifice themselves, but if literally placed in that situation they would act to save themselves. Would this be an immoral act? I would say so, but my life is of infinite value to me. The lives of those five others are not.

            Now, when analyzing the worth of five lives against one, I have the added problem of having no way of discerning who is worth (my preference, of course) saving in that situation. Is it five Nazis versus a scientist who may one day discover the cure for cancer? There simply isn’t enough information to make an informed decision. You can’t simply say that 5 lives >> 1 life. In that case, I would choose to do nothing. If I knew it was a choice between Charles Manson and five pregnant women, I would have enough information to act in the expected manner. Without that, I have no reason to act, and in fact have a compelling reason not to act, in that I would be causing death by my direct action.

            To me, causing a negative outcome is morally worse than allowing a negative outcome to occur by inaction. Edmund Burke may disagree with me. Tough for him.

          2. I’m not sure how you think society could function without quantifying the value of a human life, e.g. how much money to spend fighting a deadly disease, or how much money to award in a wrongful death lawsuit.

            These decisions are usually based upon who has more money, more powerful lobbyists, or who has the inertia of public opinion behind them and not on what is the correct cause or amount of money to award to someone. There is no amount of money you can award to a five-year old that would compensate for the death of her father by the drunk driver that killed him.

          3. I don’t believe the taking of the life of someone else to protect them or myself is the moral action; it’s the popular action.

            It’s not merely acting in self-defense that’s popular, but the belief that self-defense is moral. I don’t think most people say “self-defense is wrong, but I would do it anyway.” They believe it’s the right thing to do.

            Now, when analyzing the worth of five lives to one, I have the added problem of having no way of discerning who is worth (my preference, of course) saving in that situation. Is it five Nazis versus a scientist who may one day discover the cure for cancer? There simply isn’t enough information to make an infomred decision.

            As the trolley scenario is usually presented, there is no additional information about the people in jeopardy. The choice is simply between saving one person and saving five. So there is no basis for assuming that the life of the one is more valuable than the lives of the five.

          4. These decisions are usually based upon who has more money, more powerful lobbyists, or who has the inertia of public opinion behind them and not on what is the correct cause or amount of money to award to someone.

            I don’t know what “the correct cause” or “the correct amount of money” is supposed to mean. You said “there is no way to quantify the value of a human life.” I’m saying that, as a matter of necessity, we can and do quantify the value of human lives through law and public policy.

            There is no amount of money you can award to a five-year old that would compensate for the death of her father by the drunk driver that killed him.

            I doubt that’s always true. What if she hated her father (maybe he was sexually abusing her)? In any case, just because a family member, or anyone else, disagrees with the valuation doesn’t mean we can’t put a value on human life.

        2. Yes. That was one of my points. My other was that your inaction in the example scenario does not absolve you from any moral responsibility for the outcome. In the scenario you used as an example your decision to not act directly causes one or more deaths. That particular scenario is intentionally designed so that you can not avoid being directly responsible for the death of someone else.

          If a stranger were about to be hit by a city bus, and you saw the situation developing, had time to make a decision, and were capable of pulling them out of the path of the bus, would deciding not to act be morally acceptable to you? Probably not, and I understand that killing is the showstopper for you. And I think that is not a bad thing. But, deciding not to act when it is possible for you to do so does not remove you from moral responsibility as you seemed to suggest above.

          I am not trying to suggest that your response to the scenario is immoral; there really aren’t any good outcomes. In any case, any person put in a similar situation in real life will be seriously disturbed.

          1. In the scenario you used as an example your decision to not act directly causes one or more deaths.

            I disagree. Had I not been at the switch, the five would have died anyway. My inaction is the cause of nothing. Simply because I choose not to intervene in the natural course of events does not mean I am the cause of those events. Yes, I could have changed them by acting, but at the present moment there was not enough information for me to intervene and thus kill someone by my action.

            Hey, I admit it’s a personal preference. And you’re correct that these thought experiments are no-win scenarios. It is my belief though (and that is all any of us have) that acting to kill someone is far worse than not acting to intervene in the natural course of events and allow those events to cause the death of someone else. More information changes the scenario of course, but given allowing 5 Sims to die versus killing 1 Sim, I’d sleep better if the five died.

          2. “Had I not been at the switch, the five would have died anyway.”

            Sure. But you were at the switch. Just as the 5 people were on the track and the train was moving toward them.

        3. It’s more like: “If God exists, then all things are permitted”, since theists show a remarkable ability to rationalize all manner of bad acts.

    3. I’m surprised to hear this is the normal reaction to this thought experiment. I’ve always taken the position that one should do no harm, and switching the track makes you the agent of one’s death whereas doing nothing makes one an outside observer to a tragedy.

      I’m surprised that you’re surprised. Most people do not seem to adhere to an inviolable “do no harm” rule, but instead believe that doing harm to prevent greater harm is sometimes justified. Sacrificing one life to save five seems like a clear example of that.

  6. Jerry, you’re not the only one who’d favour abandoning language such as “immoral action” – e.g. Joel Marks would certainly agree with you. I’ll be discussing his new book, in which he comes out as a moral error theorist and a moral abolitionist, in the next issue of Free Inquiry if anyone’s interested. Richard Garner would also agree with you.

    I’d probably want to avoid the term “immoral action” myself, though I note that it’s difficult to avoid such words as “good”, “bad”, “better”, and so on. We’re probably going to end up being stuck with at least some evaluative terms, along with arguments about how objective our evaluations are when we use them.

  7. The main difference between the original trolley problem and the fat man version is that the trolley with 5 people versus 1 person on the track seems reasonably sensible.

    However, in the fat man version, how do you know that you are too light to stop the trolley but that the fat man is just heavy enough to stop it? How on earth can you possibly have worked that out?

    I suspect that when people say they wouldn’t push the fat man onto the tracks it may well be simply because they realise that they would not in fact know this for sure, and no-one would push a fat man onto the track just on the off-chance that it might stop the trolley.

    I’m sure this point has been made many times before, perhaps in these papers, but it has always bugged me that this always gets glossed over.

    1. Yep, for a true comparison, you’d have to explicitly posit that switching the trolley onto the other track has a small but still significant chance of causing it to jump the tracks and kill 6 people instead of 5. Say that and it gut-feels very similar to the fat man case.

      1. But I think, for the purposes of these thought-experiments, those variables are supposed to be eliminated. You know the fat man will stop the trolley, and you know the train will behave as described.

        The point is to elicit our reactions to the scenarios, given those assurances.

        1. I can’t speak for TJR or eric of course, but speaking for myself, I don’t think it matters. I think it is too much to ask. If you ask people to consider such assurances in a scenario that is so far removed from reality, you are just asking for skewed data. If people followed instructions like computers, okay. But they don’t, not consciously or subconsciously.

          1. Exactly, the question is trying to elicit the person’s “intuitive” moral verdict, but in a scenario so unrealistic that it will distort any results you get.

          2. Darrelle, even if people followed instructions like a computer, the results would still be skewed. If you provide a computer false data (a fat man will stop the trolley) your results are going to be worthless.

          3. I would tend to agree. You have to explicitly lay out all the rules of the example, else people are going to make normal-life assumptions to fill in the gaps. A normal-life assumption in this case is likely to be: human bodies rarely stop trains dead in their tracks, but switching tracks pretty much always works to change their direction.

            Even when you lay them out, there are likely to be hidden biases in your subjects’ decision-making. Intellectually they may accept that the odds of failure are both equal and zero, but it still might not feel that way to them. When you ask about their gut feeling, they aren’t really giving you their feeling about that scenario, because they can’t internalize it. Those problems you can’t really get away from, but not laying out the rules of the scenario practically guarantees this sort of bias.

  8. I’d probably lean more towards pushing the fat (or a very tall) man onto the trolley tracks – unless he was a ‘Militant’ Atheist. That’s presuming The Five were worth the effort.

  9. I’m not sure that consequentialist approaches are themselves fully free of deontological underpinnings. At the very least we have to determine what kinds of entities have “consequences” worth considering. Singer’s work in this area is especially notable, since unlike most consequentialists he argues that moral consideration has to be extended to animals.

    At some point all ethical systems have to define what “count” as participants in the ethical system. And that can’t be done solely on the basis of consequences.

    1. I’m in a similar camp.

      Maybe it’s my math background, and maybe I’m a weirdo (…OK, that’s not a maybe), but personally I find the deontology-vs-consequentialism debate the single stupidest thing in modern philosophy, since they’re isomorphic ways of talking about the same underlying poset ordering-of-choices. Any form of consequentialism corresponds to a deontology in which the rule “take the choice giving the best outcome” is followed; any form of deontology corresponds to counting as best the choice leading to the consequence where “The Rules were followed”. It’s not even a debate about semantics, but about arbitrary bookkeeping on the semantics.

      As a question of psychology, deontology versus consequentialism is a little more interesting. Which is used can say something about how humans go about such reasoning in practice — I suspect because of time-costs of analysis for decisions. (“Thou shalt not murder” may take less time than reasoning out why it would be a bad idea to strangle the really annoying person, much as in computer science lookup tables are sometimes faster for practical applications than fancier algorithms, though requiring more memory.) I suspect in practice, humans use both.

      But philosophically, it’s two sides of the same damn coin.
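
      The isomorphism claim can be sketched in a few lines of Python – a toy model with invented choices and utilities, not a serious metaethical argument: a consequentialist utility ordering and the deontological rule “take the choice whose outcome ranks highest” pick out the same action.

```python
# Toy sketch of the claimed isomorphism between consequentialism and
# deontology. The choices and utilities are invented for illustration.

choices = {
    "divert_trolley": {"deaths": 1},
    "do_nothing": {"deaths": 5},
}

def utility(outcome):
    """Consequentialist side: rank outcomes (fewer deaths = better)."""
    return -outcome["deaths"]

# Consequentialism: pick the choice with the best outcome.
consequentialist_pick = max(choices, key=lambda c: utility(choices[c]))

# Deontology: follow the single rule "an act is permitted iff no
# alternative has a strictly better outcome".
def rule_permits(choice):
    return all(utility(choices[choice]) >= utility(o) for o in choices.values())

deontological_pick = next(c for c in choices if rule_permits(c))

assert consequentialist_pick == deontological_pick  # same ordering, two bookkeepings
```

      The rule here is of course rigged to reproduce the utility ordering; that is the point of the isomorphism claim – either framework can be re-expressed as bookkeeping on the same ordering of choices.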

  10. If anyone has read it, Peter Singer in the conclusion of his book the Expanding Circle (an updated version) explains the emotional basis of morality following Greene and others, but then gives the thumbs up to moral realism in support of Derek Parfit.

    Anyways, I still find Greene’s defense of moral antirealism to be the most palatable, and feel that it gives the most coherent and comprehensive account of what human morality consists of (or of what humans have practiced as “morality”), including a rejection of the existence of moral facts.

  11. The trolley question bothers me because it is less a moral question than a lack of understanding of how a trolley operates. Between the dead man switch and remote operation of turnouts and rail or overhead power, the only way for this scenario to happen is to sabotage the trolley and if you’ve done that, then how many people are killed is meaningless.
    The point of the exercise seems to be that you have to kill someone but you’re not given complete information of the situation or realistic options. If I cannot ask questions and gather more information on the scenario, you’ve already directed my response. Without information to analyze, am I really making an informed decision, much less a moral judgment?
    In either case I’ve rationalized my decision, but being a morally correct decision depends on how you define moral. If morality simply means what you believe to be right, then you are moral no matter what you decide. If morality is decided by society, then your decision may be influenced by possible repercussions of your action or inaction.

    1. “The trolley question bothers me because it is less a moral question than a lack of understanding of how a trolley operates.”

      I can’t tell if this is a joke or not.

      1. I wasn’t joking. When posing a scenario where you’re given insufficient information and you have to suspend reality, you aren’t going to get valid data. The question is designed to force you into making a choice and rationalizing why you chose to kill someone. If you can make up your own rules of reality, like a fat man stopping a trolley, then I can make up my own rules and say that all six people will magically fly away before the trolley arrives.
        If you cannot posit a plausible real-life scenario, providing as much information as possible and without limiting the responses, then the question is pointless.

  12. I think these two choice situations (trolley-and-switch versus fat guy) are not equivalent, because various factors make them differ. In the switch case, influencing the movement of the trolley is almost a certainty, and there is not much personal risk involved beyond a bit of exercise. In the second case, when we need to push the fat guy from the bridge, there is always the possibility that it will end in a fight, unless he is conveniently tied up. Secondly, there is a good chance that his body will be bumped aside by the trolley without stopping it entirely, and I think we humans have quite good intuitive judgement about this. So the two cases are not equal in terms of the probability that we will get the same outcome.

    1. Yeah. I think this is the major problem with the trolley problems: they don’t factor in the level of uncertainty people intuitively assign to each course of action.

      1. You could come up with a better version. Say the train’s main brakes have failed, so it’s out of control. However, it also has a failsafe set of backup brakes that will be automatically applied if the driver dies. Do you shoot the (innocent) driver?

  13. …though we differ in whether we think (as does Sam) that such morality is objective

    I wonder, what is your definition of “objective”?

    This is more or less the same point that ventzone makes above, or rather it touches on the same point. The health analogy of course doesn’t originate with Harris but rather with John Stuart Mill, but as far as I am concerned it indeed remains unanswered. Just because nobody can force me to (profess to) value bodily integrity that doesn’t mean that eating iron filings isn’t objectively unhealthy. Similarly, there is no way to force the Taliban to (profess to) value well-being, which also doesn’t mean that throwing acid in someone’s face isn’t immoral, i.e. shouldn’t be done.

  14. More thought on punishing the innocent. Let us say that humans have a strong emotional reaction against punishing innocent people. Let us also say that punishing innocent people is universally impracticable: it will never lead to better societies or successful punishment. Let us say (surely wrongly) that evolutionary structures have been robust, continuously molded as humans tried out a great many different kinds of social structures, and that in societies which impracticably started punishing innocent people, those with strong emotions against such impracticable policies were selected for, in our genes and eventually in our moral concepts.

    The confusion, I believe, comes because we are incapable of decoupling two things: A) that punishing innocent people is wrong (it isn’t; there is no such thing as “wrong”); B) that punishing innocent people is never practicable. Even if B is true, and even if our strong emotional reactions and intuitions have seized upon this logical relationship (that punishing innocent people will never create better societies), I believe we need to accept that that is all that our emotions and intuitions point towards. Our emotions, reactions, and social discourse about “morality” encourage us to believe that “punishing innocent people is wrong” instead of understanding and believing that “punishing innocent people is never societally practicable.” The structure of the first statement is further rooted in emotion; we are encouraged, because of the kind of beings we are, to frame the issue in this way, even though it hides the best analysis of the problem before us.

    1. I believe we need to accept that that is all that our emotions and intuitions point towards.

      But it isn’t. Earlier human societies have practiced human sacrifice of innocent children. And as I pointed out in response to your earlier message, many societies today limit freedom of religion, which is a type of punishment of the innocent.

      This is not a universal, and so it’s a legitimate criticism of the consequentialist approach. We have to worry about minority rights under consequentialism because we know that normal humans in many times and places have, indeed, decided that the best thing for the group as a whole is to punish some minority group for just being a minority group.

      1. Hi Eric,

        For fun, I would first argue that these societies that are outlawing atheism, even if they are adhering to consequentialism, are misinterpreting the “happiness” of their society writ large and thus their calculus is way off. But that’s all empty conjecture.

        I am saying it is conceivable that such a law could be “good” for some kind of society of intelligent beings. Though as an aside, I also claimed that given Homo sapiens’ genetic structure of brain/mind, punishing the innocent or outlawing atheism may never be a possible calculus for what is in the interest of that society (as its current members define such).

        As I stated in the first posts, I am an antirealist about morality. This means I think it only makes sense to talk about the consequences of a law or policy; but that does not mean I am a consequentialist, in the sense of thinking that such a law (one which maximizes happiness, say) is moral or “good.” Such a law or policy produces such-and-such consequences within a society for those selves, and I believe there are no other factors to be considered as we negotiate what policies to institute. For example, we shouldn’t create a law against murder because “murdering is wrong”; rather, murder will be problematic for our present goals given the consequences it produces. The intuitions, platitudes, and unquestionable rules we use to justify any such laws will not be the best means for judging whether such a law or policy is a practice we wish to institute; except in the bare sense that a full analysis of consequences is impossible, so we will have to rely on generalized intuitions, hopefully within a scope of continuous, wide reflection. But despite the latter caveat, judging the possible consequences of current policies is the only relevant procedure for deciding what we want our future (and present) society to look like, even if impracticable.

  15. The fatman is a case of coercing someone into doing something that should be *their* choice to do, whereas the trolley switch is a simple numbers choice and involves no coercion.

    Similarly, if a doctor could save 4 people using the organs of one healthy person, then that would also be wrong, since we shouldn’t take decisions for our peers.

    Surely peer coercion is where this moral intuition comes from, and it is, I think, a very valuable one, however it evolved, not least because the motives behind these kinds of decisions are highly likely to be nepotistic.

    1. The fatman is a case of coercing someone into doing something that should be *their* choice to do, whereas the trolley switch is a simple numbers choice and involves no coercion.

      But the fat man is not being coerced to act. He’s simply being thrown on to the tracks to block the trolley. That’s not an act on his part, any more than being tied to the tracks was an act by the other people in jeopardy.

      1. You’re missing the point – I didn’t say he was being coerced “to act” in a particular way. I said that he, as an independent sentient agent, was being included in an action of your devising, but without his consent as a peer. That’s the distinction: in the case of the trolley switch you are on your own in the decision you make, since the scenario is set up as a binary choice. If, however, it was your girlfriend/friend on the track, then your moral responsibilities would be different, since they would depend on loyalty.

        1. You wrote “coercing someone into doing something.” But the fat man is not coerced into doing something. He is simply thrown on to the tracks. Something is done to him by someone else.

          You now say the distinction you meant is that he is “an independent sentient agent [who] was being included in an action of your devising, but without his consent as a peer.” But in the first version of the dilemma, where you throw the switch to divert the trolley so that it kills only one person instead of five, the one who is killed doesn’t consent to your action, either. There is no difference with respect to consent.

          1. The point is that the one person and the five people are in roughly the same predicament, whilst the fatman on the bridge isn’t, which is why his consent matters. That’s clearer in the surgeon/organs example, for instance.

            The decision becomes harder and harder to make the further you differentiate the predicament of the one person from that of the five. For instance, what if the lever started in the middle, and if you left it there everyone would die? Then it would be a no-brainer to choose the five. But if flicking the switch launched a missile that shot down an airplane flying overhead, that would be a harder moral decision to make.

          2. The point is that the one person and the five people are in roughly the same predicament, whilst the fatman on the bridge isn’t, which is why his consent matters.

            Why does that mean his consent matters, but not the consent of the people who are tied to the tracks?

          3. There is no absolute right or wrong – these thought experiments just act to show up the inbuilt moral disposition that is shared by most human beings.

            This particular intuition seems like a good one to me, since in real situations there is rarely 100% certainty of outcomes and one’s own decisions are not necessarily preferable to those of someone else.

            The same sort of intuition is probably also at work when people are outraged at the fate of “innocents” in war.

            And if *you* want to push the fat man off the bridge would you also kill 1 healthy individual in order to use his organs to save a number of your patients? I expect most people would find the idea pretty horrifying and for much the same reasons.

          4. You didn’t answer the question.

            Case 1: You throw a switch, diverting a trolley on to a collision course with a man tied to the tracks.

            Case 2: You push a fat man on to the tracks in the path of the trolley.

            In both cases, your action causes the man to die, but saves the lives of five other men. You said you think the consent of the victim matters in Case 1 but not Case 2. Why?

          5. Correction: You said you think the consent of the victim matters in Case 2 but not Case 1. Why?

          6. As I’ve already explained, it’s because in case 2 you are involving someone outside of the problem. What’s so hard to understand about that?

            Remember that I’m not saying that this intuition is “right” in some absolute sense, because there is no right or wrong, but thinking makes it so. However, I reckon that the feeling that we don’t have the right to dispose of third parties as we wish is one that benefits a cooperative society.

            And it does seem from your other answer that you personally would also be reluctant to involve innocent parties in solutions to problems.

            Just to try and finally nail this down – you can of course construct all sorts of morally ambiguous scenarios, but (at least as I am hypothesizing) the principal intuition has to do with the involvement of innocent parties. We feel that when the fatman is on the track he is part of the problem, but when he is an innocent passer-by on the bridge, he is not.

          7. As I’ve already explained, it’s because in case 2 you are involving someone outside of the problem.

            The fat man isn’t “outside of the problem” any more than you are. He’s standing there right next to you.

          8. You are outside of the problem too – since you aren’t on the track. That’s the same difference as the transplant surgeon and his patients. And these dilemmas are not about right and wrong, but what people instinctively believe to be right or wrong.

          9. And if *you* want to push the fat man off the bridge would you also kill 1 healthy individual in order to use his organs to save a number of your patients?

            I don’t “want” to push the fat man off the bridge. I would not kill (or endorse killing) one healthy individual to harvest his organs to save other people.

          10. How about this one: you are a surgeon, with 5 people dying in room A and 1 person dying in room B. You can only enter one room… Of course, everything else being equal, you treat the 5 people in room A. You don’t have to ask the person in room B for permission. This moral intuition comes from the same place as the track-switch one does (or at least so I am claiming).

    2. There are many versions of the trolley problem. One of them has a switch that loops the train around before it hits the 5 people, and on that loop sits a similarly weight-challenged person whom we omnisciently know will stop the train. We can then ask whether you would pull the switch, “using” the fat man to stop the train; and, subsequently, whether you would push the fat man, thus “using” his weight to stop the train. I do not see a relevant difference between those scenarios as far as the fat man’s ability to decide for himself is concerned.

      And the data still seems to be, I believe, that people will pull the lever but not push him. As others have stated, though, there may be all sorts of problems in the analysis of why people answer the way they do.

      1. My point is that the question has to do with peer relationships. You can’t push the guy off the bridge, since both you and he are agents on the same level (you are in a peer relationship with him). That’s the same reason your doctor can’t capture you and use all your organs to save his patients. You can only sacrifice the fat guy when he is in the same relationship to you as the other guys on the track. If it was your girlfriend on the track and the other guys were strangers, you also wouldn’t flick the switch.

        1. I do not understand the “peer relationship” worry at all.

          So, day 1: My large friend is standing perfectly on the bridge, and in a split instant (without any warning) I can push him, stop the train, and save 5 people.

          Day 2: My large friend just happens to be standing on a track, and in a split instant (without any warning) I can throw a nearby switch, have the train loop around and hit him and save 5 people.

          What is the relevant difference as regards the action I am going to do to my large friend?

          1. Would you really sacrifice a friend for 5 unknown people?

            And, the more the predicaments of the two sides differ the harder the decision becomes to make. I’m sure that we could find situations where one wouldn’t prefer one choice over the other.

  16. An argument could be made that allowing the trolley to kill the five people is the utilitarian one. After all you have a group of six people who are stupid enough to camp out on a trolley track. Taking out five instead of one probably improves the human gene pool. 🙂

  17. In today’s society, Greene, Singer, and Haidt feel that consequentialism is a better foundation for morality than is deontology, since the former involves reasoned rather than instinctive judgments.

    I think it’s worth mentioning the distinction between the moral value of an act, and the decision procedure you use to choose which act to perform. I think the consequentialists you cited would agree that they don’t actually want everyone performing act-consequentialism directly (trying to perform mathematical evaluation of consequences) — counter-intuitively, it reliably leads to worse consequentialist outcomes than following a set of (consequentially-derived, say) rules.

    This blurs the line between consequentialism and deontology, I think, so that it is not so sharp. Deontology is following rules rather than directly maximizing consequences, and so is consequentialism, when done properly. The difference is just where the rules came from.

    Brad Hooker thinks (in “Ideal Code, Real World”) that this is the best formulation of a rule-consequentialism:

    “An act is wrong if and only if it is forbidden by the code of rules whose internalization by the overwhelming majority of everyone everywhere in each new generation has maximum expected value in terms of well-being (with some priority for the worst off).”

  18. Strictly speaking, deontological morality is also consequentialist. That is, if you count as a consequence the emotional states that retributive punishment induces in the victims or the victims’ relatives.

    What we’re really saying when distinguishing between deontological and consequentialist views is that your feelings don’t matter.

  19. The Greene paper seems to explain the responses to the following moral dilemma I thought of and have since been wrestling with:

    Suppose there are two individuals. One has the mental ability of a 6 month old baby, is not self aware, and is not an important part of their community. The second individual has the mental ability of a 10 year old, is self aware, and is an important member of their community.

    Now suppose you were told you had to choose which one would be killed. If you could not choose, then you would be killed.

    Some people I asked could not choose, because either result (the 1st or the 2nd individual being killed) is equally unfavorable. I personally would choose to have the 1st killed (the one with the mind of a 6-month-old baby), because it would have less of an impact on the community and because I value sentient life.

    However, suppose now you were told the 1st individual is a human baby and the 2nd individual is a matriarch elephant.

    Now the people I talked to who could not choose immediately chose to have the elephant killed. I, on the other hand, cannot decide which individual I would choose, because I would be hesitant to have a human baby killed.

    Now it seems this dilemma pits emotional and “cognitive” processes against each other. It also seems that those who could not answer in the first case and who chose the elephant in the second have a deontological moral view, whereas I have more of a consequentialist view (although I am not completely able to override my emotional processes, as evidenced by my response to the second case).

    The important question to ask is if human life is always more important than non-human life or if sentient life is always more important than non-sentient life.

    1. “Now the people I talked to who could not choose immediately chose to have the elephant killed.”

      This is impossible, they would have been killed already during the first test. 🙂

  20. A recent discussion on bloggingheads.tv about psychology experiments on young babies (less than a year old) suggests that some of our moral intuitions are very dark. In one experiment, babies were shown puppets exhibiting a preference for one kind of food over another. Not surprisingly, perhaps, the babies preferred puppets that shared their own food preference over puppets with a different preference. More disturbingly, though, the babies preferred puppets that attacked the puppets with a different food preference over puppets that were neutral or altruistic toward the puppets with a different food preference. For babies, apparently, merely preferring different foods makes you worthy of being attacked!

  21. I think our gut reactions to the various forms of the trolley paradox are not so much about deontology versus consequentialism as about what powers we are comfortable giving others. While I may feel comfortable giving a switchman the power to switch a trolley from killing five to killing one, I am less comfortable giving a doctor the power to kill one person in order to harvest organs sufficient to save five. The latter power is more threatening to me than the former, because I would fear for my life every time I walked past a hospital. So the two problems are not really the same, and I think that is true for all trolley paradoxes. IMO, all morality should be consequentialist.

  22. The trolley and fatman dilemmas have been around for a long time. The problem is that both scenarios are hypothetical, and whatever decision one makes is also hypothetical. Whether you deliberately kill one person to save five, or do nothing and spare the one while five die, you are still making an “immoral” decision either way. You can rationalize either decision by claiming that one is more moral than the other for whatever reason.

    Better, I think, to look at a real life (historical) decision and analyze that to see if a moral conclusion can be drawn. Harry Truman had to decide whether or not to drop atomic bombs on Japan. Do you think he thought about that as a “moral” decision (better to kill a “few” Japanese or allow the war to drag on and suffer many more deaths over the long run)?

    I suspect he wasn’t analyzing the problem from a moral perspective at all. I think there was only one thing on his mind. “How can I (Truman) end this war in the fastest possible way?” He left moralizing up to others.

  23. This was most auspiciously timed, not only because I need good references on morality, but because I just read an article that puts this in a larger context.

    That article also contrasted consequences with rights, or more exactly with human rights and freedoms. But it did so with a decidedly “trolley and footbridge” angle: the rights and freedoms constituted by a free market at one extreme vs. looking at social consequences and instituting a planned economy at the other.

    Here we can see that maximizing consequentialism is not favorable (being told what to and what not to buy). But neither is total freedom, because of unfortunate consequences such as suffering or crime.

    Thus political and legal systems cannot merely protect rights and freedoms thoughtlessly, “instinctively”; they also have the task of looking at consequences.

    In fact, I (likely naively) think you should be able to map the protection of rights and freedoms onto the policing of consequences as regards jurisdiction. This is also the one area where I think philosophy has a useful task; here it seems consequentialism (utilitarianism) could be a nice fit.

    It also seems to me that in the larger social context neither extreme is healthy or “moral.” There should therefore be a balance. And a priori I think human rights and freedoms would have to take the larger portion in order to maximize well-being, or, perhaps more correctly measured on that scale, functional societies.

    1. Here we can see that maximizing consequentialism is not favorable (being told what to and what not to buy).

      This is probably a misunderstanding of consequentialism, if I understand you correctly.

      Consequentialism doesn’t just look at the immediate effects of the decisions it prescribes; it looks at all the later effects, and (in the case of utilitarianism) really is trying to create the happiest society possible. If freedoms are important to happiness — more important than the gain in happiness we’d get from being told what to do all the time — it will say that we ought to give those freedoms.

      In short, I think that “human rights” and giving everyone plenty of freedom to choose how to live their lives, as long as they aren’t harming others, are both supported (and even demanded) by utilitarianism. There isn’t a conflict to be resolved there.

  24. There seems to be a lot of huffing and puffing about the trolley problem (including on that tomkow web site)! It seems to be fairly simple to resolve and Julian Baggini does it in his book “The Pig that wants to be eaten”.

    Initially it’s a question of saving/killing 1 or 5 people. So you save 5. You’re not actively killing someone by throwing the switch – someone was going to die anyway – you’re actually *saving* people.

    When it’s the fat man vs 5 people, then this is a qualitatively different scenario, because the fat man isn’t in danger in the first place. So you’re actively killing him by throwing him on the tracks.

    The end results are the same in terms of numbers but how you get there matters. Now perhaps there are no absolutes, but in my book you’d need to be saving a lot more than 5 people to justify killing an innocent bystander.

    1. I think it’s important to appreciate, though, that we are not talking about moral absolutes, but about common moral intuitions that only really apply between individuals in the same social group. That’s very clear when you appreciate that the moral strictures in the Old Testament only apply within the author’s tribe. Anyone else is fair game for rape and pillage.

      In modern society, one of the main goals is to break down these tribal distinctions so that we apply our moral intuitions to everyone equally. That this isn’t exactly natural in a state of nature appears obvious from the difficulties in getting people to accept others as deserving the consideration they would naturally get if they were more closely related.

    2. You’re not actively killing someone by throwing the switch – someone was going to die anyway – you’re actually *saving* people.

      By throwing the switch, you’re causing someone to die who would otherwise have lived. I’m not sure how you think this does not qualify as “actively killing someone.” The fact that your action saves five other people does not alter the fact that it kills someone.

      When it’s the fat man vs 5 people, then this is a qualitatively different scenario, because the fat man isn’t in danger in the first place.

      But that’s not a difference. The man tied to the track in the original scenario isn’t in danger in the first place, either, because the trolley is following a different track. What puts him in danger is your act of throwing the switch to divert the trolley on to his track.

  25. I think that the existence of things such as the ‘trolley problem’ is a nail in the coffin for those who claim morality is god-given.

    The fact that these problems exist at all – they do not have a clear-cut ‘right/wrong’ outcome, and if you sampled 100 people you would not get a 100/0 split for one answer – should indicate to proponents of ‘divine morality’ that somewhere, God is not being very clear in instilling this morality.

    Why would God instill the desire to kill, say, one fat man in 20 out of 100 people, yet make the other 80 choose to let 5 others die instead? It makes no sense.

    Somehow, however, the godheads don’t seem to see it that way. I wonder why.
