What does it mean to say that there’s an “objective morality”? The Stanford Encyclopedia of Philosophy calls this view “moral realism” and characterizes it like this:
Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).
This is the stand taken by Sam Harris in his book The Moral Landscape, and it’s a view with which I disagree. Although some philosophers agree with Sam that morality is “factual” in this way—and by that I don’t mean that the existence of a moral code is a fact about society but that you can find objective ways to determine if a view is right or wrong—I can’t for the life of me see how one can determine objectively whether statements like “abortions of normal fetuses are wrong” are true or false. In the end, like many others, I see morality as a matter of preference. What is moral is what you would like to see considered good behavior, but as different people differ on what is right and wrong, I see no way to adjudicate statements like the one about abortion.
I’ve said all this before, but it came to mind last night when I was reading Anthony Grayling’s comprehensive book The History of Philosophy. (By the way, that book has convinced me that there is virtually no issue in philosophy that ever gets widespread agreement from nearly all respectable philosophers, so in that way philosophy differs from science. That is not to say that philosophy is without value, but that its value lies in teaching us how to think rigorously and to parse arguments, not to unearth truths about the cosmos.)
It’s clear that empirical observation can inform moral statements. If you think that it’s okay to kick a dog because it doesn’t mind it, well, just try kicking a dog. But in the end, saying whether it’s right or wrong to do things depends on one’s preferences. True, most people agree on their preferences, and their concept of morality by and large agrees with Sam’s consequentialist view that the “right” thing to do is what maximizes “well being”. But that is only one criterion for “rightness”, and others, like deontologists such as Kant, don’t agree with that utilitarian concept. And of course people disagree violently about things like abortion—and many other moral issues.
One problem with Sam’s theory, or any utilitarian theory of morality, is how to judge “well being”. There are different forms of well being, even in a given moral situation, and how do you weigh them against one another? There is no common currency of well being, though we know that some things, like torturing or killing someone without reason, clearly do not increase the well being of either that person or of society. Yet there is no objective way to weigh one form of well being against another. Abortion is one such situation: one weighs the well being of the fetus, which will develop into a sentient human, against that of the mother, who presumably doesn’t want to have the baby.
But to me, the real killer of objective morality is the issue of animal rights—an issue that I don’t see as resolvable, at least in a utilitarian way. Is it moral to do experiments on primates to test human vaccines and drugs? If so, how many monkeys can you put in captivity and torture before it becomes wrong? Is it wrong to keep lab animals captive just to answer a scientific question with no conceivable bearing on human welfare, but simply out of curiosity? Is it moral to eat meat? Answering questions about animal rights involves, if you’re a Harris-ian utilitarian, being able to assess the well being of animals, something that seems impossible. We do not know what it is like to be a bat. We have no idea whether any creatures value their own lives, or which creatures feel pain (some surely do).
But in the end, trying to find a truly factual answer to the statement, “Is it immoral for humans to eat meat?” or “Is abortion wrong?” or “Is capital punishment wrong?” seems a futile effort. You can say that eating meat contributes to deforestation and global warming, and that’s true, but that doesn’t answer the question, for you have to then decide whether those effects are “immoral”. Even deciding whether to be a “well being” utilitarian is a choice. You might instead be a deontologist, adhering to a rule-based and not consequence-based morality.
You can make a rule that “anybody eating meat is acting immorally,” but on what do you base that statement? If you respond that “animals feel pain and it’s wrong to kill them,” someone might respond that “yes, but I get a lot of pleasure from eating meat.” How can you objectively weigh these positions? You can say that culinary enjoyment is a lower goal than animal welfare, but again, that’s a subjective judgment.
By saying I don’t accept the idea of moral claims representing “facts”, I’m not trying to promote nihilism. We need a moral code if for no other reason than to act as a form of social glue and as a social contract. Without it, society would degenerate into a lawless and criminal enterprise—indeed, the idea of crime and punishment would vanish. All I’m arguing is that such claims rest at bottom on preference alone. It’s generally a good thing that evolution has bequeathed to most of us a similar set of moral preferences. I hasten to add, though, that the feelings evolution has instilled in us aren’t necessarily ones we should incorporate into morality, as some of them (widespread xenophobia, for instance) are outmoded in modern society. Others, like caring for one’s children, are good things to do.
In the end, I agree with Hume that there’s no way to derive an “ought” from an “is”. “Oughts” have their own sources, while “is”s may represent in part our evolved behaviors, derived from living in small groups of hunter-gatherers. But that doesn’t make them evolutionary “oughts.”
I’m not a philosopher—and I’m sure it shows!—and I know there are famous philosophers, like Derek Parfit, who are moral realists, but my attempt to read the late Parfit’s dense, two-volume treatise On What Matters, said to contain his defense of moral realism, was defeated.
77 thoughts on “The absence of objective morality”
I do not know if “objective morality” is yet to be discovered, but it appears the old challenge “it depends” makes a big difference.
-Eating meat in a famine-struck region
-Abortion of a known genetically determined misery
-Other, even terrible stuff
-Even more terrible stuff vis à vis Hitler or Goebbels
The worst stuff: it simply depends on the conditions.
… maybe the last one is going too far but clearly it changes the reasoning.
Now I will actually read – ^^^^ that was just to sub.
You could qualify all of these. First, I did say “of a normal fetus”, so I took care of that. And you could change the meat question to “is it immoral to eat meat if you’re a middle-class American who can afford it?”
Please read Rattling the Cage: Toward Legal Rights for Animals by Steven Wise, published 2000, with a foreword by Jane Goodall. It doesn’t solve the morality issue but provides some ideas with which to think about it. Many of us animal guardians, or animal rightists, believe that while animals are not humans, they are more than things, which is what they legally are now, and that they suffer in that inadequate category. Atty. Wise has been on the front lines of that struggle for years.
In the book recommended under 7, Hal Herzog recounts the situation of mice in a lab.
The ‘good’ mice, the ones inside their cages, have all kinds of rights (imposed by law) for humane treatment and the like. To the ‘bad’ mice, the ones coming in from outside, the wild or feral ones, no such restrictions apply; anything goes, from traditional traps (often only maiming) to the horrendous glue plates. If a ‘good’ mouse escapes its cage, it automatically turns into a ‘bad’ mouse and loses all its protections against inhumane treatment.
Exactly, Nicolaas, which is why I find movements to reduce medical experiments on rodents (ethical ones with oversight and proper licensing) supremely hypocritical. Ideally we would reduce the number of experiments conducted on mice and rats in the pursuit of improving human health to near zero, except that there aren’t yet good alternatives that would allow this. I’m generally aligned with PETA. But to spare the lives of thousands of rodents to develop better drugs for humans while simultaneously poisoning and exterminating billions of them annually on farms and in homes for human convenience is pure cognitive dissonance.
While it smacks of speciesism (which bears some analogy to racism), surely there is some relative scale on which we value animal life that is not crazy ethically. We value animals more or less based on intelligence, size, longevity and rate of reproduction, natural abundance, whether they are disease vectors for us, and whether they are pests in our homes or our activities (e.g., farming food). Surely few people value an aphid, or even a branch full of them, anywhere close to an orca. Objective? No. Subjectively justifiable? I would say yes.
Since we are talking of men and mice:
“we need a moral code”
I think it’s enough that evolution has given us a moral core, conveniently written down in the Universal Declaration of Human Rights. By trial and error we have found a political system that does a good job of resolving moral conflicts through compromise (social-liberal democracy). I don’t believe we are missing anything; the only danger, it seems to me, is letting morally extreme people (on both sides of the spectrum) into power.
I haven’t met anybody who is against treating animals more humanely. It just needs some time.
I’m a moral nihilist; this doesn’t mean I have a reason to misbehave. Moral conflicts cannot be resolved with convincing arguments, but only through compromise, or by making problems go away by developing new technologies. This works the same for moral nihilists and moral realists.
Have them spit in your face and you won’t doubt their existence.
Because the position of moral realism is that there are some moral claims that are objectively true, it seems futile to come up with counterexamples. Would they not be able to defend their position by saying that this is just not an example of an objective moral principle?
Well, I haven’t heard one moral claim that is said to be “objectively true”.
But adjudicate them we must, at least so long as some insist that the state insert itself in decisions between pregnant women and their physicians (particularly by inserting itself through the enactment of criminal laws). And, for so long as that is the case, we must make do by striving for the most enlightened decisions as informed by the best available medical science.
I think he means adjudicate by an appeal to objective reality, i.e. by detecting some moral property of things or actions like we do with momentum or charge.
You’re right that we can adjudicate subjective qualities. We do this all the time. The only thing required to do that is some agreed-upon set of criteria for what counts as that subjective thing and a decision-making heuristic for applying them. Whether it’s Olympic ice skating scores or determination of a crime or attaching the label “immoral” to some act, you compare the event to your criteria and use your heuristic to say whether it matches sufficiently or not.
A moral subjectivist might say that we can still strive for a common and consistent morality, by us humans hashing out what would be the best set of criteria and decision-making method and applying it to all of humanity. This elevates morality from “mere personal preference” to something more akin to a universal set of rules or guidelines. However, I suspect that moral realists think there is more to morality than this; that this sort of strong-but-subjective morality is insufficient or inelegant or simply not the way the world actually is.
Yeah, I understood Jerry’s point, but was playing off the ambiguity of “adjudicate.” Go ahead, feel free to accuse me of the equivocation fallacy if you wanna; I got it comin’.
No accusation here! I thought you made a good point.
And it’s relevant because while objectivists (and some subjectivists) might refer to subjective morality as ‘personal preference’ (the “mere” implied), it can be much more sophisticated and consistent than that.
I haven’t done any reading on moral realism but what you say here makes sense to me. As I see it, ultimately everyone decides for themselves what is moral and what isn’t and that makes it a preference. Where we differ is in the amount of objectivity we let into our morality-determining thought processes — yet another preference. As usual, I look at these philosophical issues and find the philosophers making a mountain out of a molehill.
The Golden Rule, whether positive (“Do unto others….”), negative (“Do not do unto others….”) or even the responsive (“What you wish unto others…”), is what comes closest to an ‘objective’ morality, IMO. But of course it is not really objective.
When we better understand the origins and evolution of morality, which develops in highly social animals (that much, at least, we know for certain), we may get an idea of what morality is, and of what is universal and what is not. [I do not specifically mean moralisticity (especially surrounding sex), which increases with society size, as established by Murdock.] I think indeed that morality cannot be objective, but I’m sure there are some universals or near-universals.
In connection with our frankly schizophrenic relationship with animals, I recommend “Some We Love, Some We Hate, Some We Eat: Why It’s So Hard to Think Straight About Animals” by Hal Herzog. Note, it gives no answers, but it will challenge your ideas and is an ‘easy read’. (No, it didn’t turn me into a vegetarian.)
It’s certainly the closest we have to a universal — existing across cultures, and exhibited in rudimentary form by small children even before they develop language skills and by some of our fellow great apes.
One might add the ‘Capitalist Imperative:’ DO UNDER OTHERS!
I agree with Coynian morality. I’ve never understood Sam Harris’s viewpoint.
I suspect, just my possibly mistaken take, that Harris means there are moral universals, and calls that ‘objective’. I’m far from sure though.
In a nutshell, as it were, Harris argues that you need to make only one subjective premise to get an objective morality going, that premise being that suffering is bad. He argues that, starting from that point, the methods of science can, in principle, be used to find the best solution for any given moral issue. He acknowledges that in reality it may not be possible to have all the data necessary to correctly solve a given issue (too many variables to realistically deal with, for one), hence he stresses “in principle”. But this is no different from many fields of study in the sciences.
This is my understanding of his views, based on his arguments with other philosophers and scientists at one of the Beyond Belief conferences. I’m a bit unclear about whether or not he unambiguously acknowledges that his starting premise is subjective. He seemed to acknowledge it during that conference, but he also presents arguments that seem as if he is trying to convince us that even his starting premise is objective. He basically argues that if you consider the greatest possible suffering, that must be bad if anything can be considered bad; therefore taking suffering to be bad as a starting premise is straightforwardly logical / rational.
In general, and assuming my understanding of his views is reasonably accurate, I agree with Sam. I’m not as confident as he is that we need only one subjective premise, but I do think that, starting with a small number of fairly simple subjective values, the methods of science are from there the best tools we have to determine the best answer to a given moral quandary. By best answer I suppose I really mean best methods, just as science is the best method to determine facts about our reality. Science doesn’t have an answer for every question, for a variety of reasons, and often the answers come with less certainty than we would like, and I wouldn’t expect any science of morality to be any different in that respect.
I think that philosophy has long jealously guarded morality as being its own turf, and that that is wrong. The degree of denial still rampant among philosophers that science has a big role to play in answering moral questions is disappointing.
The Edge of Reason is an excellent book discussing the lack of objective morality.
I completely agree with your arguments.
Only at the point where you say that “what feelings evolution has instilled in us aren’t necessarily ones we should incorporate into morality” would I disagree: in my view, there is no morality that is not based on feelings, that is not derived from feelings. What are feelings but pre-verbal judgements about good or bad? Basically, feelings are nothing other than value judgements, and thus the preverbal basis of what we call morality.
The Furies of dense reading obstinate opponents make. 🙂
“You can make a rule that “anybody stealing Jerry’s collection of cowboy boots is acting immorally,” but on what do you base that statement? If you respond that “Jerry feels fond of his cowboy boot collection and it’s wrong to steal it,” someone might respond that “yes, but I get a lot of pleasure from wearing Jerry’s stolen cowboy boots.” How can you objectively weigh these positions?”
The problem with moral relativism is that, in the absence of moral realism, all you can do is shrug your shoulders and say, “Well, Stalin had his moral standards and I have mine; they just happen to be different; who am I to judge?” Is that really what Jerry and other commenters here believe?
The problem, which you seem to have stepped in with both feet, is that moral relativism/realism is not a binary choice. Even if we pretend for the sake of argument that it is a one-dimensional value, the two extremes are ridiculous positions that virtually no one takes seriously. It makes sense to consider both objective reality and one’s preferences when considering moral choices.
“… to have stepped in with both feet…”
As long as those feet are wearing cowboy boots, right? 🙂
I’m not sure I understand you. You say “moral relativism/realism is not a binary choice.” Isn’t it? Either morality is a matter of subjective opinion, like preferring chocolate ice cream over vanilla (= moral relativism), or there really are such things as moral standards (= moral realism). What other options are there? Maybe you and I have different definitions of moral relativism and realism?
Couldn’t one have a pure preference for, say, vanilla over chocolate but still believe that some moral standards are objective? I will admit to being confused over the controversy.
When two or more people share a social goal, they can compare and contrast various moral rules and systems and may be able to conclude, based on reason and evidence, that system A is better for achieving the goal than system B. In that sense, morality is not mere preference; we can have real and substantive arguments over it, and even decide that system A is better, in a measurable sense, than system B.
But you have to share that social goal; that reason for creating a moral or ethical system in the first place. And you probably have to be able to define it fairly precisely if you want to get to quantitative measures of which moral or ethical rules gets you to your goal the best. If you’re in a situation where *I* want the social goals of life, liberty, pursuit of happiness, and all citizens treated equally, while *Stalin* wants the social goals of squashing dissent and industrializing at all costs, then you’re right, this is such a big difference in fundamental premises on which morality is to be built, that there may be no rational way for the two of us to get beyond ‘you have your morality, I have mine.’
Indeed. But there is a goal that is baked into moral systems wherever humans form them: they are supposed to allow people to justify actions to each other. That won’t make morality appealing to psychopaths, of course. But those who *are* interested in justifying actions to each other, and in acting in ways that can be so justified, have to take account of how others will be affected by various actions/habits/policies etc. And they will get to demand to be taken account of by those others. So we try to come up with fair ways, like democracy, to decide on policies. And we criticize existing norms when they treat us unfairly (or anyone we care about, which could include anyone whomsoever).
There is no guarantee that this will come to a stable or gradually progressing equilibrium without breaking down entirely. But the alternative is pretty terrible to contemplate. When the terms of coexistence can’t be resolved by reasoning together, they tend to get resolved by force.
I don’t know about others but I’m pretty sure you’ve misinterpreted Jerry, though he tried to prevent this sort of misunderstanding with qualifying statements. Jerry has said nothing about the answers to moral questions, he has only commented about the methods used to answer moral questions. I strongly suspect that if Jerry were asked whether or not stealing his boot collection were morally okay or whether or not Stalin’s moral standards were good that he would reach the same answers as you, no and no, and I also bet he would say that society should hold that these things are morally bad too. But he wouldn’t claim that his answer is objectively true. He’d say that it is a subjective view.
I would like to add that even Harris’ concept of starting by avoiding “the worst possible misery for everyone” seems to fail to establish anything because that can be attained simply by extinguishing all life. Now, I don’t remember if Sam lays an argument somewhere as to why existence is better than non-existence, but to me this too seems to be merely a matter of preference.
As I recall, there was also the whole thing of favouring the well being of “more conscious” creatures which seems awfully problematic.
Moral anti-realism may seem less intuitive with our hard-wired moral biases, but the existence of objective moral truths is to me just absurd when you scrutinize the idea closely. You always have to start with unprovable premises. But I suppose that is true in a whole lot of philosophy.
He was taking that as the starting point, i.e., the nadir of well-being. He was not saying that this was the solitary measure of morality, but that it was the worst case, morally speaking/well-being wise, and that if you think avoiding that is worth doing, then you’re starting from the same place he is.
Dr. Coyne, if you aren’t a moral realist, then on what grounds did you condemn the woke and the right-wingers? Were those condemnations solely on the ground that they harm the type of society that you prefer to exist?
Like everyone, I base my judgements on what I prefer, which is usually some version of consequentialism. But you are arguing that I’m making a judgement that my own opinion is a moral fact. I do not claim that. Do you have to be a moral realist to adhere to your own morality? I don’t think so.
But wouldn’t this mean that the people that you condemn can just claim that their actions are based on the moral system that they themselves prefer, and so your criticism of their actions as morally wrong (according to your preferred system) aren’t any more valid than their judgment that their actions (such as suppressing free speech or criminalizing homosexual acts) are morally right?
I wonder whether consequentialists, deontologists etc agree with each other on more moral questions than they disagree on.
As a free will skeptic, the concept of morality, objective or otherwise, makes no sense to me. (Other than to manipulate people.) Something that seems moral is simply an action that comports with a societal or an individual desire.
So, Dr Coyne can “condemn the woke” simply because they do not meet his desires for what he sees as a better society. @Amateur philosopher.
As a fellow free will skeptic, I have no objection to someone behaving “morally” or “immorally” based on my own or society’s moral code. I do object to someone being said to have moral RESPONSIBILITY.
I have no objection to thinking of my kitchen chair being a “fetching pillar box red”. Of course that does not make the chair “red”. A whole bunch of physics happens, and then some chemical processing in the retinas, optic nerves and brain.
Of course we think of the chair as red by convention. Similarly we can look at people’s actions as moral or immoral, again by convention. Of course it does not make those actions moral or immoral.
Not sure what I can add, I don’t find the concept of morality as particularly useful, philosophically speaking.
I don’t think Sam Harris’s position is as difficult to maintain as people seem to think. He doesn’t claim that we KNOW the answers to all questions of morality, or even what the right questions might be, merely that ultimately, moral decisions and morality are based upon our natures as conscious beings (and those of other conscious beings). So, when (IF, really) we understand consciousness completely, we should, in principle, be able to determine various local maxima and minima on Harris’s moral landscape and chart our courses toward and away from places we prefer and those we prefer to avoid, because some will tend to improve “well-being” for us and for others, and others will tend to diminish it. This may never be a simple matter in practice, any more than it is possible readily to solve the Schrödinger equation for more than very simple combinations of quantum entities, or even to solve the three-body problem. However, the fact that the problem may be difficult enough to be intractable in practice for highly precise and nuanced circumstances doesn’t mean that there aren’t certain situations where the answer is much clearer, and where we can draw some broader conclusions at least; and to pretend that we can’t do so about more clear-cut circumstances because we don’t know everything is rather silly.
As for the notion of morality being a matter of individual preferences, those preferences too are based upon our nature as conscious beings. Our preferences are part of our minds, and are determined by the characteristics of our individual minds, by what minds are possible, and by what changes might be possible to our own minds, now or in the future. It’s not going to be simple, even in comparatively prosaic circumstances. It may be possible to say that, for instance, a fetus aborted prior to the formation of a nervous system cannot experience any suffering, but even this fact does not exist in a vacuum, and the people around it—the mother, the father, the family, the culture, etc.—are all entities that could, in principle, affect and be affected by the abortion, and so they might reasonably enter the real calculus of the situation. Its being complicated doesn’t mean it isn’t “real” and that there aren’t better and worse courses of action by whatever criteria one might judge.
I think Sam Harris said at one point, that questions of morality might best be characterized not as what is Right and Wrong, but as what to do next. And, of course, as he points out, there may be many alternatives that lead to the same peak, or to different peaks (or valleys) on his “moral landscape”, but, again, that doesn’t mean that there aren’t SOME clear-cut and straightforward situations that we can recognize as such, and some gaping valleys that most conscious beings would agree are worth avoiding.
I seem to recall that the subtitle of his book was NOT his choice or preference, and that he would have preferred it to say something along the lines of “How science can inform moral values” rather than “determine” them.
And, of course, Sam Harris is very clear that he understands “well-being” to be a nonspecific and flexible term–he likens it to the notion of “health”, which is fairly vague and can be quite variable and relational, but the fact that norms and expectations of health can change over time doesn’t mean that there isn’t a clear difference between a well-functioning organism and one that’s bleeding to death. Similarly, there can be a clear difference between a healthy, well-adjusted mind that’s thriving reasonably well, and one that’s riddled by mood disorders or other mental illness, or twisted by dogmas that lead it to destroy itself and innocent others. As I think Harris said, there could be someone out there who claims that their idea of “health” is to be vomiting continuously, but the rest of us are not obligated to judge their viewpoint worthy of consideration on issues of public health. And we could certainly say that such a notion of health is not likely to be an evolutionarily stable strategy.
Excellent comment. I attempted to get across some of this in a comment up above, but this is about 100 times better.
But any moral scheme under which what we do depends on what humans prefer or wish to avoid is necessarily a subjective scheme not a moral-realist one. Sam is right on much of what he says — except in claiming that it somehow results in an objective morality.
A “moral realist” scheme has to be one in which the purported fact “morally we are obliged to do X” holds independently of what anyone thinks of that fact.
Well, you could say that, in Sam Harris’s scheme, the best answer (for any particular purpose, or by any criteria chosen) might in fact be clear at times, but that even if one is “obliged” to do it (if we desire that outcome), that doesn’t mean that people WILL do it.
I always get stuck on the behavior of male lions. After taking over a pride and defeating the presiding alpha male, the new leader tracks down and kills the cubs. This leads to the females going into estrus, the male mating with them, and new cubs sired by the new male leader being born. We can see how natural selection would favor this behavior in the lions, but as humans we can’t see how we could stand for killing off children every time a marriage breaks up. It does seem pretty relative.
Evolutionary psychologists posit that something similar occurs in humans, and call it “The Cinderlla effect”. I haven’t read much about it, but they claim that stepchildren are abused far more often than “natural” children, and for the same reason that lions kill unrelated cubs. A stepchild does not share your genes.
Wikipedia has a page devoted to it: https://en.wikipedia.org/wiki/Cinderella_effect and points out its differences from the lion behavior.
Actually, when I first read your comment, the typo caused me to read it as “Cinderfella”, perhaps referring to the fact that men are more often the abusers. Of course, Cinderella is the child in the fable so Cinderfella wouldn’t work. Turns out “Cinderfella” is also a movie.
You cannot derive an “ought” from an “is” – this always seemed to me to be such a clear and obvious statement that I can only explain the various desperate attempts to establish objective morality on the basis of either psychological insecurity (I need somebody else to tell me how to feel about something) or desire of control (you have to act the way I tell you to, because I possess the moral truth).
I strongly disagree with that old saw. If there IS a small child drowning in front of you, you damn well OUGHT to save their life. If it IS the case that you see a man hitting a woman repeatedly on the street in front of you, you OUGHT to intervene to stop him. I agree it is a clear and obvious statement that you can’t get an ought from an is, but what is clear and obvious about it is that it is false.
I think there is a difference between “ought” and “want”. You want someone to save the drowning child.
There’s the old conundrum of diving in and saving that child, then splashing out 20 bucks on a cleaning bill, when that 20 bucks might save ten kids in Africa.
If we truly believed in “ought”, Westerners would be a lot poorer than they are.
Given the all-capitals OUGHT you use, you might fall into the “desire for control” category.
” If there IS a small child drowning in front of you, you damn well OUGHT to save their life.”
The “ought” there does not derive from, “there is a child drowning”, it derives from “there is a child drowning” combined with “humans value children”.
Oughts don’t come from “is” statements alone, they come from human values.
And human values vary a lot across history and across societies. In western countries, for instance, freedom of the individual is given a very high value, while in eastern countries what is more important is harmony within the society.
As I recall, not too long ago China didn’t much value baby girls, so the sex of the drowning child also becomes important.
Ought implies a sanctioner. You are not violating the law if you refuse to save the child (and what if you’re a poor swimmer? Lots of people have drowned trying to rescue people who are drowning.)
Here’s the first case where I disagree with Dr. Coyne: I think Harris is spot on. Anything which takes us away from the worst possible misery for sentient beings is scientifically determinable to be doing that (at least in principle), and is objectively morally better than something which moves us in the opposite direction. The fact that it is a difficult problem to sort out in some cases doesn’t obviate Sam’s point. Specifically, that using abortion as birth control when there are alternatives available to prevent a pregnancy in the first place is morally bad is a point on which I think most pro-choice folks would agree with most pro-forced-pregnancy folks, if for different reasons. On the other hand, equating abortion with murder is morally bad, and can be shown to be so by Sam’s criterion. Various points between a total ban on abortion and permitting infanticide up to some specific stage of development are trickier to decide, but can ultimately be decided by Sam’s criterion using the methods of science. People have all kinds of weird moral preferences based on largely socially constructed, often myth-based criteria, but if those preferences meet Sam’s criterion for moral evil, they are evil. As Sam points out, if the worst possible misery for sentient beings is not A Bad Thing, or morally evil, then those terms have no meaning.
What do you even mean by saying it is “objectively morally better”? If you mean that humans would prefer it or would want it to happen, then that is a morality rooted in human subjective preferences, not an “objective” morality.
I think there’s a problem with “objectively” here, as Coel points out. Objective means that you can show it with empirical evidence. In this case, you can’t. Tell me what experiment you can do that shows that increasing well being is the right thing to do. You can’t. You can only say that your PREFERENCE is that increasing well being is the “right” thing to do. If it’s scientific, it can be tested, and in this case you simply can’t test your claim.
I think that “ought” can only come from “is”—”is” here meaning our natural constitution. Our human preferences have been instilled in us by evolution, and I don’t see how we can have any others, at least if we are to preserve our psychological well-being.
When we think that through considered thought we’re overcoming our natural preferences in favor of morally superior ones, the ultimate reason why we choose the latter must be embedded in our nature as well—otherwise we must logically accept that there’s an objective morality out there discoverable through reason.
Reason just aids us in how best to channel or materialize our underlying instinctual preferences. Reason without preexisting preferences is like a sail without wind. Reason alone can’t guide us, but it can help us choose among our preexisting and often-conflicting preferences, so that the most pressing or important ones are favored, given the particular set of circumstances we find ourselves in.
If through misguided reason or utopian fantasies we choose norms and principles far removed from our nature we’ll just be miserable. The fact that “expanding our circle” (e.g., rejecting xenophobia) doesn’t make us at all miserable must be an indication that doing so is not that foreign to our nature, particularly today when thanks to our technology-enabled hyperconnectivity it’s easier for us to feel closer to the rest of humanity, and when it’s not difficult to see that the benefits of international trade and peaceful coexistence at all levels far outweigh violence, conflict and war. Perhaps under different circumstances we’d soon revert to extolling the virtues of heroic patriotism and resort to war and plunder.
Stanford Encyclopedia says:
Instead of “some”, I think the author should have said “most” philosophers define “moral realism” as requiring mind-independence. I don’t think that kind of moral realism is correct. Instead, morality is a social construct.
As we discussed in Jerry’s post about social construct, the fact that something is a social construct doesn’t keep it from being “real” in a very practical sense. Money – economic value – is real, insofar as a Martian scientific team which had no prior experience of “money” would quickly figure out the broad outlines of how money works in human society. And from the point of view of any individual human being, economic value is more or less a given, despite the fact that, in principle, “economics” depends on the actions of all human beings taken together, including this particular person.
Listening to Sam Harris on morality killed off any remaining sympathy I had for utilitarianism. His moral calculation was too convoluted and required way too much information to ever be practical. And it suffers from the same problem that all moralities do, as the article correctly identifies: there’s no move from is to ought that can answer the question of why we should act on what it says is the moral outcome.
I think morality has that objective lure because subjective judgements lack something morality needs: applicability beyond the self. It is an intersubjective phenomenon, grounded by the facts that humans have brains that reason about morality in similar ways, bodies about which there are objective facts (they feel pain, have desires, can act on desires, etc.), and that we live interdependent lives where our wants and needs conflict with others’.
To be able to say there’s a right and wrong gives a way out of the problem that the powerful do what they can and the weak suffer what they must. By appeals to a morality that governs us all, some of those imbalances can be addressed to make for a harmonious society. So without there being a higher governance, the problem of resolving conflicts and divergences of goals becomes more difficult.
Most societies throughout history have had “a morality that governs us all”: it is expressed by the system of laws established within a particular society, which reflect the prevalent morality of that society (prevalent by number or by force). But these laws vary across societies, and with time within the same society.
In western societies there might be a feeling of world-wide convergence of public morals and corresponding laws toward some common set of fundamental moral values. That might just be the result of common history (Roman law, the Christian church, the Enlightenment, etc.) plus worldwide military and economic hegemony. I am pretty sure that a future change of international power relations bringing other countries and cultures to world predominance will also bring different directions in moral expectations and corresponding established laws.
The rule of law is something, I think, that can largely replace a shared moral code, especially when it is seen as democratic and thus the will of those in a society rather than of those claiming power without such consent. My take on political battles is that a lot of them are trying to get particular moral beliefs codified as the law of the land, because the law is a proxy for morality (however imperfect). We can in a democracy at least fight for a set of rules to govern social cohesion, which is why religious groups try their best to gain power. They know it’s not enough to have believers, but to have the law of the land enforce it.
In the end all moral reasoning resolves to a consequentialist position.
Animal rights is a difficult prospect, but is the problem trying to determine the suffering of animals, or the suffering of people empathizing with said animal suffering? If no person cared about kicking a dog, or about any other animal issue, would it or could it be wrong? Not to mention that kicking a dog is a valuable endeavor in the endless cases of dogs mauling people.
Capital punishment seems to be self-evidently wrong for a number of reasons not the least being the possibility of error, yet it continues. Obviously a deontological notion that one shouldn’t kill innocent people is compromised by the numbers game of who’s innocent and how innocent are they and on whether a few innocents dying is worth upholding the right of the death penalty.
Abortion is not a difficult case.
“Abortion is one such situation: one weighs the well being of the fetus, which will develop into a sentient human, against that of the mother, who presumably doesn’t want to have the baby.”
I am not sure if this flippant sentence is speaking purely to ‘well being’, but it is a poor encapsulation of the case of abortion.
A fetus ‘may’ develop into a sentient human being. It also may not. Saying it will is begging a few questions. It is not a sentient human at the time in question.
And “against that of the mother, who presumably doesn’t want to have the baby.”
Presumably doesn’t want to have the baby? That’s the woman’s side of the well being equation? No bodily changes, feeling ill, facing the risk of death in childbirth, no having an entity sucking the life out of you, no fearing and dreading all the pain and suffering both physical and mental. The list goes on.
All that over essentially nothing.
The well being equation resolves clearly to the side of the woman.
The moral situation is simple. Lots of suffering can be reduced at the cost of no suffering.
Except that, one step further, it resolves to emotivism, since you then have to evaluate any consequences as good or bad, ones you want or don’t want. In the end, morality has to come down to human values; there’s nothing else.
I’ve grown less and less certain of the value in philosophy precisely because of arguments like these. It seems to me what should be done is to look at the fundamental dynamics in the scenario, then determine what to do based on that. What one then names it, whether “moral realism” or “antirealism”, should have no bearing on what to do.
The most prominent and frustrating example of this is the free will debate. Whether one decides to call it free will or not should have no bearing on what to do when someone commits a crime.
That’s not to say all of philosophy is useless, but I believe the useful parts are all natural extensions of science.
That said, I struggle to see how any morality is inconsequentialist, i.e. free from considering consequences. It seems obvious to me that, if one refuses to break their rules to stop, say, a genocide, that isn’t a demonstration of morality, but of rigidity.
I think there are two main issues with reasoning from consequences.
On a conceptual level, the worth of a consequence depends on the value associated with the action. “If X then Y, and if !X then Z” requires saying what the relative worth of Y and Z is in order to know whether one should or shouldn’t do X. In that respect, consequences are only useful after we decide what has moral worth, not for determining what has moral worth.
The practical problem with reasoning from consequences is that knowing all the consequences of a given action can be impossible, as we don’t have a complete picture of the knock-on effects of anything we do. It only ever works in simple conceptions, because anything beyond simple actions is too complex for us to understand and consider. For example, imagine seeing someone about to get hit by a car – the moral action might be to save the person, right? But what if that person then goes home and kills their partner, while the driver who would have hit them would have learnt from the incident and dedicated their life to helping others? Intervening or not intervening would have massive knock-on effects, yet none of that information is available to you at the time. You simply cannot see the consequences of your actions beyond the immediate. At best you could say “in the moment, given the limited information I had, it was the right thing to do”. Maybe that’s enough, but it’s an incredibly limiting and narrow view of consequences.
Your example doesn’t show that the worth of a consequence is dependent on the value of the action. It shows that you have to determine what “good” consequences are before you know what is good in consequentialism, which, duh? Of course you do.
What you’re describing in the second paragraph is fallibilism. Sure, you’ll never know all the consequences with complete certainty, but that argument is so general that it could apply to literally any action possible. You don’t know with complete certainty that eating will fill you, so why eat?
That is another issue I have with philosophy: the double standards. Sometimes a glib answer is the final authoritative word on a subject, while in other areas they twist themselves into knots trying to justify something like “free will”.
“It shows that you have to determine what “good” consequences are before you know what is good in consequentialism, which, duh? Of course you do.”
But then consequentialism isn’t really what’s right or wrong, but a tool for how to best achieve it.
“Sure, you’ll never know all the consequences with complete certainty, but that argument is so general that it could apply to literally any action possible.”
Well, yes! That’s the point. Take any action and there are going to be so many consequences that stem from it. Teasing out what’s relevant and what’s not, what can be known and what can’t – consequentialism as a calculation is going to be complex and messy, with arbitrary decisions on multiple fronts in determining what’s relevant.
“You don’t know with complete certainty that eating will fill you, so why eat?”
Because you don’t really have a choice. Hunger sucks, food is pleasurable, and satiety is a desirable state to chase. There’s nothing on the other hand that says consequentialism over deontology, or virtue ethics, or any flavour of moral subjectivism or even nihilism. Why use consequentialism when it can’t tell us what we should value as the good, nor provide us with a good way of determining what we should do?
It can be said that over time we are increasing animal rights just as we have for human rights. Both are measurable in laws, behaviours, surveys, studies and such like.
It is not the hard-and-fast objectivity that science produces from how things actually behave; rather, from science, philosophy, criticism and conjecture we learn how to behave in nature. It’s certainly not perfect, as we know, but it is still a natural learning process for a primate investigating our place in the Universe.
You’re a bit above my pay grade here – philosophy isn’t my strong suit. But I’ll vote with the welfare of animals every time. If you look at their brains they feel pretty much all the same emotions that we do (or at least the mammals) — with the possible exception of boredom which we as humans suffer from more.
Our society embraces the toxic monotheisms (cross, stone and book worshippers) which draw a line between sentient beings (who actually suffer just as much as we do)…and us. They do it by sprinkling the magic fairy dust of “soul” on us, and not them, the animals. This is surely the most evil part of religion.
There’s no fairy dust in an MRI, and a dog/cat/rat etc. feels the same pain and emotional hurt as we do, as we see in the lab. Religion tries to deny that.
Just my unqualified $0.02.
By domesticating them, we’ve bored the life out of cats.
I have a real world example :
California – including people, animals, and property – gets destroyed by wildfires every year. Wildfires call for vast quantities of nearby water to put them out rapidly. Most (all?) of the world’s almonds are grown in California.
Is it objectively moral to consume almond products – given these conditions – if almond trees require 100 times as much water per pound of product as other produce like tomatoes or peaches? All these things can be quantified, with correlations calculated – an objective view. Perhaps the rate of water use is very slow for almond trees. But what nutrition do almonds provide that cannot be obtained in another, more efficient way?
^^^ This is based on Bill Maher’s editorial last week. I knew it was bad, but _that_ bad? I haven’t verified the claims.
Answer: there’s no “objective” moral answer to this. What there is is how much humans value almonds, water, property, animals, et cetera, and trade-offs between the things humans want. There’s no answer to this that doesn’t ultimately derive from what humans want, and is therefore subjective. (Which is not a dirty word!)
Sapolsky argues that we can all be good and careful with being analytical and objective about everything, but emotion is driving most of our actions. I think he’s right.
Not that I’ll give up analysis, of course.
I thought that learning philosophy might be a good thing, but then I discovered the French learn it at school. Consider French society – is it superior in any way as a result? Do they make better, more knowing decisions about life? Their country is packed with fascists & anti-vaxxers!