Moral Maze podcast

June 2, 2011 • 7:14 am

I’ve been told that our discussion of science and morality at the BBC’s “Moral Maze” show is now online here.  I haven’t yet listened to it, but I wasn’t all that satisfied with how things went with the panel and witnesses.  There was a lot of confusion about what “science” was: most panelists assumed that this meant “brain science,” construed as sticking electrodes in our skulls or doing brain imaging. (I suppose they were influenced here by Sam Harris’s book.)  There are, of course, other ways for science to inform morality, or even to see if there is a common thread underlying moral judgments, which I think is an extremely important enterprise—and a scientific one in terms of being tractable to empirical study.

I don’t think that religion should have been part of this discussion, for that’s an entirely separate issue. (But at least one panelist—Clifford Longley, I think—dealt with the Euthyphro problem by admitting that God’s dictates aren’t moral by virtue of coming from God, but that there is a “higher” source of morality.)  And—to my mind the biggest problem—until the very end there was almost no discussion about where morality comes from.  Surely that has to play a role in discussing science’s role in moral judgment.  I was prepared to talk about a combination of evolution and reason, but that issue didn’t arise.

46 thoughts on “Moral Maze podcast”

  1. I still maintain that morality is, in the sense of game theory, an optimal strategy.

    An action is moral if it’s the best strategy for the individual to pursue. And, if you’ve studied anything at all about game theory, you know that that most emphatically does not mean that you should go around raping and pillaging.

    Individuals who live in peaceful cooperative societies fare much better than those who live in chaotic terror. Being a good and productive member of society (and not raping and pillaging) is the only way to survive in a peaceful cooperative society.

    Slavery is a dead weight around the neck of a society: all the slaves are wasting their potential. Rather than picking cotton, they could be inventing cotton gins. The same goes for lesser forms of slavery, such as the plight of women in much of the third world (and especially the Islamic parts).

    Once you can get it into your mind that your best interests extend beyond the next two minutes it’ll take you to rape that pretty girl you’ve been drooling over — which, ultimately, is the essence of every objection to this observation I’ve ever encountered — the rest should become obvious.
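The claim that cooperation, not raping and pillaging, is the winning strategy can be illustrated with a toy iterated prisoner's dilemma. This sketch is illustrative only — the payoff values are the standard textbook ones, not anything from the discussion above — but it shows why "always defect" loses to a cooperative-but-retaliatory strategy over repeated interactions:

```python
# Toy iterated prisoner's dilemma: "always defect" vs. tit-for-tat
# (cooperate first, then mirror the opponent's last move).
# Payoffs are the conventional textbook values (T=5, R=3, P=1, S=0).

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=100):
    """Play two strategies against each other; return (score_a, score_b)."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "C"

# Two cooperators each earn 300; the defector wins one round's windfall
# against tit-for-tat, then both grind out mutual-defection payoffs.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The defector "wins" the head-to-head (104 vs. 99) but earns far less than two cooperators do against each other — which is the sense in which peaceful cooperative societies fare better.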



    1. For a Christian, morals come from the serpent, according to their Genesis fantasy. Their god does not teach them to distinguish “right” from “wrong”: “then your eyes shall be opened, and ye shall be as gods, knowing good and evil.”

    2. Questions:

      1. Lying and cheating always evolve in groups of animals that share information or resources, because they are viable evolutionary strategies. Cheating becomes the best strategy for certain individuals given the group they’re in. Does this not run counter to what you’re saying?

      2. Realizing that your best interests extend beyond the next 2 minutes, or the next 2 days, or 2 years is neurologically a matter of delayed-reward discounting. Humans have known limitations in their ability to delay gratification. How far into the future should we be expected to delay? The answer will change the calculus on what my or society’s best strategy is.
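The delayed-reward discounting mentioned in question #2 has standard quantitative models: exponential discounting and hyperbolic discounting (the latter fits human behavior better in the literature). A small sketch, with purely illustrative parameter values, shows how the impulsivity parameter k changes which choice is "best":

```python
# Two standard delay-discounting models. k is a per-individual
# impulsivity parameter; the numbers below are made up for illustration.

def exponential(value, delay, k=0.05):
    """Exponentially discounted present value of a delayed reward."""
    return value * (1 - k) ** delay

def hyperbolic(value, delay, k=0.05):
    """Hyperbolically discounted present value: V = A / (1 + k*D)."""
    return value / (1 + k * delay)

# Choice: $100 in 30 days vs. $60 right now.
# A patient individual (small k) waits; an impulsive one (large k) doesn't.
for k in (0.01, 0.10):
    later = hyperbolic(100, 30, k)
    choice = "take the $100 later" if later > 60 else "take the $60 now"
    print(f"k={k}: discounted value {later:.2f} -> {choice}")
```

Because k varies between individuals, the "best strategy" computed by one person's brain genuinely differs from another's — which is the point of the question.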

      1. Regarding #1: In evolutionary matters we see inescapable suboptimal local maxima everywhere; the recurrent laryngeal nerve is always a big favorite. In a society such as you describe, cheating might not be immoral; however, much more moral would be working to change the society itself into one that doesn’t need to rely on cheating. In the resulting society, the actions of the society’s progenitors would rightly be considered immoral. This is exactly analogous to slavery in the modern world.

        Regarding #2: Morality can only possibly work at the unit of the individual; however, again, the interconnected nature of society again provides hooks into extending gratification beyond the current generation. Parents already sacrifice for their children for reasons that shouldn’t need to be explained in this forum. So the calculus must always operate on the individual level, but the results of that calculus will almost always dictate gratification at levels beyond the grasp of the individual.



        1. I think what you are saying can be reduced to the genetic imperative to survive as a property of any and all life-forms, humans included.

          Since life is a property of matter and surviving is a property of life, we should first and foremost study the mechanisms of our _continuous_ survival as individuals and as a species.

          It can be argued that science is the _only_ basis for such continuous survival, and therefore it makes sense to focus on what needs to be done to make science the shepherd of the human condition sooner rather than later.

          “Morality” and “beliefs” are second-order intellectualizations relative to “matter.”

          We should always remember that any discussion that is not properly framed will most likely contribute to the “noise” that only distracts from the proper institutionalization of science: “economics,” “political science,” and “philosophy,” to name a few, are heavily polluted by, if not entirely composed of, this “noise.”

          1. I’ll buy that, but with the emphasis that, just as genes act at the level of the individual (and various other properties emerge with large populations), so too does morality act at the level of the individual.

            That’s part of where all those “torture random innocents for the better good of all humanity” thought experiments fall down. It’s the piece that Sam Harris is missing. His morality is imposed from the top down; in reality, morality organically emerges from the bottom up. Just as everything else that follows an evolutionary pattern. (Duh!)



            1. Agreed.

              I think Sam Harris is trying to walk a middle ground and introduce a taboo subject into the discourse in a way that minimally affects his position in the structure.

              In this case he is very limited in what he can say.

              His book has very little science in the pure sense (that is, science cleaned of cultural, ideological, and other beliefs) and a lot of “dancing around” in words.

              I don’t blame him, because it is not possible to put a truly scientific book on the subject out there and have it pass the self-censorship of institutionalized ignorance.

        2. #1: Hmm… I’m not sure I entirely understand you.

          If almost everyone else is playing fair, doesn’t that make cheating the optimal strategy? Isn’t it a global maximum for you? You seem to be saying there is a better strategy for this individual, but how do you know?

          #2: Let me disagree using a toy example.

          I’ve had the experience of being part of a team trying to accomplish something, where arguments between the members of the team have caused insults to be thrown, egos to be damaged, etc. Through the fighting a grudging consensus is reached, but all injured parties might not have received the apology they’re owed. There is a tendency to want to keep arguing until one feels the scales of fairness have been balanced. But sometimes you have to put aside what you want or deserve because you know that more strife at this particular moment will only make things worse, or delay accomplishing your goal even longer.

          So in this moment, if you have the maturity it is your “best strategy” to let the fight go and move forward as a team. So far so good.

          But not everyone will have the maturity to do that. Some people will be too hot-tempered, or too impetuous, or too insecure. You might say “well, they can learn.” But there will always be situations that call for more reserve or delay of gratification than people have. Presumably these characteristics are normally distributed. So the best strategy according to me may not be the best strategy according to Jeff, because Jeff can’t let shit that bothers him go as well as I can. So he’ll do the thing that’s worse for the team because it’s best for him (given the limitations of his mind).

          I guess, in short, I’m not convinced that the best strategy for a diversity of individuals is going to end up being something that’s great for the group.

          In a sense, haven’t societies always collapsed because of problems like this? It’s easy to say that rape is a suboptimal strategy when you’re not a rapist. Clearly, rapists don’t think so.

        3. Also, when you reply answer one more thing:

          What is the “best strategy” in your theory?

          In evolution, it’s whatever gets the most copies of your genes in the population. In your theory (or game theory), is it what allows you to live the longest? Or to stay out of jail? Or to be the happiest? Did you operationally define “the good life” before coming up with this idea?

          1. So, replying to both of your posts:

            If almost everyone else is playing fair, doesn’t that make cheating the optimal strategy?

            An even better strategy is to play fair but to have mechanisms to prevent others from cheating. “Trust, but verify.”

            So he’ll do the thing that’s worse for the team because it’s best for him (given the limitations of his mind [emphasis added]).

            That’s the key. In your example, Jeff may think he’s doing what’s best for him, but he’s really shooting himself in the foot by torpedoing his long-term career goals. Once he gets a reputation for being the guy who always has to lord his perceived superiority over everybody, nobody’ll want to have anything to do with him any more.

            Your other example illustrates a similar point:

            It’s easy to say that rape is a suboptimal strategy when you’re not a rapist. Clearly, rapists don’t think so.

            Consider in our society that — with the big honkin’ caveat that our society has a hell of a long way to go — a rapist (in theory) pays a huge price for the “privilege” of raping a woman. He’s now a wanted, hunted criminal in danger of apprehension, conviction, and imprisonment. When caught, his life is ruined. (Again, for the sake of the argument, kindly assume that our legal system treats rape the way it says it wants to, and not in the pathetically embarrassing way it actually does.)

            No, that’s most emphatically not a “Won’t somebody please think of the poor rapists!” argument. It’s an observation that, if you really want to rape somebody, the price of admission is a hell of a lot higher than a few scratches and a shot of pepper spray.

            Before I continue, let me quote your last question, which gets the same answer as the rest of your rape question.

            What is the “best strategy” in your theory? […] Did you operationally define “the good life” before coming up with this idea?

            That’s the great part about life: you get to pick and choose what you value most. You may well decide that your ultimate goal is to become the world’s most feared serial rapist.

            But the math still works out for society as a whole.

            You see, simply stepping out into Times Square, pulling down your pants, and grabbing everybody in arm’s reach isn’t going to get you very far; you’ll be pinned down and possibly ripped to shreds in no time at all. So, you’ve got to be more subtle. You’ve got to blend in. You’ve got to give the appearance of not being a rapist, or else you’re right back where you were before — naked in the middle of Times Square with an angry mob tearing you to pieces. And the best way to not appear to be a rapist is to not be a rapist.

            Ultimately, the more advanced the society, the more self-defeating such a goal becomes. In order to carry out your goal of destroying society, you have to work from within society. In order to work within society, you have to survive within society. In order to survive within society, you have to be a good and productive member of society. As soon as you start to stray from being a good and productive member of society in order to fulfill your goal of destroying it, society steps in and stops you cold in your tracks.

            More simply: in order to achieve any sort of goals at all, you have to be alive. Your chances at living are better as part of a society than outside one or working against one. Insofar as your goals help uplift society — or, at least, don’t harm it — you will be much more successful than if you’re a threat to society.

            And evolution, over generations, will sort it all out. As rational beings we have the ability to predict how that’s most likely to go down and set ourselves up with what’s most likely to be a winning hand.
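The "price of admission" argument above is, at bottom, an expected-value calculation: defection pays only when the probability of detection times the penalty is small relative to the gain. A crude sketch, with numbers invented purely for illustration:

```python
# Crude expected-value reading of the "price of admission" argument.
# All numbers are made up for illustration; units are arbitrary utility.

def expected_payoff(gain, p_caught, penalty):
    """Expected utility of defecting against society."""
    return (1 - p_caught) * gain - p_caught * penalty

# Weak enforcement: rarely caught, mild punishment -> defection profitable.
print(expected_payoff(gain=10, p_caught=0.1, penalty=20))   # 7.0
# Strong enforcement: usually caught, ruinous punishment -> a losing bet.
print(expected_payoff(gain=10, p_caught=0.8, penalty=100))  # -78.0
```

This is why, as the comment puts it, the more advanced (i.e. better at detection and enforcement) the society, the more self-defeating predatory goals become.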



    3. So I can get the idea of the good as simply the best strategy we can come up with given a particular problem.
      On the other hand, where is the psychological theory that sees a person accepting the optimal strategy?

      1. On the other hand, where is the psychological theory that sees a person accepting the optimal strategy?

        Once you bring psychology into the question of morality, all bets are off. Humans are notoriously lacking in all sorts of cognitive abilities. We’re as capable of doing complex calculus unassisted as we are of transporting heavy objects unassisted. And that’s ignoring well-identified mental malfunctions such as sociopathy.

        The best I can offer is the observation itself as a tool to help guide the actions of those who aren’t completely irrational. Perhaps the “science” of marketing can offer some insights as to honest ways to help people to align their actions more with their long-term big-picture interests rather than instant gratification. And, yes, I am completely aware of the irony in the preceding sentence, as should be all those who read it.



        1. That’s some pretty bloody hard core moral realism you’ve got going on there Ben. I salute you! Just a suggestion though: I don’t think you need to bite that bullet quite as completely as you do.

          My own preferred answer is to say that objective reasons are inherently motivating, but just as we need eyes and brains that work in particular ways to see colours, we need particular ways of thinking to “see” reasons. This of course raises the question about ways of thinking and your slightly flippant quote about marketing. Just to be brief, I think that once we’ve got this far the entire question has shifted. Objective reasons, the optimal game-theoretical strategy for instance, are still there, but the real question is now how to create people capable of perceiving them, and this is what Ethics is really about.

  2. Ah well. These things never turn out the way you rehearse them in your head.

    It’s like trying to speak a foreign language. You want the respondent to reply in words you understand, and rehearse your reply on that basis … sadly, things almost always go awry.

  3. Wasn’t it Giles Fraser, the Anglican guest, who said that morality was beyond or above god, whereas Longley the Catholic thought it needed god?

    I hope that you get the chance to give them feedback. I was surprised by the strange questions they put to you.

  4. I hadn’t listened to Moral Maze before, but was unimpressed with the format. It takes time to raise a point, then clarify any misunderstandings, then reach a conclusion, but the host and panelists are too eager to interrupt and derail the line of argumentation. The result is an apparently fast-moving discussion that sounds terribly intellectual to a casual listener, but actually makes little substantial progress, and whose perceived outcome is likely to be driven by rhetorical point-scoring. The woman on this panel was particularly culpable in that respect, jumping immediately on any sensible qualification or caveat offered by opponents, with some premature retort like “then why do you say science has anything to contribute”, as if she had scored a zinging logical point. I don’t know who she is, but I would guess she has more experience as a debater than a careful reasoner.

    I thought it particularly interesting that, when offering an off-the-cuff circumstance that would justify violence, she jumped to “if my sister were being raped.” There is a whole pile of evolutionarily-based reasoning behind that particular knee-jerk example, from her choice of crime to her choice of victim. For instance, evolutionary theory could help her understand why she apparently feels she would be more justified in defending a close relative than a stranger. If she paused to reflect, she might find it difficult to offer a basis for the apparent moral distinction implicit in her statement. Evolutionary biologists understand that it reflects an unconscious inclusive fitness calculation so fundamental to human social structure that society recognizes and respects it implicitly and unquestioningly. By illuminating the deep-seated psychological basis for the kin/non-kin distinction, science provides a solid empirical foundation for a discussion of whether someone in the kin-based circumstance should be judged more or less culpable for her actions than someone in the non-kin scenario. Of course, we cannot give the answer, but without that understanding, I don’t think we’d even be in a competent position to pose the question. The question also touches on the absence of free will that Jerry mentioned, and the question of where morality comes from, which I agree was not adequately addressed.

    These sorts of extended points and reasoned rejoinders cannot be made in a program with this format, unless you’re exceptionally quick-thinking (Jerry, you’re absolved, as she didn’t make that particularly revealing comment during your spot).

      1. Well, in this episode of the Moral Maze, for sure. She really didn’t have a good grasp of either Joshua Greene’s or Jerry’s responses. I guess more generally she’s just a touch too libertarian for my liking. Maybe in her defence we could say that at least she’s not Melanie Phillips.

  5. Ah– thanks for posting the link. Didn’t get to listen when this was on, unfortunately. *rushes off to listen to the “strange questions they put to you”*…

  6. I caught the discussion and was disappointed as well. Each person had an opinion on morality in relation to science, but they seemed to use terms which had varying definitions. Without consistent, agreed upon definitions of the ideas being discussed it was bound to be a clusterf***k. It was. There seemed to be a lot of confusion and lack of comprehension, willful or otherwise.

    The idea of morality as a fixed value separate from the whims of a wrathful god are always a sticking point. Epicurus was rather succinct in the matter. You either have a deity that is not omnipotent or one that is a prick. My Xtian roommate is of the latter persuasion. He says since Gawd told the Israelites to commit genocide it was “good”. Common argument. I’m always a tad reluctant to ask what those holding such views would do if their minister said he was 100% convinced we should return to stoning teh gheyz.

  7. This was just another slickly done radio program that sped through a topic that needed more attention to the basic question. I, too, agree that the woman panelist came off rather badly; her arguments did not convince me. Jerry, I feel that in the limited time you were given, you did a good job.

  8. “Clusterfuck” is exactly the word for it. What a mess. Especially the woman — didn’t catch her name — who kept interrupting and seemed to want to derail the discussion to make some sort of political point. I wanted Jerry to say, “I have no idea what the fuck you’re talking about.” And he sort of did, only very politely.

    1. Claire Fox (or Foster), once an interviewee on the show, and invited to be a regular because of her “controversial” style. Good radio, maybe, but not good for reasoned argument.


  9. I was disappointed at Jerry’s lack of hectoring and failure to deploy strident, opinionated language. Despite being continuously interrupted by Claire “rent-a-mouth” Fox, he was polite to a point.

    I just thought I’d comment on style…the rest was so vacuous and superficial as to be impossible to follow.

  10. I’ll have to admit all the criticism of The Moral Maze is pretty much spot on. It’s rare that you get more than a fairly shallow debate but occasionally you get some zingers.

    That said it is a bit of a guilty pleasure, a chance to feel my moral outrage peak and yell at the radio, particularly when that odious creature Melanie Phillips is on.

    1. Jerry:

      I’ve just finished giving it a listen and I give your contribution an enthusiastic two thumbs up. As far as I’m concerned you put exactly the right points forward which I read as roughly:

      1: Moral questions can’t be decided beforehand. A bit of a no-brainer for a scientist, but something these debates often seem to forget.

      2: Science is about using appropriate tools for appropriate questions. While neuroscience is good for some questions, there are others where we might prefer to use, say, developmental biology or evolutionary theory or psychology or whatever is appropriate to getting us our answer.

      If there was any confusion in there the blame is squarely Claire Fox’s.

  11. I’ve been thinking about this issue for some time, with an eye to perhaps writing something on the topic, and it seems to me that people defending scientific grounding for morality are ceding ground unnecessarily in the face of the supposed is/ought distinction.

    It seems to me that moral judgements are best viewed as analogous to judgements about colour, and that accepting that analogy allows you to ground morality empirically and run with it.

    Everyone who lacks any form of psychopathology is able to look at some things (the rape and murder of a small child, for example) and see that they are wrong. Similarly, some colour differences are blatantly obvious to anyone whose colour vision is not impaired.

    Other cases are less obvious. Consider the question of whether a passing car is blue or purple. This is a question on which it is perfectly possible for two people who both have perfect eye-sight to disagree. Further, there is little either can say to argue their position beyond “Look at the bloody thing! Of course it’s purple!” You can have similar arguments about other colours – I’ve had many a disagreement over green/brown, for example.

    So, whence come our basic moral judgements? They are empirically based, in the same way as our colour judgements are empirical. We experience things as good or evil in exactly the same way that we experience things as red or green. These experiences are primary, not rooted in reason, but coming directly from our experience of the world.

    Of course, there are some people who simply lack empathy, or some other faculty that allows us to comprehend particular moral distinctions. But similarly, there are people who are colour-blind, whose eyes and brain function perfectly well, but who simply don’t see some colour distinctions as existing at all.

    Note that a colour blind person’s eyes are working perfectly well – they are not discerning between colours because they are not designed to do so, rather than because they are trying and failing to do so. To claim that colour blind men are failing to see properly is all well and good, but runs you into problems when you compare normal people to tetrachromatic women, who have far superior colour discernment to the rest of us.

    Now, we can use science to inform ourselves about colour in many ways. We can study the actual objects, their reflectance profiles, the light hitting them, the nearby items against which they are being contrasted or from which they are interreflecting. We can also look at the eyes and brains of the people doing the observing, and seek information about colour from there.

    In our day to day lives, we experience some things as things we ought to, or ought not to, do. How is that experience less empirically respectable than my experience of blueness or purpleness? Both are subject to disagreement, but such disagreements will come either from differences about the meanings of the terms, or differences in our faculties.

    Ultimately, I would contend that if we can be said to know empirically that a fire engine is red, we can also know empirically that we ought to rescue a child from a burning building. Both of these things are experienced by normal people with undeniable immediacy.

    If we can get any oughts from empirical observation, then we have everything we need to reason to more oughts — after all, there is no problem whatsoever with combining is and ought statements to arrive at new oughts. And I would say that ultimately, the majority of our oughts come to us in that way — they spring sui generis from our experience of the world, embedded in it in exactly the same way as any other experience we have.

    Ultimately, then, I would say not only that science can tell us about morality, but that it’s the only possible place to ground a coherent theory of morality, and that using other methods (religion, intuition, pure reason) to explore morality will get us exactly the same kind of results as using those methods to explore astronomy or biology.

    1. I think I’m largely with you here Groovy. Hopefully though you don’t mind me putting some of the objections I’ve faced.

      The standard objection to moral realism more generally is that it fails to make sense of our moral psychology. This is basically the is/ought problem, but there are two strands to how it applies to the view you’re putting forward.

      The first is that it raises the question of where motivation fits into your account. Moral claims are motivating; they push a person towards performing certain actions.
      Unless motivation enters into your account, the accusation will be that you’ve not actually described ethics. Along the same vein you need to keep an eye on free will. Part of what makes free will such a live problem is that we feel there is something fundamentally free about our choice to perform actions so any view of motivation you propose needs to either be neutral to this intuition of ours, or to explain it away.

      The second problem concerns what it is in the world that we’re perceiving. For it to count as a perception, we seem to need an idea of “ought-to-be-done”-ness built into our metaphysical concept of objects. The idea is to imagine a table that, just by looking at it, encourages you to, say, be kind to puppies. Surely this is a bizarre kind of thing to conceive of, and yet we have no analogous trouble with, say, colour.

      1. My response to the first problem would be that our moral experiences are motivating in precisely the same way that the smell of a delicious meal, or the appearance of a beautiful woman, is motivating.

        When I smell a steak cooking, there is no need for me to rationally evaluate that stimulus. I smell steak, and my mouth waters, and I am drawn to go grab a steak. If I’ve just eaten a really big meal, then maybe I react less strongly, perhaps by penciling in steak for tomorrow’s dinner on my mental menu.

        Raw sensory impressions motivate us to act all the time. We react when we see a fast moving object coming at us. If I take a drink of water, and it’s bitter, I quickly spit it out. Now, if I expect it to be bitter, I can control my reaction – but then, if I expect to be morally horrified by something, I can equally overcome that horror.

        I don’t consider free will to be a problem, because I only require us to experience ourselves as being free. Whether we are ACTUALLY free in any ontological sense is irrelevant. Just as my intellectual knowledge that the steak is really just empty space dotted with a swirling cloud of elementary particles does nothing to stop my mouth watering, the knowledge that my actions were necessitated by my history is irrelevant. We don’t experience ourselves as constrained, and morality is grounded in experience.

        As for the second objection, I would say that we knew about colour LONG before we knew about reflectance profiles, wavelengths of light, and so forth. How did we discover the latter? By examining the former scientifically.

        When we have spent as much time studying the psychology and neuroscience of morality as we have studying the physics of light, and we STILL don’t have an answer about the substrate upon which moral experiences supervene, I will consider that a scathing indictment of my position. For now, though, I would say that we have some pretty good starts looking at empathy and mirror-neuron reactions, at the propagation of social norms through in-groups, and through the neuroscience of psychopathology (to name but a few of the many fertile leads that immediately occur to me).

        As for a table that, just by looking at it, causes you to want to be kind to puppies, sure it’s ridiculous when stated that way. So is the idea of a table that causes you to smell magnolias by looking at it. Why would you expect to have a moral response when looking at something that is irrelevant to morality?

        Though, now I think about it, you could probably achieve the effect you describe by presenting a table with pictures on it of cute puppies being tortured horribly. Certainly that would create in me an immediate response of “That’s wrong, you should be nice to puppies!” You could possibly achieve the same thing with a table similar to one that had been used to crush puppies to death in front of you. As soon as something relevant to the moral properties of puppies is introduced, the problem is solved, and it makes no sense to expect us to have experiences irrelevant to the objects or circumstances from which those experiences spring.

        Besides, intuition equally tells us that a grey figure is invisible on an (identical) grey background. That doesn’t change the fact that we can set up a situation where that exact effect occurs, and experience it even while remaining incapable of describing it to others. And, having done so, we can explain scientifically exactly what is going on in our brain to create that weird experience.

        I hope this answers your objections to your satisfaction.

        1. I should probably have added that yes, even just colour experiences motivate us. People are more relaxed in a blue room, less so in a yellow room. We are more likely to obey a man in a red hat than a man in a green hat. We perform better on an IQ test written in blue ink than on one written in red ink (indeed, even just having your name on the cover in a colour affects your performance.)

        2. Cheers for the full reply.

          I’ll start with your second point about “ought-to-be-doneness” because I think you’re pretty much right. The reply has to be that there is no reason why the subjectivist, or anyone else for that matter, gets to simply dictate what the world is like or even has to be like prior to empirical discovery.

          The question of moral motivation is a harder one though. The challenge will be that you’ve left out a step. Your standard Subjectivist will, following Hume, say that there are hidden premises in you deciding you want to eat steak, for instance that you find steak delicious and want to eat delicious things. The argument is that it’s these extra premises that are doing all the work, not your perception of the steak.

          Now for my part, I think that there’s a way around this objection, but for some people it’s a rather bitter pill to swallow. Briefly, I think that motivation simply is what it means to perceive a reason. What this means in turn is that there is no special faculty of reasoning, and that all it is is another form of perception. Finally, I’d argue that all forms of perception are dependent on theory in order to be genuinely perceptual, i.e. to avoid the various illusions that you raise. The major difference between this rough sketch and what you’re suggesting is that it makes the relationship between motivation/reasoning and the world equivalent to the relationship between a colour as I perceive it, and the facts about the world that make a particular object appear to me that way.

          1. I agree that they would say that, but I would argue that they are empirically wrong. As a matter of fact, you can be made hungry by a smell that you’re not consciously aware of, feel the need to go to the bathroom due to a sound you are not fully aware of, and so forth.

            Many of our reactions to sensory impressions are unconscious even if we are aware of the stimulus (our many unconscious reactions to the colour red, or to the temperature, for example.)

            If someone wants to posit that you’re reasoning unconsciously from premises you’re unaware of, I guess they can do so, but it seems kind of ad hoc to me.

            As a matter of fact, many of our sense impressions have action-guiding properties, in that they produce in us an instinctive inclination towards action. Given that, it seems churlish to suggest that our moral impressions cannot likewise motivate us to action.

            That’s not to say that we don’t have reasoned actions, or reasoned goals. We certainly have both. But much of our short term, immediate motivation comes directly from sense impressions.

            1. Spot on, mate. That’s roughly the view I’m aiming for. I certainly agree that the standard Subjectivist model of morals is just plain wrong when it comes to describing how people are actually motivated.

              I’d only challenge the idea that you need to include that bit about instinctive inclination towards action. As far as I’m concerned reasons as a whole, whether conscious, unconscious, good or bad, simply are things which incline us towards action. The question isn’t therefore whether or not I’m motivated by reasons, but whether or not I’ve actually perceived a particular state of affairs, such as say the suffering in Africa, as a reason. The analogy should be that motivation is to reasons, as colour is to paint.

            2. Bah, that’s not quite right either. Motivation is to reason as my subjective experience of red is to red paint.

  12. Meh, I’d never have tuned in in the first place without the carrot of hearing Jerry. Just as with political debates, these formats are all about performance; logic, reason, and facts are immaterial to the posturing. NO topic of any importance should depend on the suavity, charisma, & sexiness of the speaker (though JAC’s got those nailed, of course).

    I was ready to throw a vase at my monitor when they plunged right into the is/ought chestnut w/in the first few minutes.

  13. That was pretty painful to listen to, for exactly the reason Jerry pointed out: their definition of “science” was ridiculous, so they ended up arguing about something no one was saying.

    I thought Jerry did a good job responding – calling them out on the ill-defined “brain science,” but it didn’t seem to deflect anyone’s line of questioning.

  14. My apologies for coming late to this discussion – I’ve been wrestling all week, mostly unsuccessfully, with a failing internet connection. As someone who is a relatively regular panelist, the Moral Maze, it seems to me, is a show with strengths and weaknesses, a format better suited to debating some issues than others. This week’s programme was messy and often confused, inevitably perhaps given the complexity of the issue, the subtlety of the arguments and the depth of knowledge required. I agree, too, that Jerry pulled the short straw, with some not very illuminating questioning or discussion.

    Nevertheless, I thought there were some useful parts of the debate, particularly the discussion with Josh Greene (I know, I know, I would say that wouldn’t I, given that I was doing the discussing) quite a lot of which was germane to the discussion here.

    I wanted to explore two issues with Greene. The first was the idea that a rational moral evaluation may give a different answer to that provided by scientific data or utilitarian cost-benefit analysis. To take the example I used in the programme, whatever science may say about racial differences, and whatever may be the outcome of a cost-benefit analysis of the enslavement of ‘inferior’ races, there is a rational moral argument for treating all humans equally that rests on the fact that all humans possess a certain integrity by virtue of being autonomous moral agents. Or, to put it another way, the moral answer may well be contrary to that suggested by scientific data or cost-benefit analysis, and there is nothing irrational about ignoring such data or analyses in making moral evaluations. It is simply that the logic of moral evaluations is different to that which undergirds the assessment of empirical data or utilitarian cost-benefit analysis. The problem arises not in ignoring empirical data but in doing so in such a way that closes off rational debate (‘because God says so’).

    The second issue I wanted to discuss was Greene’s argument that we have two modes of moral thinking, intuitive and consciously reasoned, and that Kantian notions of rights and duties emerge from our intuitions while conscious, reasoned moral evaluations are driven by utilitarian considerations. I disagree. Notions of rights and duties, I would argue, are not merely products of our intuitions but, especially in the case of rights, emerge through historical development and through rational assessments of what it is to be human.

    Josh Greene appeared to agree with me with respect to the first point. But we never managed to discuss the second. There are, unfortunately, only so many questions you can ask in 3 minutes – another of the constraints of the Moral Maze, I’m afraid.
