I have landed

October 28, 2012 • 2:38 pm

I have returned from the Moving Naturalism Forward meetings, and am now in Cambridge with friends, exhausted and needing a drink.

I drove back to Boston with Dan Dennett—just the two of us since Richard Dawkins left for Boston yesterday to avoid the hurricane. Our entire 2.5-hour drive from Stockbridge was devoted to Dan trying to convince me that humans not only have a form of free will worth wanting, but that we are also morally responsible beings. He did not change my mind: I think we are responsible beings, but that the concept of moral responsibility adds nothing and, indeed, could be detrimental to endeavors like reforming the justice system.

I’m starting to think that the whole concept of morality is an outmoded impediment that should be discarded in favor of what Nick Pritzker (who sponsored our meeting) called “good ideas about how to run society.”

Regardless, Dan is a formidable interlocutor and a ferociously smart man, and used every weapon in his philosophical arsenal on me, including his famous penchant for thought experiments.  I just wasn’t convinced, but oy, do I need some ethanol after two hours of intensive brain exercise!  But Dan is a sweetheart and I love him to bits; he’s the kind of guy who can still be your pal despite intellectual disagreement.

It was a fun meeting, but I need to recharge my batteries before I write any more about it and post my photographs. In the meantime, read Massimo’s account of Day 2 over at Rationally Speaking.

If the hurricane blows over, I’ll leave for Mexico Thursday.

Oh, and I have a book autographed by everyone there, with each person contributing an equation, drawing, or slogan.  Start thinking about loosening up your wallets and bidding for it for Doctors Without Borders. It includes a Feynman diagram drawn by Steve Weinberg showing the production of a Higgs boson! I’ll post a scan of the signatures before I put it up for bid in late November.

44 thoughts on “I have landed”

  1. I envy you those two hours with Dan, I really do. Shame he couldn’t make a dent, though… 😉

  2. I recently attended a lecture by Dan in London on the topic of free will. He started with a strange anecdote about a guy who was told by doctors after surgery that they had implanted him with a chip (or something) through which they could control his every decision. So the guy turned bad etc. because he didn’t think he had free will anymore.

    Later, Dan attacked religious “morality” by saying that if you are moral just because of your religion, you’re not really moral and if that is the only thing keeping you from killing etc, you are not “wired right”.

    But – how is that different from the case where you are moral just because you believe in free will? I would argue that if your belief in free will is the only thing keeping you moral, you are also not “wired right”.

    I’m convinced that I don’t have free will in any meaningful sense of the word. But that, in my mind, has not changed me as a moral agent in any way. My ability to empathize was not wiped from my brain with the realization that I don’t have free will. I also suspect that was the case for the majority of the audience in London. I was quite disappointed by the lecture, I must say.

  3. This looks interesting:

    I’m starting to think that the whole concept of morality is an outmoded impediment that should be discarded in favor … “good ideas about how to run society.”

    Would you elaborate?

    Do you mean it would be better overall for humanity (or sentient beings in general) to stop thinking in terms of moral obligations and constraints, and to start thinking in terms of what would promote the most overall goodness for the world? Or, in other words, that it would be better for us to think that our one moral obligation is simply to promote the most goodness possible? (Or at least to add to the total goodness in the world?)

    I ask because I think that suggestions about first-order ethics are rarely seen on this blog, so I’m curious about what your developed views are.

  4. How does *moral* not add anything to responsibility? It at least tells us what sort of context we’re talking about. Being responsible for finishing my homework is a different context than being responsible for how I treat other people.

    1. Maybe not, because if you don’t finish your homework, you won’t become edu-ma-cated, and hence won’t be able to interact with (nor contribute to) society in an optimal manner.

      😉

  5. If the hurricane blows over, I’ll leave for Mexico Thursday.

    Good luck…and be prepared to be stuck on the East Coast without power for a while. By all reports, this one is going to get very, very nasty.

    b&

  6. Well, if 2 hours of Dennett on compatibilism couldn’t persuade you, I’m not going to try any more myself.

    And there goes my fantasy of “gee, if I only had Daniel Dennett in my pocket and could pull him out right now …” Of course, it’s been replaced by a fantasy two hour car ride with Daniel Dennett waxing eloquent on philosophy.

    1. LOL,

      That is exactly the comment I was going to leave.

      Hours in a car with Dan Dennett doesn’t get Jerry to budge? Jerry is hard core. I raise the white flags on my dweeby efforts as well.
      🙂

      Vaal

      1. I’m not going to give up on free will until the question of what to eat for lunch ceases to be a difficult daily conundrum.

        1. I know you’re joking, but the point is that, although it feels like you’re making a choice when you’re trying to decide what to eat, that feeling is an illusion — every bit as much an illusion as the one that a chess computer has when deciding which piece to play next.

          Alternatively, you can re-define “free will” in such a way that chess computers have a form of it, as well. I don’t think that’s the best rhetorical approach, but it’s certainly a consistent position to take.

          b&

    2. I’d respect him less if he were easily persuaded. Jerry may be an amateur when it comes to philosophy, but he is no house of cards.

  7. Wow, would that I could have been the proverbial fly on that wall. I predict that, now that Dan has so assiduously planted his seeds of doubt, they will fester in your brain until the day they sprout. Dennett has puzzled me, but then I get these head-slapping moments. Not so much an epiphany as a coming to grips with the idea that things were just as they seemed to be all along.
    Thanks for the post.

  8. “good ideas about how to run society”? Who decides which ideas on societal management get implemented? My guess is that it won’t be who you’d like it to be.
    But the distinction between “good ideas about how to run society” and morals seems to be one of theory and practice. While the powers that be may theorize about how to run society, they can only do so by influencing what society’s morals are.
    Morals as I understand them are simply the unwritten behavioral codes that we judge our own and others’ conduct by. They are dynamic and highly individual, so I can’t see how they could be any more outmoded than the individuals who hold them.

  9. Massimo, on his blog, describes Harris as a “self-described utilitarian”, but I’m pretty sure Sam rejects the “utilitarian” label and is a type of “consequentialist”. It’s a pity Sam wasn’t there to defend himself.

    That aside, I’m an incompatibilist.

    Massimo says: “I pointed out, particularly to Jerry and Alex Rosenberg, that incompatibilists seem to discard or bracket out the fact that the human brain evolved to be a decision making, reason-weighing organ. If that is true, then there is a causal story that involves the brain, and my decisions are mine in a very strong sense, despite being the result of my lifelong gene-environment interactions (and the way my conscious and unconscious brain components weigh them).”

    I’d be interested to know what, exactly, is meant here by “in a very strong sense”, given the prominent caveats. I think this “strong sense” usually turns out to be an aesthetic, or visceral, or ideological preference of some kind, that adds nothing substantial to the issue at hand.

  10. Have I misunderstood the morality debate? Seriously I’m asking in case it has escaped me. The subject is a total non-starter, but I’d like someone to tell me what I’ve misunderstood…

    Here goes:

    Nick Pritzker:- “good ideas about how to run society” sounds interesting, but where do those ideas come from & who endorses them? Who weights the various indicators [life span, education, freedom from strife, wealth distribution etc. etc.]

    Supposing we encounter an alien civilisation with beliefs & values differing from our own? Supposing we decide to make a go of it as a combined civilization, but they outnumber us 100/1 in population & power & they have no concept of individual rights/autonomy a la a queen/hive society. Perhaps we have to accommodate them because if we didn’t they would simply destroy us. What happens to moral values then? Obviously we do whatever is in our best interests.

    I believe that moral values are entirely circumstantial ~ that there is no way to measure what is best for “all”. Even a utilitarian outlook fails ~ supposing the maximum benefit to the most people requires that we ‘somatise’ [my word] the extreme thinkers & we all dutifully take happy pills? This is a Borgian nightmare & I don’t suppose there are many people who want us to end up in that valley of conflict-free mediocrity.

    1. Michael,

      You’re aware there are various flavors even among utilitarians, and that most would have answers for the questions you pose?

      Further, if the subject of morality is a “non-starter”, why do we atheists ever bother criticizing the morality of the Bible and its God, along with that of many religious practitioners? Should we stop?

      As for “better ways to run society” here’s an approach in a nutshell: We all seek to fulfill our desires, which is how value arises. Working together we can help one another in fulfilling desires (desire for health, safety, food, social structure, you name it). In essence, enlightened societies are already converging on this version of moral/ethical motivation, even when some of its members, the religious especially, wish to pretend otherwise.

      And since we are part of the real world, and our desires are part of the real world, and it takes real-world states of affairs to fulfill our desires, then there must be “better” and “worse” answers as to which principles and courses of actions are more likely to help us fulfill more of our desires. (It would be hard to mount any reasonable argument that randomly shooting our neighbors in the head would suit this goal).

      Remember, reason (in its broad sense) relies on generalizing and universalizing. That’s why there is the fallacy of “special pleading” when you make up a principle that applies only to yourself without good reason.
      When people imagine some alternate selfish, uncaring ethos for a society (or for an individual) the fact is they can’t actually take that very far. It’s virtually impossible to expand on such an ethical system without quickly running into special pleading. And AVOIDING special pleading when reasoning about what you ought to do actually leads you toward endorsing ethics that are more universal, vs selfish.

      (Most Utilitarians will tell you why the “happy pill” type scenarios don’t really cash out against utilitarianism, but that’s another discussion I suppose).

      Vaal

      1. Vaal:”As for “better ways to run society” here’s an approach in a nutshell: We all seek to fulfill our desires, which is how value arises. Working together we can help one another in fulfilling desires (desire for health, safety, food, social structure, you name it).”

        The desire for distinction, victory, ascendency, superiority, and autonomy etc., are not necessarily best served by us all “working together” (in any strong sense); on the contrary. In practice, the “best way to run a society” always resolves itself into power relations between various social actors.

        Vaal: “When people imagine some alternate selfish, uncaring ethos for a society (or for an individual) the fact is they can’t actually take that very far. It’s virtually impossible to expand on such an ethical system without quickly running into special pleading. And AVOIDING special pleading when reasoning about what you ought to do actually leads you toward endorsing ethics that are more universal, vs selfish.”

        Then how do you explain almost all of human history, which, I hope you’ll agree, was *never* built on altruistic, inclusive, humane, universalist principles? To think that “special pleading” can be avoided is to misunderstand what (functionally, socially) morality is: it’s the expression of power relations within a society. Morality changes because power is a fluid, contingent phenomenon, and *not* because those formerly oppressed have suddenly acquired better abstract “arguments” in their favour.

      2. Thank you Vaal

        I appreciate the time you took to reply to my noob questions/comments

        Can you give me a reference for your final comment please? [perhaps a link to an article where it’s discussed] :-

        “(Most Utilitarians will tell you why the “happy pill” type scenarios don’t really cash out against utilitarianism, but that’s another discussion I suppose)”

    2. Michael,

      These are important questions. I’m neither a consequentialist nor a utilitarian, but I can suggest a couple of ways the consequentialist might try to respond.

      1. There is a difference between the question of the correct moral code and the question of the one that people will actually accept. (For one thing, the former is a normative question; the latter, descriptive.) Utilitarians admit that their moral standard might, e.g., be too demanding, or even that people in general are (as a matter of psychological fact) unlikely to accept it. But they will insist that this is a separate question from whether their theory is true, i.e., whether we are always morally obligated to maximize total well-being. They might say the same about sacrificing our interests for the benefit of the alien civilization that outnumbers us.

      2. As for “happy pills,” this is a counterexample only to hedonistic utilitarianism. Many (most?) consequentialists think that there might be things that are valuable other than pleasure itself, such as true beliefs, innovation, autonomy, preference satisfaction, and so on. So yes, the hedonistic utilitarian will be in trouble from these sorts of examples (compare Nozick’s “experience machine”), but many consequentialists will be able to sidestep them.

      3. Finally, is there a way to measure what’s best for all? Often, no, but consequentialists respond that at least there are clear cases, and useful rules of thumb. In general, torture for fun would create more harm than good. In general, justice creates more good than harm. And so on. The consequentialist will argue that these rules of thumb at least generally guide our actions in the right direction.

  11. RE: I’m starting to think that the whole concept of morality is an outmoded impediment that should be discarded in favor of what Nick Pritzker (who sponsored our meeting) called “good ideas about how to run society.”

    Yes, forget morality because it requires “free will” to function.

    Community ethics is sufficient.

      1. That is a great article. I heartily agree with that approach. Free will has outlived its usefulness, and now causes confusion.

        I believe that in the future we will look back on how we conduct trials and punish offenders today as quaint, in the way we now view medieval trial by ordeal, such as throwing someone in a pond to see if they float.

  12. Whenever he raises the issue of free will, our host mentions its implications for justice but never gets into the details, which may yet prove more interesting than the usual philosophical debate.

    Criminology, though younger than theology, has a substantial literature of its own which may be equally unsatisfactory, a graveyard of good intentions, but might anticipate some bright young suggestions. Penitentiaries and reformatories were intended to improve their clients, after all.

    Since I’m the sort of person who thinks the golden rule and the categorical imperative are more or less derivable through game theory, I’m puzzled by the distinction between morality and “good ideas about how to run society.” I’ve programmed machines to follow a certain set of rules to achieve a desired result (and even to modify their own coefficients to improve their performance) but surely none of us, at least, would value a rule above its intended outcome.
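    (To illustrate that game-theory point: a minimal iterated prisoner’s dilemma in Python. The payoff numbers are the standard textbook ones, and tit-for-tat’s “mirror your partner” rule is only a crude stand-in for golden-rule reciprocity; none of these names or values come from the discussion itself. It’s just a sketch.)

```python
# A toy iterated prisoner's dilemma with standard textbook payoffs.
# C = cooperate, D = defect; PAYOFF maps (my move, their move) -> my score.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play both strategies against each other; return their total scores."""
    seen_by_a, seen_by_b = [], []  # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Mutual reciprocity sustains cooperation; pure defection wins once, then stalls.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(always_defect, tit_for_tat))  # (14, 9)
```

    A rule-above-outcome comparison falls out directly: two reciprocators do better jointly (60 total) than any pairing involving the defector, which is the usual game-theoretic gloss on why reciprocity-like norms are stable.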

  13. When you weren’t convinced, does that mean that Dennett’s arguments were flawed, or that they didn’t sufficiently demonstrate his position, or simply that you didn’t change your mind?

    1. I’m curious as to what the distinction between being responsible and being morally responsible is. Surely morality is just personal responsibility applied in a particular domain (how we interact with one another).

      Wish I could have been a fly on the wall for that conversation.

      1. I’m curious as to what the distinction between being responsible and being morally responsible is.

        When I asked that previously I was told that `moral’ responsibility refers to an absolute morality about what one `should’ do regardless of what any human thinks or feels about it.

        I agree that we don’t have responsibility in that sense, but to me that is just a bad interpretation of the word `moral’, not a reason for re-writing the English language to avoid the word.

        1. I remember reading a paper that came out a few years ago called The Error In The Error Theory, where the author agreed with the error theorists that moral talk of that absolute nature was necessarily false, but that the absolute talk of morality wasn’t fairly representative of what people mean by morality.

          Reading through Massimo Pigliucci’s summary of day 2, I found myself nodding in agreement with how the discussion went on morality – how they were talking about it was much more akin to how we actually use the word in a meaningful sense. Why would we need to confine morality to a conception that clearly doesn’t match its usage? Morality is our responsibility when it’s applied to how our behaviour affects other individuals – that is the domain of morality, not whether there are such things as absolute moral statements. I think we’d all agree that there are certain things that are bad for people and certain things that are good, and that our behaviour towards others is governed in part by those concerns. How is that not moral responsibility? It is certainly a definition that would satisfy most of what people mean by morality, without any regard for whether or not there’s such a thing as absolute morality.

      2. I like to think of that difference in this way: when we consider humans to have free will and they commit an act we consider bad, something that harms another person, we get angry with them, and we want to punish them as retribution. In this case I would say we are holding that person morally responsible, and our punishment is based on moral retribution, the desire to mete out justice, a comparable pain or harm.

        If we install a component in a system, say a networking card, we expect it to perform in a certain way. It has that responsibility. If it fails, we don’t ordinarily feel anger or the need to punish it. We do not hold it morally responsible, though we did expect it to meet its responsibilities. We don’t think it fails because it is bad or evil. We think either we have expected something of it that it isn’t capable of, or we think it is broken or flawed in some way. We then take action to intelligently remedy this shortcoming: either repair it or replace it, and we hopefully have some way to refurbish or redeploy the inadequate component, or otherwise prevent it from causing harm.

        If we think of people as not having free will, they can still have responsibility in the sense of being accountable or having obligations to fulfill, but we can’t hold them morally responsible. When they fail to live up to expectations by, say, committing a crime, we assume that person was doing their best, but that they lack some internal restraint, some sense of consequences for their actions perhaps. There is no reason to be angry with them, to feel moral retribution, which is effectively vengeance. This is a natural human emotion, but if we think of its evolutionary purpose, its purpose is to lead us to enforce fairness and order by punishing.

        The purpose of punishing is to provide deterrence, to teach a lesson. So if we view our fellows as not having free will, we can still take corrective action to deter and to teach, because even if we don’t have free will we can learn, if taught correctly with the needed reinforcement to reshape our brain over sufficient time. We can use reason to arrive at the most effective way to teach and deter, the underlying ultimate goal of angry retribution, of human moral intuition, but we can dispense with the anger, with moral indignation, with the need to inflict pain or exact a pound of flesh in vengeance or retribution.

        If we abandon this emotional morality predicated on the assumption of free will, we can treat one another more compassionately, assuming transgressors are not bad or evil, just insufficiently capable or insufficiently trained.

        We probably need to know a lot more about the human brain and human behavior before we can always make the right judgement about what people lack or need, but I think we could already do much better than our present gulags of horror if we had the will. Unfortunately too many Americans will our penal system to be morally retributive, not just rehabilitative. Too many view the poor, the violent, the frustrated young men who have not been made equal stakeholders in civil society, as wasted, irretrievable, flawed by birth. I suspect poverty and lack of opportunity and lack of a dignified place in society are the sources of these failures. The Old Testament condemners don’t believe in rehabilitation (except stupidly by god’s magic).

        We might benefit from studying how Scandinavian justice and penal systems work, because from what I can tell they punish offenders like they are humans who need to be readapted to society, not like animals that need to be tortured.

        1. This is an excellent comment and I agree with it completely (though I’d emphasize rehabilitation a bit more). In fact, readers who want to know why I feel that we should dispense with the notion of “moral responsibility” should simply read Jeff’s answer.

          Moral responsibility seems to me to derive from the discredited notion of “dualistic” free will and the erroneous idea that we could have done otherwise compared to what we really did. I think that notion has had, and still has, inimical effects on society. It is, of course, a bedrock of most religions, and also buttresses discrimination against gays, arguing that they have a “choice” about whom they’re attracted to. The more we learn that being gay is not a choice, but a biological imperative (whether due to genes, environment, or both), the more tolerant society gets towards same-sex attraction and marriage.

        2. We might benefit from studying how Scandinavian justice and penal systems work …

          I agree with you on that and on most of your message, except for some semantics about what “moral responsibility” or “free will” entail.

          Note that Scandinavians have arrived at a more humane system not by abandoning concepts of “moral responsibility” and “free will”, but instead by understanding those things better (by which I mean a compatibilist understanding) and by not having the overwhelming and malign influence of religion.

          I can see what Jerry et al. are aiming for here, but I’m not convinced they’re adopting the best tactics. If Scandinavia is ahead in this, copy them.

  14. the whole concept of morality is an outmoded impediment that should be discarded in favor of … “good ideas about how to run society.”

    Certainly we should abandon ideas of `absolute’ morality, and certainly we should ditch the religious undertones of the word, but then we need a word that encapsulates notions of how we interact with each other and run society, … how about `morals’, that seems to fit quite well?

    We do need the concept of morals, we just need to understand our moral system properly as something that evolution has cobbled together and programmed to enable our cooperative way of life, just as it has cobbled together our immune system and our visual system.

  15. A multi-day trip in the good companionship of Dawkins and Dennett… it reminds me of a line in a poem by Yeats…

    “Think where a man’s glory most begins and ends,
    Say that it way my glory that I had such friends”

    1. Let’s not forget that Dawkins and Dennett were also lucky to have a multi-day trip with Coyne.

      I guess that should read “say it was my glory”.

  16. Regarding morality – it is only an idea; it has no concrete existence outside our heads, therefore it is not one thing. Because, as we now know, there is no ‘I’ inside our heads controlling everything, morality is many things. Everyone has her or his own ‘morality’ or ‘amorality’, based on social constructs & conventions.

    1. When it comes to ethics, I make a distinction which I find very useful: between Right and Good. Right and wrong are determined by things like laws, conventions, policy, social contract (formal and informal), agreements, etc. Good and bad are determined teleologically and supersede right and wrong except in one respect, the rule of law/right principle: the rule of law/right is a fundamental good, and any changes to laws/conventions that ought to happen in response to what is determined to be good or bad must be made in terms of other existing laws, conventions, and social contracts.

  17. “I think we are responsible beings, but that the concept of moral responsibility adds nothing …”
    I don’t get that. What is the difference? I’d ordinarily consider the term “moral responsibility” a bit redundant. Responsibility is a moral concept to start with. To assert a responsibility is to assert an ought; to assert an ought is to take a moral stand.

  18. I’ve some questions for Jerry or anyone who shares his views on free will. And I’m really sorry that this is going to be a bit long.

    I agree that there are no souls and that all of our actions are products of the laws of physics, but I don’t see how one can claim that free will does not exist.

    My problem is the following: there are many actions and choices that we feel are made by our ‘free will’ agent (picking what food to eat, which restaurant to go to, etc.) while there are many bodily functions that are outside that. We cannot control many regulatory functions in our bodies. One fascinating example that I read about recently is our brains’ self-imposed limits on physical exercise. Our feeling of tiredness is in fact imposed by the brain well before our bodies reach their maximum capacity. So one way to improve your jogging record is to teach your brain that your limit is actually higher than what it thinks. I guess this means that our ‘conscious part’ has control over what physical activity we do, but a ‘subconscious part’ sets a hard limit by telling us that we are tired and need to stop.

    Now, it’s easy to think of an evolutionary advantage for the above system. But that also means that the ‘conscious part’ and ‘subconscious part’ have been evolving together through our evolutionary history. In other words, to me it’s obvious that we have a separation between conscious and subconscious parts. So it seems to me that we have free will in a real way, and that through evolution some tasks are assigned to the ‘free will’ part, some are completely outside its control, and others are somewhere in between.

    1. As I understand it, with no free will there are still two inputs to the brain’s functioning: genetics (which is the part your comment is focused on regarding an explanation) and environment (which is the part you are wondering about).

      The genetics are static for an individual’s brain functions in most cases, but the environment (learning, reading, experiencing, associating) is changeable. The interaction of the genetic part and the environmental part is what would produce what we think of as mind.

      One of the problems of introducing free will is the need for an explanation as to how it could function. There isn’t an apparatus in the brain that can produce free will; the brain must function within the constraints of natural laws just like everything else does.

  19. Could you two arrange a friendly discussion (not a debate) about free will for broadcast on youtube? In addition to being a great discussion, this would provide some insight into how scientists and philosophers might have different ways of tackling the same issue.

Comments are closed.