Does determinism negate moral responsibility?: A survey

September 8, 2010 • 6:13 am

In an Opinionator piece in today’s New York Times, Joshua Knobe, a philosopher/cognitive scientist at Yale, describes the new discipline of “experimental philosophy,” which uses modern scientific methods to address traditional philosophical questions. (Sam Harris’s upcoming book, The Moral Landscape, is a specimen.)

I won’t recount Knobe’s work here, since he does a good job in his piece. But what he shows is that, if you tell people the world is a deterministic one, then their view about whether such a world allows moral responsibility depends very strongly on whether you pose that question in the abstract or give them a concrete situation in which blame can be affixed. (Guess which way the answer went!) Knobe is, of course, not addressing the question of whether we really have moral responsibility, but rather what makes us think we do:

How can experiments like these possibly help us to answer the more traditional questions of philosophy?

The simple study I have been discussing here can offer at least a rough sense of how such an inquiry works.  The idea is not that we subject philosophical questions to some kind of Gallup poll. (“Well, the vote came out 65 percent to 35 percent, so I guess the answer is … human beings do have free will!”) Rather, the aim is to get a better understanding of the psychological mechanisms at the root of our sense of conflict and then to begin thinking about which of these mechanisms are worthy of our trust and which might simply be leading us astray.

I like these studies since, if you accept their methodology, they give pretty concrete answers—in contrast to a lot of philosophy! Another study of this type is that of Marc Hauser and his colleagues (I posted about this last December), who asked people to judge the morality of acting in different ways in unusual and contrived situations. They found that moral judgments seemed to be pretty universal, independent of religious or social background. This suggested that perhaps some of our “morality” is innate and evolved. Now Hauser, of course, has been found guilty of scientific misconduct at Harvard, and so this result, like all his other work, remains under a cloud of doubt. But the study is certainly worth repeating, since its results are so interesting.

56 thoughts on “Does determinism negate moral responsibility?: A survey”

  1. “The idea is not that we subject philosophical questions to some kind of Gallup poll”.

    ESPECIALLY not with a PZ and his pharyngulate around!

    (although .. wait .. let me rethink this)

  2. Sorry to be thick, but… Which way did it go?

    My confusion stems from the fact that I find the notion of determinism eroding moral responsibility to be quite silly. No offense to those who disagree… I am beginning to wonder if I am the one being silly, heh.. I just don’t see how the fact that actions are theoretically deterministic has (or even ought to have) any real bearing on how we go about our everyday lives.

    1. Same here. I don’t see how it makes much difference whether we are “just” highly complex machines, sorting through a vast field of inputs and life experiences as best such wetware can, or else really have magic mind fairies in our noggins that “decide” things without such constraints.

      1. Ray, one difference is that if people think of themselves as uncaused causers, unique exceptions to determinism, then they get to take ultimate credit and assign ultimate blame. This has all sorts of ramifications for how we treat each other, for instance in criminal justice and social policy, http://www.naturalism.org/criminal.htm , http://www.naturalism.org/social_justice.htm

        Another difference is that the determinist view draws attention to the actual causes of why people become who they are and act as they do, whereas believing we have contra-causal free will deflects attention from those causes. We’re much better off with the first alternative if we want better control over ourselves and our circumstances.

        As for moral responsibility, it remains intact under determinism, but our responsibility practices might become more effective and compassionate if we gave up the myth of libertarian freedom, http://www.naturalism.org/glannon.htm , http://www.naturalism.org/morality.htm

        1. What exactly is the definition of moral responsibility that we’re using here? What are the consequences of having it or not having it?

          Does the fact that someone could not have acted differently in a given situation really not affect your idea of how responsible they are?

          1. A large part of the “input” for any “decision” is the state of mind of the “decider” (ha ha!), correct? Doesn’t that strongly suggest that a person’s decision-making “algorithm” can be altered by deliberate learning or training? If so, then two points.

            1) Even if a person could not have acted differently in a given instance, can’t they be taught that their “decision” in that past instance is unacceptable? So that in similar future instances their suitably altered decision-making “algorithm” coughs up a more appropriate “decision”?

            2) Why shouldn’t people, at some point in their lives, bear responsibility for having learned what types of behavior society considers improper?

            The consequences of having moral responsibility are that our societies have mechanisms that teach or train people to make “decisions” that result in behavior that society finds acceptable.

            The consequences of not having moral responsibility are pretty much unknown. Of course it is pretty easy to imagine that without some other mechanism for reducing unacceptable behavior to replace it, things would likely get very ugly.

            What other possible mechanisms can be devised is, I guess, what this argument is all about.

            1. 1) Yes. This is a pragmatics-based approach to morality. There are plenty of difficulties inherent in it, but that’s not the issue right now. The issue is that humans do not feel good about punishing people for things they obviously could not have not done. And the more our understanding of the mind improves, the more obvious it will become that people are robots following a program.

              2) What I’m asking you is “what does it mean to bear responsibility?” Does it mean someone deserves our derision for what they did wrong? Should we punish them? Should they feel bad about themselves? Should they do some sort of penance? The intuitive understanding of moral responsibility is that there are “wrongs” in this world, and when you perpetrate one, people are justified in being upset with you, and you, if you are of sound conscience, should be upset with yourself. You should expect the possibility of punishment or censure from your peers, with the knowledge that it is deserved. You should want to do better next time, and others will hold you to that. But, intuitively, none of that applies when we see that a person just could not help but do what they did. So if you want to talk about something akin to the moral responsibility I just described existing under Determinism, you have to talk about “justice” and “deserving punishment” and “feeling guilty” and how those terms can still apply. And if they don’t still apply… should we call this concept “moral responsibility” at all?

          2. The concept of responsibility should be judged on its effectiveness at preventing harm and promoting happiness. If (and only if) it can be shown that teaching people that they are responsible for whether or not they become violent when angry prevents harm – then that instance of teaching responsibility is a good thing. If experiments show that teaching people that they are responsible for maintaining a healthy weight does not result in any improvement in their weight, then the notion of responsibility should be abandoned in that instance.
            There are actually several areas (for example, depression, drug abuse, and weight control) for which there is significant evidence that the notion of personal responsibility is either unhelpful or measurably harmful. And yet in America, a puritanical faith in the holiness of personal responsibility drives a vocal cadre of demagogues to preach that any alternative to personal responsibility is necessarily harmful. Particularly in the case of depression, this notion is now recognized to have caused grievous harm.

            1. Yes, I agree. And depression is a good example. When the brain’s functioning is physically altered by abnormal levels of neurochemicals, or by any other means, it is clearly not possible to just decide not to be affected by it.

              Behavior is very complex, though. For example, while the evidence clearly shows that a large percentage of people cannot permanently change their habits in order to reach and maintain a healthy body weight, some can and do. And some strategies for doing so have better results than others.

            2. So basically the whole idea is to manipulate the input people get in order to make them give the output we desire, right?

              So who decides what the desired output should be? There’s no objective morality… so what metric do we use? Sam Harris is having difficulty with his concept of “maximizing well-being.”

              I also wonder how possible it is to have people change their behavior when you chastise them for something that they know their past history “conspired against them” to make them do. If you tell someone that they shouldn’t have lost their temper and broken the lamp, but at the same time they are completely aware that they were born with a certain temperament, and every choice they made in their lives which would affect that temperament was also just based on the chemistry of their brain, how much will it sink in that they really have to change? I personally find it very difficult to hold myself to a consequence (positive or negative!) that I don’t feel completely responsible for. For example, it always takes the wind out of my sails a little bit when – after beating someone in some physical competition – I reflect on how I was born with a good dose of athleticism and coordination, and, given how much I’ve practiced the relevant activity, beating an opponent with less experience or less talent is just a natural consequence.

        2. ‘Moral responsibility remains intact’ but has the same meaning as it does when applied to a cockroach-zombifying wasp going about its business. After all, in the entire absence of free will that is determinism, you have no more role in what you do than does the wasp, and it makes no more sense to hold you responsible than it does the wasp.

          1. The main difference is that human beings are usually capable of modifying their actions in accordance with the situation and likely consequences, whereas insects that harm us just get squashed.

          2. So, you had no choices in writing that comment?

            If it makes no sense to hold anyone responsible for anything, what suggestions can you offer for maintaining a functional society without holding anyone responsible for anything?

            Or, since we have no choices why even bother worrying about it?

            I mean, we do hold people responsible, but we have no choice in doing so.

            And if we decide … oops, wrong word. If our behavior changes to not holding people responsible, then we had no choice there either. Right?

            Don’t worry about it. Just go with it.

            1. ::sigh::

              How many times do people have to explain the same concepts in these free will threads?

              “So, you had no choices in writing that comment?”

              Most organisms make choices. Determinism says that given the same brain state, you will make the same choice every time. The choice is ours what we want to do about the moral responsibility issue. Because we aren’t gods, we cannot know what choice we will make until we’ve made it. Therefore this question is necessary.

            2. “If it makes no sense to hold anyone responsible for anything, what suggestions can you offer for maintaining a functional society without holding anyone responsible for anything?”

              Is this an argument that we shouldn’t accept a truth because we don’t like the consequences?

            3. How many times do people have to explain the same concepts in these free will threads?

              Sorry Tim, I don’t know how to do smilies. And, please, don’t explain again. After the numerous recent WEIT posts on this topic I believe I understand you quite well. That whole comment, let alone that statement, was meant by me to be a bit tongue in cheek.

              I was attempting to express, with a little humor, the apparent paradox of trying to decide if we are capable of making decisions. Sort of a time travel paradox kind of vibe. I think it is important, but I also find that aspect of this issue funny.

              Is this an argument that we shouldn’t accept a truth because we don’t like the consequences?

              Well, at least you framed that as a question, however you meant it. I am trying to decide whether or not to tell you to “piss off”. And, what “truth” are you talking about? Do you know the truth about this issue? I thought all these discussions were an attempt at figuring out, at best, how to move a little closer to the truth.

              My comment was in response to the phrase, “…and it makes no more sense to hold you responsible than it does the wasp.”

              My point, attempt at humor / snark aside, is that it does make sense. It may not be the most sensible method, but that does not mean it is completely without efficacy. In other words, I was pointing out that the comment was a bit hyperbolic.

              And, since the commenter seemed so sure of his position I wanted to hear his ideas on how he thinks it should be done.

              So, are you black and white on this issue? Do you really believe that holding people responsible for their behavior is not effective at all in limiting “bad behavior”?

            4. “I was attempting to express, with a little humor, the apparent paradox of trying to decide if we are capable of making decisions.”

              And I’m attempting to express that there is no paradox. No one is having trouble deciding whether humans make decisions. We do, as I’ve already explained. What we are trying to do is decide what to do with our intuitions about moral responsibility and justice, given that our intuitions are designed for a world in which humans don’t realize that free will doesn’t exist. So I don’t see the point of your comment, tongue in cheek or not.

              “what “truth” are you talking about? Do you know the truth about this issue?”

              I’m saying that IF determinism meant the end of moral responsibility, our emotions regarding that truth would not change the fact that it was true. It seemed like you were making an appeal to consequences – a logical fallacy.

              “My point, attempt at humor / snark aside, is that it does make sense.”

              Then provide an argument for it. Define what you mean by moral responsibility, and explain how we have it.

              “And, since the commenter seemed so sure of his position I wanted to hear his ideas on how he thinks it should be done.”

              You’re conflating 2 things: 1) the facts about whether we have moral responsibility or not, and 2) if we don’t, what we should do about it. Having the answer to #1 does not require having the answer to #2.

              “So, are you black and white on this issue? Do you really believe that holding people responsible for their behavior is not effective at all in limiting “bad behavior”?”

              You’re conflating again. All I’ve said is that you’re wrong that people don’t make decisions (or that there is some inherent contradiction in saying so), and that (what I have inferred to be) your appeal to consequences is a logical fallacy. Above, I’ve asked that we define our terms (always a good idea, no?), and I expressed doubt that people would really feel the same about responsibility for their actions if they understood that they don’t have free will. I never claimed to have answers regarding what to do if we abolish the concept of moral responsibility, yet it seems like that’s what you want from me. To answer your question, I cannot say for sure that it is impossible to “hold people responsible” for their actions without the idea of free will, but I do foresee a lot of difficulty in doing so.

            5. Tim, I’ll attempt to keep this short.

              You are not reading what I have written. Perhaps my writing is at fault, though I don’t usually have this much trouble getting my view across. You have apparently decided that you have my views pegged. I can easily tell by your vigorous responses that you don’t. You are target fixated, but you have the wrong target.

              On the apparent paradox. No shit. Hence the word “apparent”. That is part of what makes it funny. Similar to the chicken-and-egg paradox. Plenty of people have thought seriously about it, but when looked at from the proper perspective it is clearly seen to be “not even wrong”.

              On the “truth”. Sounds like I misinterpreted what you meant. I thought you were expressing that you did know the truth of the matter, which would be an extraordinary claim.

              As far as “making sense”, I have already provided an argument for what I claim to be the case. All I claimed is that holding people responsible for their actions in the manner that most societies currently do, as manifested in systems of justice, child rearing norms and so on, is not completely ineffective in limiting “bad” behavior. I made this claim because the commenter I was originally responding to seemed, foolishly to me, to be saying that holding people responsible for their actions is completely ineffective.

              Notice, and I thought this was very clear, that I did not say that I thought that this was the best solution that we could devise. Let me make it clearer for you. Though I don’t have any concrete ideas on how to do it, I believe that changing our mechanisms for limiting “bad” behavior and maximizing “well being”, informed by a better understanding of human cognitive functions, and what “moral responsibility” should mean based on that understanding, is one of the most important things that humankind needs to do. Right at the top of the list.

              As for “conflating”. No, in the way that you describe, I was not and do not. I was responding to what I interpreted as superciliousness with an immature schoolyard attitude, as in “so you think you’re so smart, prove it!” Intentionally.

              And for conflating again. No, again I do not conflate as you suppose. You simply misunderstand me. Not saying that you are at fault in that, but I do wonder. I do not believe that people do not make decisions. I thought I had already made that clear. Guess not. I already re-explained my paradox comment above. If you don’t see any humor in it, well, sorry. I made no appeal to consequences and thought that I had explained that comment quite clearly. Given the original context I was a bit surprised you misinterpreted it as you did. And, given my further explanation of that comment, the fact that you are still on about it makes me think that you are trying as hard as you can to hear what you want instead of what I am saying, so that you have a target to lecture at.

              As far as what I wanted from you. I got it. I wanted to know if you were the type of person that invests more confidence in a position than the current data warrants because of emotional bias, or whatever reasons. And, if you were the kind of person who views things in absolute, discrete terms. Some of your comments seemed to suggest that you were. Your last paragraph in your last comment cleared that up for me I think.

              For some overall context. My original post was motivated not really by the actual topic, but by some comments that I interpreted as being more confident than the data warrants, and too “all or nothing”. I was attempting to speak to that, which there has been a lot of in this debate, not to the free will debate itself.

            6. Ah, I see that we have been talking past each other somewhat. It always ends up happening, even when you try to prevent it. :-/

              On the “apparent” paradox: You seemed quite in earnest in your original reply to Mike, and I had no reason to think you weren’t using “apparent” to mean “evident” instead of “ostensible,” especially when others here have seriously meant what you (apparently) didn’t mean.

              On “responsibility”: I don’t think we’re all using the same definitions here, which is why my first comment here was to suggest that we define our terms. Alas, people often jump right into arguments without even taking this step. When you talk about holding someone morally responsible “making sense,” I don’t believe you’re saying the same thing Mike and I are. Correct me if I’m wrong, but you’re just saying that punishing/praising people for their actions seems to be useful and affects behavior. You’re not saying the concept of moral responsibility makes sense; you’re saying that it makes sense to act as if people have it, regardless of whether they do. What Mike and I are saying is that the concept of moral responsibility, which invokes ideas of justice and just deserts, guilt and pride, punishment and reward, does not make sense in light of Determinism. Or if it does, someone needs to explain it to me. What does it mean to “deserve” a reward or punishment if you were determined to perform the relevant act? Should you feel good about doing something good even though you were determined to do it? What about bad things? Either these concepts no longer make sense without free will, or they can make sense, but someone needs to describe exactly how they still apply and what the implications are.

              To say that treating people the same as we always have works is not to say that we are being philosophically coherent; what Mike and I are saying is that it is not philosophically coherent to say humans are morally responsible and that wasps are not. Unless you define “morally responsible” in some way different from what most humans intuitively understand it to mean. That is, I believe, why we’re talking past each other here.

              One last thing: you seem to be saying that because acting as if people have moral responsibility works now, it will always work. I thought I made this clear, but I’m saying that it probably won’t work if/when people understand that free will doesn’t exist and we’re all just robots. That knowledge changes the way people respond (or might potentially respond) to reward/punishment. Unless you have some reason why it wouldn’t change it.

            7. “I don’t think we’re all using the same definitions here, which is why my first comment here was to suggest that we define our terms.”

              I strongly agree. Even a word as seemingly innocuous as “believe” can cause serious problems. Believe, as in a conclusion about the level of confidence to award to a certain claim? Believe, as in you behave as if a claim were true without actively reasoning your way to that position in advance? And several others.

              “You’re not saying the concept of moral responsibility makes sense; you’re saying that it makes sense to act as if people have it, regardless of whether they do.”

              Very close I think. I mean “makes sense” purely in the empirical results sense. And I don’t claim that doing so is particularly effective, only that it is clearly not without some efficacy. I was not attempting to make any claims about the philosophical accuracy of that type of moral responsibility. And I was not trying to justify doing so. I was trying to point out that the absolutist claim “makes no sense” was incorrect in real world terms. It makes some sense because it is somewhat effective.

              Apologies. I would like to respond to the rest of your comment, but I just got a phone call that my boy is sick and needs to be picked up from school.

            8. Okay, picking up where I left off above.

              “…the concept of moral responsibility, which invokes ideas of justice and just deserts, guilt and pride, punishment and reward, does not make sense in light of Determinism.”

              I did understand that that is what you meant. I tend to agree with this but, my level of confidence that my assessment of this is correct, that I clearly understand the arguments of some of the better thinkers on all sides of this issue, and that their (the better thinkers) arguments are significantly correct, is currently low.

              “…but someone needs to describe exactly how they still apply and what the implications are.”

              That would be nice, wouldn’t it? I think we are finally developing the tools that will enable us to make real progress on these issues in the not too distant future.

              “One last thing: you seem to be saying that because acting as if people have moral responsibility works now, it will always work.”

              Well, no, I was “speaking” in the context of the past and present from which there is empirical data to show that doing so has some efficacy. To be honest I never considered addressing whether doing so would work in a future where the majority believes that “free will” does not exist. Not because I was trying to be rude or anything. I just didn’t feel like I had the time to give to that conversation what would be required since I would probably have to write a dissertation on it just to get my thoughts about it in order.

              I agree that “that knowledge” would change people’s behavior. How and to what extent is an extremely complex problem. Hari Seldon might be able to figure it out, but I wouldn’t put much confidence in my own predictions. I think that, very generally, there would be different “forcings” that would apply pressure in conflicting directions. Forcings derived from intellectual understanding would likely result in changes to justice systems and the like. I think it is also possible that emotional forcings like self-esteem would apply pressure that would cause some percentage of people to behave as if they agreed with our current generally accepted concept of moral responsibility in certain situations, regardless of their intellectual understanding.

    2. Nobody’s answered my original question: Are people more likely to say that determinism still allows moral responsibility if they are given a specific example, or if they aren’t?

      Because I see this problem so differently from most people, I have trouble guessing. I could see it going either way. I could see people being like, “Well if he killed seventeen cute babies, then I don’t care if it was predetermined, he’s still responsible!” I could also see people agreeing to moral responsibility in the abstract, but then being like, “Well if it was a genetic predisposition, maybe it wasn’t his fault” or something.

      So that’s all I’m asking… I don’t really want to debate it any more, because I’m apparently broken in my inability to understand 🙂

      1. Did you read the article?

        “The results showed a striking difference between conditions. Of the participants who received the abstract question, the vast majority (86 percent) said that it was not possible for anyone to be morally responsible in the deterministic universe. But then, in the more concrete case, we found exactly the opposite results. There, most participants (72 percent) said that Bill actually was responsible for what he had done.”

        1. Thanks. No I didn’t, and I would have looked there but I thought Jerry was referring to previous work by Knobe rather than something specifically in the article. So color me lazy and easily confused 🙂

          Sorry for the dumb question, and thanks again for giving me the answer anyway!

  3. To me all morality means is avoiding as much as possible treading on other people’s toes!

    Incidentally, Amanda Gefter of New Scientist (recall she had that bit removed from their online content last year?) has written a bit about the Royal Society’s book prize shortlist (25th August) & says WEIT should have made it to the last six -“Jerry Coyne’s Why Evolution is True, a fabulous book that made the Society’s longlist”.

    1. Indeed. The false notion that determinism is fatalism is a favorite tool of some Christians who, in their efforts to defend “free agency”, seek to defame determinism.

    2. Yes; I’m afraid most people don’t understand what determinism is. Many believe it means that “everything is predictable” – a claim which we understand to be absolute nonsense based on what we do know of the universe.

      1. We cannot even predict the local weather more than a short time in advance. The human nervous system is far more complex than a local weather system, so I can’t see how individual human behaviour would be very predictable, except perhaps when the stimulus is overwhelmingly powerful — e.g., 99.9% of people who touch a hot iron will jerk their hand away immediately.

        So, of course, determined ≠ predictable.
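        The weather analogy can be made concrete with a toy simulation (the function name and parameters below are mine, purely for illustration): the logistic map is a completely deterministic rule, yet in its chaotic regime a starting difference of one part in ten billion soon produces wildly different trajectories, so prediction fails in practice even though nothing random is happening.

```python
# Deterministic does not mean predictable: the logistic map
# x_{n+1} = r * x_n * (1 - x_n) is a fully deterministic rule,
# yet with r = 4 it is chaotic, so tiny differences in the
# starting state grow until forecasting becomes hopeless.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # perturbed by one part in ten billion

# Determinism: the same starting state always yields the same trajectory.
assert logistic_trajectory(0.4) == a

# Practical unpredictability: the perturbed trajectory soon diverges widely.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"difference at step 10: {divergence[10]:.1e}")
print(f"difference at step 50: {divergence[50]:.1e}")
```

        The same logic applies to the weather and, presumably, to brains: re-running the identical initial state reproduces the behaviour exactly, but no measurement is ever precise enough to forecast it far ahead.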

  4. I like these studies since, if you accept their methodology, they give pretty concrete answers—in contrast to a lot of philosophy!

    Sure, but these studies are really experimental psychology, and not philosophy. Finding out empirically what people think isn’t really the job of philosophy, and there is a long tradition in psychology of experimentally examining moral reasoning.

  5. Why is it that questions of moral responsibility and free will seem to involve abhorrent acts?

    Are people not morally responsible for benevolent acts? Is it determinism for when we’re kind to people and free will when we’re not? Or vice versa?

    Or, more likely, do we have a rational thinking (human) brain sitting on top of a lizard brain? And that sometimes the lizard brain gets the better of us (it’s faster acting, for sure)?

    I think every one of us has been in a situation where our anger/fear/negative emotional state came through in an instant, and then we almost as instantly regretted it. But perhaps by then we are “too far along” in our commitment to that anger/fear response. So, our rational brain watches in horror as the lizard brain continues to act in a way that we wish it wouldn’t.

    It’s not determinism. Nor is it free will. It’s neuroanatomy.

  6. Knobe is a regular Bloggingheads contributor; see here, for example. At first, his “experimental philosophy” seems simplistic, but it’s often very interesting.

  7. The article says:

    ‘It is as though their capacity for abstract reasoning tells them, “This person was completely determined and therefore cannot be held responsible,” while their capacity for immediate emotional reaction keeps screaming, “But he did such a horrible thing! Surely, he is responsible for it.”’

    That may be true for most people, but it’s not true for me. I tend to view morality as a social contract, so in the abstract the extent to which we have “free will” isn’t important; what is important is the social impact of the behavior, and that necessarily has to be judged in the context in which it occurs. So if the context in which bad behavior occurs is one where there was unnecessary provocation to act badly, then that is less of a threat to social stability than bad behavior in a context where there was no such provocation. This is true independently of how much “free will” we have.

  8. The whole determinism vs free will debate seems based on a confusion of language to me.

    Those denying free will are defining it in such a way as to render it meaningless: ‘free will’ to them seems to imply that, all things being equal, a ‘free’ individual could make a completely different decision in exactly the same circumstances.

    I don’t see any ‘freedom’ here – I just see random behaviour. What’s ‘free’ about it? If every individual made ‘free’ decisions in this way we would all behave, statistically and over a period of time, the same as each other. That wouldn’t be exercising free will, it would merely be acting entirely at random.

    The ‘free will’ deniers are also defining ‘free will’ as some kind of supernatural agency, outside of physics and chemistry, which looks like a straw man to me: where *are* those of us who believe in free will making claims to the supernatural? I mean here, on this site, not ‘out there’ in the wider world? It’s pretty much a given that, trolls aside, this site is inhabited almost exclusively by naturalists.

    A supernatural argument wouldn’t necessarily support ‘free will’ in any case: is ‘my soul made me do it’ any less deterministic than ‘my brain made me do it’? What about ‘the Devil made me do it’ or ‘Jesus guided my hand’? Most religions have myths around possession, zombies, etc., so the dichotomy of determinism/naturalism vs free will/supernaturalism doesn’t hold up.

    ‘Free will’ isn’t something exterior to the brain, acting through that brain: ‘free will’ *is* the brain acting entirely naturally. The fact that ‘free will’ has a natural explanation does not mean that it does not exist.

    Nor is ‘free will’ the same as ‘consciousness’: just because you have made a decision before you become conscious of it does not mean that it is not *you* who have made that decision, any more than you can claim somebody has ceased to exist simply because they are asleep.

    It all rests on the nature of identity. If you think the ‘self’ is simply an illusion then naturally such a ‘self’ cannot have ‘free will’. But I’ve yet to see a convincing answer to the question ‘If the self is an illusion, who or what is having that illusion?’ The brain? How would that work? What’s the difference between the ‘illusion’ of a self and an actual self? The illusion hypothesis suggests two entities at work: a material brain and an illusion which inhabits it – and what’s more it still doesn’t explain how that illusion is *experienced*.

    On the other hand if your brain *is* the self and ‘free will’ is the way that self manifests itself this is to suggest a single entity. No supernatural agency, no Zen mysticism about illusory selves experiencing the illusion of selfhood, just a recognition that free will and the self are real, natural processes of the brain as real as the facility for language.

    1. I agree, but maybe amend your definition of free will not merely to the brain acting “naturally”, but of the brain acting with purpose. But it’s not necessary that the brain act consciously to act with purpose. I think that’s where the sticking point is in such discussions.

      It seems that determinism denies that there can be a purpose-driven decision process, whereas free will-ism (?) understands that — consciously or subconsciously — the individual human often makes decisions that would be decided in a different way under almost the same circumstances.

      Determinism, to me, says that we would make the exact same decisions about the same or similar situations every time. I hope that’s not the case. I can think of many, many decisions that I have made that I could easily have done differently — even absent hindsight and increased knowledge.

      Even if I am ruled by my molecules, I don’t wish it to be so — therefore, I’m determined to reject the deterministic philosophical position.

      Is my position rational? I hope so. Is it “true”, either trivially or profoundly? I hope so, too.

      1. “Even if I am ruled by my molecules, I don’t wish it to be so — therefore, I’m determined to reject the deterministic philosophical position.”

        Wait wait wait… you’re admitting that you would reject an evidence-based conclusion for the sole reason that you don’t like it?!

        1. Heh. Yes, I suppose I am. Now you’re allowed to call me a hypocrite. I guess none of us is perfect.

          I believe I’m correct in rejecting a deterministic view of the world. I think my interpretation of the scientific research and the philosophical discussions are appropriate and in keeping with a conclusion that we are not merely our molecules, but something just a bit more capable than that.

          But if there were a way to “prove” determinism vs free-will-ism, I’d probably cling to my current views for a very, very long time. Because the current arguments/evidence in favor of determinism are — to me at least and at present — completely unconvincing. I’d count this as one of those extraordinary claims that demands extraordinary evidence.

          Is that deterministic? Or me demonstrating free will against the evidence?

          It may turn out to be one of those questions that can’t be decided with science. Funny. I never thought of it that way before.

          How would you falsify free will?

          1. “How would you falsify free will?”

            How would you falsify the idea that human cognitive events happen for no reason whatsoever? Maybe the fact that it’s logically incoherent.

            Or you could consider that we can affect the physical human brain with magnets, electricity, surgery, and drugs, and every time we do, it has some effect on cognition, emotion, perception, or behavior. In many cases we can identify the physical process by how this works. Neurons are electrochemical – use external electricity to mess with their firing, and they stop functioning properly. Use drugs that mimic the effect of, say, natural dopamine in reward centers of the brain, and you find that people become quite attached to that drug.

            And then there’s the fact that we can hook someone up to an fMRI/EEG/MEG/PET etc., and find that their thoughts change what their hemodynamic response/brainwaves look like. When people study a list of words and are asked to recognize those words later, activity in areas of the brain involved with long-term memory is lower if the person cannot recall the word, and greater if they can. We can see people’s brains working (or not working) to accomplish the things we do every day.

            Or, to take an example more germane to our discussion, multiple studies (iirc) have shown that when humans make cold, logical cost/benefit analyses of what it is morally acceptable to do in a given situation, they are in fact using the logical parts of their brains. (Example: when you agree to flip a switch that would redirect a runaway train onto a track where 1 person is standing, as opposed to the original track where 5 people were standing.) Under slightly different circumstances, where the situation involves, say, pushing one person onto the tracks in order to stop the train and save the 5 people standing further down the track, brain imaging shows us that people judge this using emotional parts of the brain. (Which is why people will say that this mathematical equivalent of the previous situation is unethical: our moral programs tell us [via our emotions] not to use other living things as tools to achieve our ends, which is why the second example feels wrong.)

            In sum, we can see our brain activity reflect the things our minds do, and we can manipulate the brain to affect the things our minds can do. There’s never been a case where brain and mind seemed independent of each other. Sure, these facts don’t falsify free will any more than the fact that the big bang can happen on its own falsifies God. But what facts are there to support free will, and how do you even make sense of it as a concept?

            1. So, you are arguing that in those cases in which a person is NOT under the influence of drugs or brain penetrating electrodes or transcranial magnetic stimulation, they can be said to have free will. If they are shown to be under such influences, then that free will for hypothesis some specific behaviors is falsified?

            2. I don’t entirely understand your last sentence, but from what I do understand – no, that’s not what I’m saying.

              I’m saying that we know from our experience in manipulating the brain and imaging the brain that actions and thoughts are always correlated with activity in the brain. There has never been any evidence that anything going on in our minds is in any way “free” from the physical interactions of neurons, chemicals, et cetera.

              Does this falsify free will? Well, free will would predict that we can find some way to be free of our brains, because otherwise every choice we make is the necessary product of the brain state that came before it (combined with any intervening external stimuli). And science has never found this to be the case. Furthermore, how could it? Not even invoking the supernatural could explain how our choices could be based on nothing and yet still be ours.

            3. Last sentence was garbled. Should read:

              “If [someone] is shown to be under those influences[drugs, TMS], then the free will hypothesis for some specific behaviors is falsified.”

              Otherwise, if you’ve been following the free will discussion on this blog, you know that some people are arguing that physical determinism doesn’t disprove free will, and I’d probably even go up a few levels of abstraction and say that psychological determinism wouldn’t disprove free will. That is, if we could map an individual’s psyche so accurately that we could predict a narrow range of behaviors in response to some stimuli, we could still consider them to have free will.

              To me, you seem to be arguing that unless our whims are independent of who we are, we don’t have free will. Which, fine, I guess you can define it that way. I just don’t think that sort of free will can carry the weight people usually want the concept of “free will” to carry.

            4. I’ve been following a good deal of the discussion here, though not everything. From what I’ve seen, the people who say Determinism is compatible with free will are just defining away the problem, except their new definition of free will doesn’t, as you say, carry the weight that people want it to.

              So what definition of free will are you using? And what kind of weight do you want it to have?

            5. I guess the concept of free will lets people own their actions, so that a criminal can be held responsible for their crimes, and an artist or a scientist can be proud of their inspirations, for example. I, and I think most compatibilists, think it’s ridiculous to deny free will at that level.

              If one wants to deny free will, one should assert something like the following: suppose that a person’s economic status at the age of 30-40 is almost entirely determined by their economic status when they’re 0-3 yrs old. Say grades in school and highest degree earned might be highly variable, but they don’t contribute to adult economic status. Then we could say economic status isn’t a function of free will. We might say educational achievement in such a case is.

              Basically, to deny free will in a practical sense, to me, means to deny that the effort we put into making personal choices has little or no effect on personal outcomes. I guess you could think of it in terms of personal narrative: free will means that different people who make different choices would lead lives that result in different stories, while without free will different people who make the same choices may nevertheless find themselves living out the same story, either in specific details or in broad narrative arc, depending on how free we are. You could think of it as: do we have free will to change the course of our lives, or is our fate written in the stars (or in our socio-economic background)? The reality, of course, is that we have some but not total freedom. There are strong outside influences, but there is wiggle room.

              Now, in the sense I’ve described above, I see how denying free will might require us to radically rethink our notions of moral responsibility and so on. I don’t see how lack of free will at the level of physics, or biology, or neurology, or individual psychology has the same implications.

              The reason is that, for instance, physics is too low a level of abstraction to even talk about people, let alone free will. As I said above, if you require our whims to be independent of physics in order for us to have free will, then you need our whims to be independent of who we are in order for us to have free will. And if our whims really were independent of who we are, then it seems clear that it would be wrong to hold people responsible for their actions. That is, by your requirements, free will would absolve people of moral responsibility.

              I keep seeing people here talking about how accepting that free will is a fantasy would result in a much more humane penal system that doesn’t hold criminals responsible for their crimes. But you can just as well argue that if we don’t have free will, societal order should be enforced through, say, some sort of eugenics program and widespread drug use for behavioral normalization, ala A Brave New World.

            6. I still don’t understand your definition of free will, since you mostly defined it using other ill-defined concepts. Free will allows people to “own” their actions? Be “held responsible” for them? What the terms in quotes mean is precisely the question I’ve been asking this entire thread.

              What does it mean to have responsibility for something? That you physically could have chosen otherwise given the same brain state? If that’s the case, then none of us are responsible for anything. Or does it mean that your will was involved in making the choice to do something? Then sure, we are responsible for most of the things we do.

              But then you’ve also defined free will as “having the ability to make choices,” which is fine if you want to define it that way, but you have to at the same time accept that the choices we make are determined and we could not have not made them. If that’s the case, I think it would be silly to call this “free will,” since the idea of being determined to do something is completely at odds with what most people think of when they say “free” will.

              But hey, I’m not here to argue semantics. My point is, I still don’t know what free will means to you.

              “Basically, to deny free will in a practical sense, to me, means to deny that the effort we put into making personal choices has little or no effect on personal outcomes.”

              That’s simply not the case. Every moment of our lives we have options. In every moment we have the opportunity to make a choice about what to do next. The choices we make affect our lives profoundly, so there certainly is an “effect on personal outcomes.” Not having a “free” will simply means that we are determined to make the choices that we make. Given the same brain state, we would make the same choice every time. If you can explain how there is some wiggle room outside of that, please do.

              “But you can just as well argue that if we don’t have free will, societal order should be enforced through, say, some sort of eugenics program and widespread drug use for behavioral normalization, ala A Brave New World.”

              I don’t think you can argue for it. What exactly is your argument? Behavioral correction can be accomplished without drugs or eugenics, so those things certainly aren’t necessary. Furthermore, drugs subvert the will. I think the method of behavioral correction that humans would find most attractive (given that we don’t have free will) would be one that respects others’ rights to make choices. If you coerce me into behaving a certain way, I haven’t chosen it. However, if I am a criminal and you rehabilitate me, then I’ve become a changed person who can choose to do the right thing of my own will. There’s no reason why humanity couldn’t do things this way.

            7. Tim,
              You’re criticizing a lack of precision in my definition of free will. The box I have to type in here is less than 2.5″ wide, the column it’s ultimately displayed in is still less than 3″ wide. I’m trying to be brief. I’m failing, but I’m trying. Anyway, if you want to have this conversation, you need to expect a conversation at a level appropriate to the format.

              I’m not sure how precise a definition of free will you want, anyway. You are saying that physical determinism negates free will, so I guess you want the definition in terms of a physical theory. Maybe something to do with quantum mechanics in nanostructures in the brain. Really, though, I talked about the level of abstraction that free will is supposed to apply to, and the level of electrons and protons and quantum mechanics isn’t it. You’ll find it’s really hard to define “human” or an individual human at that level of abstraction, too.

              I’m also not trying so much to define what free will is. I was trying to explain why I think people have expressed existential dread at the thought of not having free will, and also why others have expressed brave acceptance of that frightful condition.

              Also, I think I did give a more thorough definition of at least what it means to NOT have free will. And I’m not sure how to interpret your response: I think you meant that the world doesn’t conform to my description of a world without free will. Or maybe you meant that I don’t actually think that’s what it means not to have free will.

              Anyway, my coarse definition of fatalism isn’t so absurd. Surely, if one decides that they don’t have free will, they could just stop trying and ride the waves of fate. It’ll all work out the same in the end. But since of course they couldn’t do that without expecting a drastic change in their life, whatever it is a person might think they want out of free will, they actually already have.

              Also, I didn’t say I have an argument that determinism justifies A Brave New World. I said the idea that acceptance of the fact that our lives are determined by physics will result in a more humane justice system is absurd, and that a person could make a similarly sloppy argument that such determinism would justify A Brave New World.

              Finally, are you suggesting that if people did have free will, then rehabilitation would not be a worthwhile objective of the penal system? Or if not, why did you bring it up? It’s just a distraction.

              Actually, though this is already too long, I need to make one more long comment on one more thing. You and others keep insisting the question hinges on “could one have chosen differently [if],” without specifying what the counterfactual really is. If what? Could one have chosen differently if things were THE SAME? Uh, maybe, maybe not. THE SAME when? Things could have been THE SAME at the beginning of the universe, and the universe, including our individual lives, can evolve differently. THE SAME in the moments before the decision was made? Well, maybe not, but do you need to have such freedom? THE SAME except that you are a different person? Oh, well, of course you could have chosen differently then. THE SAME except that you had learned some important lessons earlier in your life? THE SAME except that you learned some unimportant lessons earlier? How same do things have to be to have a free will worth valuing? Keeping in mind, again, that others have expressed existential dilemmas at the thought of not having free will. For reasons that I don’t really understand.

            8. Peter,

              Space is important when you need to give detail, not when you need to be precise. Many dictionary definitions are quite precise, though short. Lack of space is not the problem with your definition; the problem is that you didn’t define anything. You just restated “free will” using other ambiguous terms. “The ability to make choices” and “the ability to make different choices given the same brain state” are 2 precise, short definitions of free will that let you know exactly what I’m talking about. And notice they don’t mention any physics. I don’t need a physics-based definition from you; I just need something that tells me what you’re talking about. How can I tell whether an organism is “owning” its actions or not?

              “Also, I think I did give a more thorough definition of at least what it means to NOT have free will.”

              Mmm… which part was that?

              “Surely, if one decides that they don’t have free will, they could just stop trying and ride the waves of fate.”

              Meaning… they would stop agonising over difficult choices? That would just be an example of misunderstanding determinism. You still have to make choices (even choosing not to do anything is a choice); determinism just means that whatever you choose, you were always going to choose it. But you don’t know what you’re going to choose until you go through the deliberation process. Lack of free will does not eradicate choice.

              “I said that idea that acceptance of the fact that our lives are determined by physics will result in a more humane justice system is absurd”

              Oh, so the assertion you made without argument was different from the one I thought you made. Ok. Still, you’ve given no argument.

              “Finally, are you suggesting that if people did have free will, then rehabilitation would not be a worthwhile objective of the penal system?”

              No. I brought it up as a better alternative to drugging people.

              “You and others keep insisting the question hinges on “could one have chosen differently [if],” without specifying what the counterfactual really is.”

              I’ve been clear about this the entire time. The question is “could one have chosen differently given the same brain state.” The answer is no.

              Feel free to reply in the main thread to get our space back.

            9. Tim,
              The most central point that I’m trying to express, which seems to keep going past you, is that the question of free will is not a simple analytic question, the answer to which is either true or false. It’s a value-laden question. Have you noticed that? It’s raised in particular contexts. And I think the analytic definition you are insisting on: Free will = ~Determinism, does not have much to say about the sort of contexts that free will comes up in.

              Anyway, rather than try to reply in detail to your specific questions right now, I’d rather propose a thought experiment (which I proposed in a previous thread, too). Suppose someone wires a quantum mechanical random number generator into their brain, so that it’s frequently firing off random synapses. Could a person using such a prosthetic have free will? Would they necessarily have free will? Yes, their behavior is determined in part by the RNG, and the particular way that it’s wired to their brain, but the RNG is indeterministic with respect to everything else. If the RNG is biased toward certain numbers, what would the implications of that be?

            10. It would be a simple yes or no if you would just define it. Again, look at the example definitions I gave in my last comment. Under one of them, we have free will. Under the other, we don’t. Unequivocally. I don’t know what you’re trying to say about values because you haven’t made it clear, but if you want to say anything about a concept, then it’s about time you told me what the concept is. You don’t seem to get that.

              “And I think the analytic definition you are insisting on: Free will = ~Determinism, does not have much to say about the sort of contexts that free will comes up in.”

              1. That’s not my definition. If the universe was random and didn’t follow laws, that still wouldn’t make us any freer.

              2. And what contexts have you mentioned, exactly? Crime and punishment, and making choices? Anything else? Because I’m pretty sure the idea that people deserve punishment for their bad actions and praise for their good is exactly what people start to question when they realize that determinism is true. So please, if you’re going to say that I’m not being relevant, back that up with something.

              “Could a person using such a prosthetic have free will? ”

              Seriously? Define free will already! You’re the one who wants to use some definition different from the standard, and yet you refuse to provide it. Can you answer this question? “Could a person using such a prosthetic have snarfblatt?” Maybe you can’t answer because you don’t know what that last word is? Right, and neither do I regarding your question. Define your terms or there’s no point to this conversation.

  9. My mind state is deterministic at any point in time, but my mind state can be chosen by my mind at an earlier point in time. And, yes, my mind state at that earlier time was deterministic, but it was chosen by my mind at a still earlier point in time. Loop it enough times and you have free will and moral responsibility.

    1. Infinite regress and all…yes.

      I’m still a bit unclear as to the *level* of determinism that makes one a determinist. Seems to me there’s a continuum.

      Is it free will if I change my mind at the ice cream stand and have chocolate instead of the vanilla-chocolate swirl? Is it determinism that made me marry my first wife? If I have an argument with a neighbor, is that determinism? What if I regret it later? Is that more evidence of determinism?

      Is this just navel-gazing? And if it is, does that mean I’ve been pre-determined to gaze at my navel? Or did I choose to do so freely? Or maybe only partially freely?

      My head hurts. I’m going to go watch the end of the baseball game.
