Bear and Bloom: An experiment on the illusion of conscious will

May 11, 2016 • 10:00 am

“Because it lags slightly behind reality, consciousness can ‘anticipate’ future events that haven’t yet entered awareness, but have been encoded subconsciously, allowing for an illusion in which the experienced future alters the experienced past.”  —Adam Bear

In discussions about our idea of “agency” (or, if you will, “choice” or “free will”), I’ve described experiments showing that you can, to a substantial degree, predict what kind of binary choice—a choice between two actions—someone will make up to 7 seconds before they report having made a conscious choice. This has now been shown in several experiments, and it suggests this: your brain makes “choices” for you before you’re conscious of having made them. And that comports with determinism: the view that our feeling of free agency is illusory, for at any moment when we face a “choice” there is only one choice we can make: the one the laws of physics dictate, acting via our genes and environmental influences. Yet we feel otherwise, and strongly so.

That’s really not much of a surprise, though those who believe in libertarian free will, or even in compatibilism (i.e., free will is compatible with physical determinism) don’t like those experiments.

What is surprising, though, is the suggestion that your consciousness of having made a choice comes not only after your brain has made the decision, but after you’ve actually made the choice.

That’s really not all that surprising, though—not if the notion of having chosen something freely is a confabulation: a part of our neuronal circuitry, perhaps evolved, that makes us feel as if we’ve chosen something when the result of the “choice”—the action—has already occurred. If that’s the case, it seems like a spooky reversal of time. But it’s really not. It’s just our brains fooling us by giving us an experience, or implanting a “memory”, that is in an incorrect time sequence with respect to an action.

This is one of the conclusions you can draw (there are others; see below) from a nice new paper in Psychological Science by Adam Bear and Paul Bloom at Yale (see reference below; free access). Bear has also written a very good and comprehensible summary of the results at Scientific American, “What neuroscience says about free will.” (Answer: you don’t have it, at least in the libertarian form.)

The experiments were clever, and came from the hypothesis that if conscious choice was illusory, you could think you’d made a choice after the choice was actually made and acted upon.

The first thing the authors did was expose the subjects (who had been trained) to five randomly-placed circles on a computer screen, asking them to choose one circle quickly. Then, after intervals of time ranging from 50 to 1000 milliseconds (0.05 to 1 second), the computer randomly turned one of the circles red.

The subjects were then asked if their chosen circle turned red. They had three choices: “yes”, “no” and “I didn’t have time to choose before the circle turned red”, all indicated by pressing one of three keys on a keyboard.

Without any “postdictive bias” of the kind I described above, one would expect “yes” to be answered about 20% of the time when subjects reported that they did make a choice, because the circle that turned red was one of five chosen randomly by the computer. Instead, regardless of the interval before the circle turned red, the probability that subjects said “yes, my chosen circle turned red” was always higher than 20%. That’s shown in the graph below, which plots the probability of a “yes” answer against the interval after which the circle turned red.
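The 20% null expectation is easy to check with a quick simulation (my own sketch, not the authors’ code): if a subject’s pick and the computer’s red circle are independent and uniform over the five circles, the “yes” rate sits at 1/5.

```python
import random

# Null hypothesis for Experiment 1: the subject's choice and the computer's
# randomly reddened circle are independent, each uniform over five circles.
random.seed(0)

N_CIRCLES = 5
N_TRIALS = 100_000

yes = 0
for _ in range(N_TRIALS):
    subject_choice = random.randrange(N_CIRCLES)  # subject's pick
    red_circle = random.randrange(N_CIRCLES)      # computer's random pick
    if subject_choice == red_circle:
        yes += 1

print(yes / N_TRIALS)  # ≈ 0.20
```

Any systematic excess over this baseline, as in the paper’s data, needs an explanation beyond chance.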

What’s important about this plot is not only that the probability was higher than 20%, which means that people were saying that their “choice” turned red more often than they should, but that that probability was higher when the interval between the start of the experiment and the circle’s turning red was shorter. That is, people’s bias—that they had “chosen” the circle that later turned red—was higher when they had less time to “make” a choice:


Fig. 3 (from paper) Results from Experiment 1: probability that participants chose the red circle on trials in which they claimed to have had time to make a choice. The error bands denote 95% confidence intervals. Also shown are the results of the best-fitting logistic model of responses as a function of the reciprocal of time delay.

That makes sense, for according to the authors’ model of choice confabulation (below), your memory bias would be greater for the shortest delay between the start of the experiment and the circle’s turning red. Confabulation is likely limited to a short window of time, simply because it’s less likely you can reverse your experience or rewrite history after a longer period. The authors describe this as “the window of unconscious processing”. Note that you wouldn’t expect a negative relationship of the sort shown above if people were simply lying about whether they chose the red circle, as such lying shouldn’t show any time dependence.
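The figure caption mentions a best-fitting logistic model of responses as a function of the reciprocal of the time delay. Here’s a sketch of that functional form with made-up coefficients (not the fitted values from the paper), chosen so the curve decays from roughly 30% at short delays toward the 20% chance baseline:

```python
import math

def p_yes(delay_ms, b0=-1.39, b1=30.0):
    """Illustrative logistic model of a "yes" report as a function of the
    reciprocal of delay: p = sigmoid(b0 + b1 / delay).
    The coefficients are invented for this sketch, not taken from the paper.
    """
    return 1.0 / (1.0 + math.exp(-(b0 + b1 / delay_ms)))

# The modeled bias shrinks monotonically toward chance as the delay grows.
for delay in (50, 100, 250, 500, 1000):
    print(delay, round(p_yes(delay), 3))
```

The reciprocal-of-delay term captures the key qualitative feature of the data: the postdictive bias is concentrated at the shortest delays and washes out as the delay exceeds the window of unconscious processing.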

There are other controls described as well, like seeing if the degree of confidence a subject had in his/her choice affected this relationship (it didn’t); but you can read the short paper yourself.

Here’s the authors’ model (be sure to read the caption):

Fig. 1 (from paper). A model of postdictive choice in Experiment 1. Although choice of a circle is not actually completed until after a circle has turned red (choice time > delay), the choice may seem to have occurred before that event because the participant has not yet become conscious of the circle’s turning red (choice time < delay + lag in consciousness). The circle’s turning red can therefore unconsciously bias a participant’s choice when the delay is sufficiently short.
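The caption’s timing conditions can be written down directly. In this sketch the 100 ms consciousness lag is an illustrative assumption, not a number from the paper; a trial is open to postdiction when the choice actually completes after the circle turns red but before that event reaches awareness:

```python
def feels_like_a_prior_choice(choice_time_ms, delay_ms, consciousness_lag_ms=100):
    """Per the Bear & Bloom model (Fig. 1): a choice completed at choice_time_ms
    can seem to precede a circle turning red at delay_ms when
    delay < choice time < delay + lag in consciousness.
    The 100 ms lag is an illustrative placeholder, not a figure from the paper.
    """
    actually_after = choice_time_ms > delay_ms
    not_yet_conscious = choice_time_ms < delay_ms + consciousness_lag_ms
    return actually_after and not_yet_conscious

# Short delay: the choice lands inside the window of unconscious processing.
print(feels_like_a_prior_choice(choice_time_ms=120, delay_ms=50))     # True
# Long delay: the subject is conscious of the red circle before choosing,
# so no illusion is possible.
print(feels_like_a_prior_choice(choice_time_ms=1150, delay_ms=1000))  # False
```

This also makes clear why the bias should fall off with delay: at long delays, fewer and fewer choices can land inside the fixed-width window.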

The authors thought of one problem with the experiment above. If the subjects were confused about whether they had chosen the circle that turned red, they might simply randomly press the “yes” or “no” button. That would drive the “yes” answers, expected to be 20%, towards 50%, giving the higher-than-expected “yes” rate shown above.

To deal with this, they ran an experiment in which they showed TWO randomly positioned circles on a screen, each with a color drawn from an array of six. They told the subjects to choose one color. Then a third circle appeared between the two, with a color randomly chosen from the two initially displayed; and, as in the five-circle experiment, it appeared after intervals ranging from 0.05 to 1 second. This way, a random punch of “yes” or “no” (“I chose the right color” or “I chose the wrong color”, respectively) due to confusion would not bias the results: with only two circles, random responding would just push the probabilities of “yes” and “no” toward 50%, which is what they should be anyway.
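A toy simulation (my own, with an assumed 30% rate of “confused” trials) shows why random button-pressing contaminates the five-circle design but not the two-circle one:

```python
import random

# On "confused" trials the subject presses yes/no at random; on the rest,
# the report is an honest comparison of an independent random choice with
# the computer's independent random target.
random.seed(1)

def yes_rate(n_options, p_confused, n_trials=100_000):
    yes = 0
    for _ in range(n_trials):
        if random.random() < p_confused:
            yes += random.random() < 0.5          # coin-flip answer
        else:
            choice = random.randrange(n_options)  # genuine, independent choice
            target = random.randrange(n_options)
            yes += choice == target
    return yes / n_trials

# Five circles: guessing inflates the rate above the honest 20% baseline.
print(yes_rate(5, p_confused=0.3))  # ≈ 0.29
# Two colors: guessing leaves the rate at the honest 50% baseline.
print(yes_rate(2, p_confused=0.3))  # ≈ 0.50
```

So in the two-circle design, any departure from 50% cannot be explained away as confused random responding.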

And again, the same bias was shown: subjects reported that they had chosen the circle of the same color as the one that appeared later with a probability higher than 50%: as high as 63% at short time intervals. And again, the shorter the time interval, the greater the bias seen in the self-reports. Here’s the graph of the probability of saying “yes” against the time delay. The overall pattern is statistically significant (p = 0.002):

Fig. 5. (From paper). Results from Experiment 2: probability that participants chose the circle that matched the color of the middle circle on trials in which they claimed to have had time to make a choice. The error bands denote 95% confidence intervals. Also shown are the results of the best-fitting logistic model of responses as a function of the reciprocal of time delay.

What both of these experiments seem to show is that, as Bear wrote in the Scientific American piece, “Perhaps in the very moments that we experience a choice, our minds are rewriting history, fooling us into thinking that this choice—that was actually completed after its consequences were subconsciously perceived—was a choice that we had made all along.” The paper with Bloom cites earlier experiments that also support this result. We have to face the possibility that, just as we now realize that choices can be made by the brain before we become conscious of them, choices may actually be carried out before we become conscious of having made them; and yet we feel that the sequence was the opposite of what really happened.

Three issues remain:

  • How common is this phenomenon? This is the first experiment I know of that tested the “confabulatory choice” idea, and we clearly need more and differently designed studies to test the robustness of the conclusions.
  • Could there be another explanation? Yes, the authors mention at least one.  They describe an alternative to their explanation that having made a choice subliminally biases you into thinking you made it before you did, and in fact after the choice was enacted. The alternative is that you experience the choice at the correct moment you made it (i.e., after the circle had turned red), but that the time of that choice is “immediately afterward encoded into memory incorrectly, which subsequently biased their reports of what they had chosen.” Thus we have a memory-revision versus a misperception hypothesis. To me this is a distinction without much difference, for it leads to the same phenomenon: we think we make conscious choices not only after they’re unconsciously made by our brain, but also after we actually carry out the actions.  I should add that the authors give three other limitations of their conclusions, but they’re not the kind that invalidate their results; and you can see them by reading the paper.
  • Why does the brain work this way? Under determinism, there is no problem with us becoming conscious of having made a choice only after our brains have made it for us. Nor is there a naturalistic problem in accepting that, in short intervals, our actions could actually precede our having the sense of “choosing” to do them. What we don’t understand is why we have the illusion of being conscious agents: an illusion of having made the choice at the moment our brains made it, and of having performed an action only after we’re conscious of having decided to do it. All the experiments suggest that these “feelings” don’t represent the real temporal sequence of decision-making.

As I mention in my lecture on free will, the illusion of agency could be either an epiphenomenon of our complex brains, or it might have evolved, for various reasons. One is that we would leave more copies of our genes if we hold others and ourselves responsible for making choices—for deceiving ourselves into thinking that we could have done otherwise. This could lead to a schema of reward and punishment that could allow one to function better in a small social group. But that’s just a guess, of course. In his Scientific American piece, Bear speculates along these lines, and I’ll leave the last word to him:

Perhaps the illusion can simply be explained by appeal to limits in the brain’s perceptual processing, which only messes up at the very short time scales measured in our (or similar) experiments and which are unlikely to affect us in the real world.

A more speculative possibility is that our minds are designed to distort our perception of choice and that this distortion is an important feature (not simply a bug) of our cognitive machinery. For example, if the experience of choice is a kind of causal inference, as Wegner and Wheatley suggest, then swapping the order of choice and action in conscious awareness may aid in the understanding that we are physical beings who can produce effects out in the world. More broadly, this illusion may be central to developing a belief in free will and, in turn, motivating punishment.

The unstated implication here is that a belief in free will and motivation for punishment (and I’ll add “reward”) leads you to leave more copies of your genes than do individuals without such beliefs and motivations.


Bear, A. and P. Bloom. 2016. A Simple Task Uncovers a Postdictive Illusion of Choice. Psychological Science. Published online before print April 28, 2016. doi:10.1177/0956797616641943

60 thoughts on “Bear and Bloom: An experiment on the illusion of conscious will”

  1. That’s really not much of a surprise, though those who believe in libertarian free will, or even in compatibilism (i.e., free will is compatible with physical determinism) don’t like those experiments.

    I’m a compatibilist and I’m entirely happy with such experiments. They are entirely in line with the compatibilist view that a “decision” is deterministically computed by the brain.

    1. I should have said SOME compatibilists dislike those experiments. Most compatibilists are also determinists, so there’s not much of a reason to have a bias against the results.

      1. I was under the impression that all compatibilists are determinists. Isn’t the definition of compatibilism that free will is compatible with determinism? If not, how would you define the term.

        1. Jerry, did you mean to say “… not ONLY after …” here?:

          “To me this is a distinction without much difference, for it leads to the same phenomenon: we think we make conscious choices not after they’re unconsciously made by our brain, but also after we actually carry out the actions.”

      2. I’m not aware of any compatibilists who dislike those experiments, per se.

        (Though I certainly may have missed some).

        I AM aware of compatibilists who dislike
        much of the analysis they see of those experiments.

        That is, when incompatibilists draw inferences from those experiments, compatibilists see question-begging, rash, or faulty inferences. For instance, question-begging reductionism of the “I” to only being the moment of consciousness, where the “brain making decisions FOR US” is implied as being “not us” and therefore “we” weren’t the ones making the decision.

        1. As a Computer Scientist AND a Compatibilist I find these “time delay” experimental results totally and utterly irrelevant. In any hierarchical, distributed, parallel multiprocessor system, processing and reaction delays of this nature are absolutely inherent in the functionality of the system. In a massively parallel system such as the human brain, then, why is anyone surprised at all to see the same effects? This behaviour is not evidence one way or the other in settling the debate about free will.

          1. A thought experiment to show this:

            You are to participate in a “free will” experiment.
            You, being a conscious, highly self-formed agent yourself, write the program for a small computer: the program chooses how long to wait before ringing a bell after a “stimulus light” is turned on in the experiment. You leave the room. You re-enter the room some time later; the bell is now ringing. The experimenter says, “See? You have NO FREE WILL; the ringing started before you were conscious of it.”
            Your retort: “Not at all; the bell was rung with me holding the ultimate responsibility for the decision taken by my subordinate processing entity, running MY program.”

            Time differences to consciousness mean nothing; only the responsibility for the “self-formed” decisional process being used really matters in such free-will experiments.

    2. Not only that, but “our brains are faking it” is the very essence of compatibilism.

      If we were able to disentangle what happens from what we think happens, there wouldn’t be any compatibilism.

      But likewise, obviously compatibilism does not bear on the biology of the brain-body mechanisms. It is just a curious observation, not something that “free will” theology can be supported by. (But admittedly, it may confuse some who may be enticed by the religious view.)

  2. Maybe someone can help me here: Why is having your unconscious mind make a decision before your conscious mind is aware of it inconsistent with free will?

    It is still your mind (brain), which developed, aged, and matured as part of you.

    The other question is whether decisions are the inevitable outcome of physics. However, the brain is not only a physical system but a computing device. We can certainly obtain indeterminate answers from computing systems, either based on inputs or as part of the programming.

    1. I see it as a beneficiary versus generator problem. Autonoetic consciousness is “inherited” post hoc and is simply an unfurling of inexorable molecular configurations. The sensation of free will is so powerful that it appears synchronized.

      1. But this doesn’t solve the problem. It is still your brain performing a calculation and making a decision.

        Would you consider it “not you”, if you are made aware of the result of the calculation even if it was performed by your brain?

        1. It has to be done consciously, and you could have consciously decided otherwise. If you are predetermined to make only one decision, that is absolutely inconsistent with libertarian free will.

    2. Why does having your unconscious mind make a decision before your conscious mind is aware of it inconsistent with free will?

      Most conceptions of libertarian free will have the consciousness being the willer. I guess one could invent a libertarian free-will concept that was nothing to do with the consciousness, but that would be weird to most of those who believe in such things.

      1. I guess I don’t understand the implication that your unconscious is not part of you.

        The unconscious taking part on decision making processes seems uncontroversial to me.

        1. The idea is not just the unconscious, but that a. you FEEL as if you’re making the decision and b. you can make only ONE decision rather than being able to make several decisions, which is the linchpin of libertarian free will. It is the form of “free will” held by most people surveyed and, of course, purveyed by many religions.

          If you still don’t get it, read Sam Harris’s “Free Will”

  3. I worry that the >20% result might be misinterpreted by pseudoscientists as (weak) evidence for precognition, i.e., “I predicted the future better than the expected 20% of the time.”

  4. These experiments are always interesting, but it seems to me that Dennett’s discussion of the early ones in _Consciousness Explained_ still has to be addressed carefully. This is the question of timing and “what happens where”. The analysis of the experiment seems to still presuppose a linear process, which is increasingly false the “further in” one gets. It thus becomes not well defined when an action is initiated, because it seems plausible to say that many processes are initiated in parallel, etc.

    1. It seems to me that a lot of the problem with this whole discussion is what exactly we mean by “you”.

      Sometimes the term is used to indicate your conscious awareness, sometimes it seems to be your unconscious self, sometimes it seems to be your physical brain.

      Might be easier just to chuck the whole concept of a unified consciousness and admit that the whole thing is an illusion.

  5. I don’t think any of this addresses what ordinary people think about free will. Nor what courts address when they seek to determine[!] competence and responsibility.

    The important thing is not the details of what the biochemistry is doing, but whether a person — as a behaving system — is capable of learning from experience and capable of understanding consequences.

    This fork in the road leads either to incompetence and a mental institution, or to competence and punishment.

    One can and should argue about the utility and effectiveness of the legal system, but the basic question of responsibility is sensible. The imposed consequences are part of the system that determines behavior. The philosophical issues are just academic. Angels on pinheads.

    1. Does moral responsibility require conscious decision making?

      If not you might be right, if yes we might be punishing people just for being unlucky.

      Maybe humans are not as competent as they like to think they are.

      1. An altruistic act is a good example. What is altruism? If it makes you feel better to act, was that not a selfish act, by definition, even if it improves another’s life?

        I contend that the best one can argue for altruism is an unconscious act, like manners. Open the door for someone, without thinking. Smile at a stranger, without thinking. Pick up a napkin for someone who dropped it, without thinking. These are morally responsible, require energy, and, in many instances, are done without thought.

        We are trained like monkeys to be altruistic, but if you dive onto a mine to save a fellow soldier, it is likely you did it, unconsciously or consciously, knowing that you would prefer it that way and that preference is self-motivated.

  6. Why can’t we remember the future? (*)

    If memories work, evolutionarily, to serve a purpose, consciousness may support only remembering the past and not the future. Many aspects of our reality are reversible, why can’t we remember those future events? If the events are actually incapable of being remembered, consciousness probably worked out the best and possibly only method to reward us with an experience that most closely matches the thermodynamic arrow of time and makes us capable of comprehending our own awareness without a discrete solution for future events…possibly because we already know we cannot know what those events will be.


  7. I guess its main function is to enforce/encourage cooperation.

    If humans lacked this perception of making choices in complete freedom, praising/rewarding and blaming/punishing other humans for their actions would be much more difficult.

  8. This is an interesting set of experiments, but I think there’s a much simpler explanation for the results than one involving any ideas about free will or lack thereof. I think a much simpler and satisfying explanation is that participants were just more inclined to always agree; i.e. were biased towards answering “yes”. The authors mention this issue in the Results section of Experiment 2:

    “These effects held when we excluded 2 participants who indicated in debriefing that they believed they had a greater bias to choose the circle matching the middle circle’s color on trials with shorter delays.”

    I wouldn’t expect that these 2 participants were the only ones affected by this bias, and I would suggest that the results of this study are really just measurements of this bias in the study population.

    Furthermore, I don’t buy the assumption that if participants were confused about whether or not they “chose” the red circle that this confusion would result in them randomly pressing “yes” or “no”. Under the proposition that participants are just more likely to agree, no matter what the task or question, we would not expect a 50/50 breakdown.

    I think given the study population (i.e. mostly undergrad college students, some participating for course credit!), the idea that participants were just more inclined to agree no matter what the task is even more likely.

    Definitely an interesting pilot study, but I think the authors are reading too much into their results.

  9. Adding my voice to several people above who wrote that it is still me making a decision if I make it unconsciously. Making a distinction between conscious me and non-conscious not-me seems to be merely a variant of mind-body dualism, something that I would have hoped people who disbelieve in the supernatural would have gotten over. You are your body, not merely an immaterial ghost living in it.

    That leaves only “could have made another choice”, which is not an issue for compatibilists. But it is also a tricky one in general, as every event only ever happens once, be it a human decision or a rock tumbling down a slope. So how is it different from any other context in which we explore how stuff works? Whenever we say “could have been different” we mean “if circumstances had been slightly different”.

  10. The mystery here isn’t “free will” which cannot even be defined in any physically meaningful way. The mystery is experience itself. I can find no reason that conscious experience should exist or any use for it. Even if it exists it seems to have no observable effect.

    Except we somehow have the power to talk about it.

    1. It probably is a way for the mind to model and predict its own behavior, just like it models and predicts the external environment.

      1. That’s not really an explanation of anything. It’s like if I could travel into the future and you said “It’s just a way to plan for the future.”

        Yeah time travel can be used for that. But that fact is not an explanation of time travel.

        Also, a computer can be programmed to model the external environment to make predictions. It can do so without any evidence of subjective experiences or any need for them. Even if it had subjective experiences, that fact would play no role in understanding the program. Subjective experiences would only make it a helpless witness to events that it has no control over. Thus subjective experiences have no external effect that makes them detectable.

        Except we somehow have the power to talk about them.

        1. Sorry, I can’t lay out the precise neurophysiology here. If I could, I’d collect my Nobel prize.

        2. We don’t just model the external environment. We model ourselves as well. When making a decision, it’s not enough to extrapolate various alternative future environments; we must also extrapolate our own subjective feelings about those alternatives in order to know which ones we prefer. That can’t be done without some sort of internal awareness of self.

          So I think you’re mistaken in thinking that subjective experience contributes nothing to decision-making, or that a computer could be programmed to do everything we do and still feel nothing.

          1. You are missing the point.

            Computers monitor their own state all the time. There is nothing special about that. But again they can do so without the need for subjective experiences. It is entirely objective.

            Consciousness feels like a feedback loop. And there is no reason that it cannot be a feedback loop. But there is no reason that it should feel like a feedback loop, and feeling like a feedback loop seems to add nothing.

            Self monitoring and feedback is ubiquitous in both nature and our technology. There is no reason to think it feels anything.

            1. It seems to me you’re painting yourself into a corner. Your intuition tells you that systems made of atoms (no matter how complex) can’t be conscious; there ought to be nothing it feels like to be such a system.

              But this is clearly falsified by our own experience. So maybe it’s time to abandon that intuition and accept that there is indeed something it feels like to be a complex self-modeling system. We don’t understand precisely how that feeling emerges from the collective action of atoms, but our lack of understanding doesn’t negate the fact that it happens.

              Our technology incorporates feedback, but to date we have come nowhere close to building an artificial introspective system capable of carrying on an intelligent conversation about the nature of consciousness. If at some point we do, I’m certainly not prepared to say that there are no feelings going on under the hood and that its reports of such feelings must necessarily be false.

              1. No my intuition is telling me that subjective experiences absolutely do exist. Logic tells me that deterministic laws and causal webs no matter how complex cannot bridge the subjective/objective divide.

                A line of dominoes will fall once the first one is pushed. You can make the pattern of dominoes as complex as you like and even set them up to do calculations. But the end point is determined by the starting point. There is no role for subjective experience in understanding the pattern of falling dominoes. Complexity is irrelevant.

                A computer program is much like falling dominoes. It is simply a vast Boolean equation where the next state follows from the previous state and in fact is caused by the previous state. Just like the falling dominoes.

                The same is true of any system that follows causal deterministic rules. You may as well be watching a movie depicting events rather than real events. You don’t need to refer to free will or subjective experience to explain anything in either.

                I think that it is you who is being driven astray by intuition. We live in a world of subjective experience like a fish lives in water. The fish likely views the water as a platonic given and sees no mystery. Until it discovers the surface. Similarly our immersion in subjective experience makes it hard to understand that there is a mystery there.

                And as for compatibilism… I can not wrap my head around why anyone is drawn toward it.

              2. If by “mystery” you mean we don’t currently know the answer, then obviously I agree. If you mean we can never know the answer, I think it’s way too soon to leap to that conclusion.

                My own view is that the so-called “hard problem” — why we feel like something instead of nothing — will come to be viewed as a pseudo-problem of the same ilk as the “problem” of existence itself. (Many people already view it this way.)

                My guess is that once we understand the mind well enough to build an artificial one, such notions as introspection without self-awareness, or experience that doesn’t feel like anything, will seem as incoherent as the idea of p-zombies that talk knowledgeably about the nature of consciousness without ever having experienced it.

                Regarding your domino computer, Hofstadter argues roughly as follows:

                If we limit our analysis to the level of physics and falling dominoes, we can explain what happens in any given run of the machine. But that gives us no insight into what will happen with different inputs; the only way to answer that question is to set up the dominoes (or domino-level simulations of them) and see where they fall.

                But if we know what algorithm the computer implements, then we can make intelligent predictions about what it will do without actually running it. So complexity is not irrelevant; ignoring the individual dominoes in favor of a systems-level view gives us understanding that physics alone does not.

              3. Very well put. The hard problem is probably nothing else than applying an internal simulator to our own behavior. Couple this simulator to a dopamine squirt when obtaining better outcomes, and you have an incentive to keep doing the simulation.

              4. Obviously I’m saying that we don’t know the answer but no I am not saying we can’t know the answer. Maybe we can’t but that is a different subject.

                What I am saying is that it is a very different kind of question than science usually deals with. As a result I cannot imagine even what the answer would look like.

                Maybe it is a pseudo-problem. If so then working out and understanding how that is so is a very much more interesting problem than the free will question. All I can say is you should be careful. Calling it a pseudo-problem may be simply a result of our total failure to get any grip on the problem.

                Complexity isn’t irrelevant for trying to predict the output of an algorithm. I never said it was. At some point you really do have to take an incomplete system view, and that puts practical limits on your ability to make predictions. I would not say that this system-level view gives us understanding that physics alone does not. In fact, taking system-level views is exactly how you do physics. Newtonian mechanics mostly works for everyday problems but it ignores some minor details. You add quantum mechanics and get more detail, and now you can understand electricity and magnetism. You then get a deeper understanding by adding quarks and quantum chromodynamics. Then you add the Higgs mechanism for electroweak symmetry breaking. Physics is a vast system of nested system-level views.

                No, I didn’t say that complexity was irrelevant for predicting a complex system. I said it was irrelevant for explaining subjective experience. The output of an algorithm is objective. Experience isn’t.

    2. “The mystery here isn’t “free will” which cannot even be defined in any physically meaningful way.”

      Sez you. 😉

      A Compatibilist description of free will (per Wikipedia) defines an instance of “free will” as one in which the agent had the freedom to act according to his own motivation; that is, the agent was not coerced or restrained.

      This understanding of free will describes different physical situations, and can be tested empirically. Sounds physically meaningful to me.

      “The mystery is experience itself.”

      Why? We have senses that take in information from the external world, from which our brains produce internal models or impressions, hence “internal experiences.”
      Imagination is our way of using these impressions to run through various models of our own behavior, to better predict which behaviors will get us what we want.
      Sounds pretty useful to me.

      “Experience” is a label we put on this process in our minds.

      “I can find no reason that conscious experience should exist or any use for it. Even if it exists it seems to have no observable effect.”

      Well, in the context of what is being generally discussed in this thread – the relationship of consciousness to our actions and decisions – it seems to me consciousness still looks important. Whatever is happening “beneath” our consciousness, it’s mostly the things we are conscious of that matter most to our decision making and deliberate actions.


      1. No.

        Our behavior is constrained in exactly the way a falling rock is constrained. Our behavior is far more complex but that complexity adds nothing to the subject of constraint. The weather is complex but it is just as constrained as the falling rock.

        Perhaps you would like to tell me how I can measure how constrained a system is so that I can tell how much free will it has. For example how do I analyze a computer program to see how much free will it has?

        And on experience – a robot can have equipment that can “take in information from the external world” and form “internal models”. A self driving car can do just that. But does it have experiences? Or is it just a calculation?

  11. I wonder how long before this is picked up and completely misrepresented by the ‘Scientists have found …’ TV trivia shows (as lampooned recently by John Oliver). “Why your brain is lying to you!” “How your subconscious controls you!”

    Or the Deepak…


  12. This article was posted on another forum a while back by people saying “see, no free will! We don’t choose what to do, aren’t in control…”

    Putting aside all the question-begging assumptions that have been hashed out on these pages before…

    Just like other experiments on consciousness that people raise in the free will debates, it still strikes me as incredibly rash to infer too much from such experiments.

    It’s like the example I’ve given before: the Defense Lawyer who puts his client on the stand, and says “Now, I’m going to tap your knee with this rubber mallet. Please, decide not to move your leg at all in response.” The Lawyer taps his client’s knee with a rubber mallet, and when the leg reflexively moves, the Lawyer declares “You see, judge, this example indicates that my client’s behavior is entirely reflexive and out of his control. Therefore he could not have been responsible for planning the defrauding scheme for which he is accused! I rest my case!”

    No one ought to get away with such a gratuitous leap of logic; they need to show how the process from that example accounts for FAR more than a little leg jiggle. How would that model account for vastly more complex scenarios: playing baseball, chess, planning trips, starting businesses, planning crimes? And it would have to start with even basic challenges to the model: for instance, innumerable simple experiments suggesting the client IS in control of his actions – e.g. “I’m going to ask everyone to stand in this room, but I’d like you to remain seated if you can.” No problem passing that test for most people, or any number of similar tests of one’s control. This “purely reflexive” model of behavior can’t just cherry-pick one example and ignore any number of other examples of human behavior that don’t fit the model. Same with any attempts to draw inferences from that study.

  13. Similarly, for the experiment conducted in the Scientific American article: the article explores the idea that the consciousness of a decision plays no causal role in making the decision, with the implication “This could SOMETIMES lead us to think we made a choice when we actually didn’t or think we made a different choice than we actually did.”


    This pattern of responding suggests that participants’ minds had SOMETIMES swapped the order of events in conscious awareness,

    (My emphasis). Ok, so if it is only “sometimes”, then what are we to make of the fact that many of the subjects’ answers (sometimes the majority, depending on how the experiment was run) did not suggest this result?

    Importantly, participants’ reported choice of the red circle dropped down near 20% when the delay for a circle to turn red was long enough that the subconscious mind could no longer play this trick in consciousness and get wind of the color change before a conscious choice was completed.

    So this effect dropped when there was more time for the decisions.

    How then is this supposed to map on, and explain the vast number of projects humans engage in which allow for far more time in contemplation and decision-making? Why in the world would anyone think we can infer from these specific, artificially forced tests where a minority of the results would be explained as a mismatch between our consciousness and our decisions…to a model that explains how we operate in ALL situations and “SEE, WE NEVER HAVE FREE WILL.”

    This just seems as gratuitous a leap as the Lawyer with the mallet in the courtroom.

    Remember, with any good theory the hits have to be explained just as well as the misses. We would have to explain all the times when our conscious understanding of our decisions DOES seem to map onto, and make sense of, what we actually chose to do – e.g. all those times we do what we have consciously declared beforehand we are going to do. Among myriad other issues.
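    For what it’s worth, the “sometimes” pattern is easy to caricature in a toy simulation. This is entirely my own sketch; the lag, bias, and trial parameters are invented, not Bear and Bloom’s actual model or numbers (only the five-circle, 20%-chance setup comes from the article):

```python
import random

def reported_red_rate(delay_ms, trials=10000, lag_ms=150, bias=0.6):
    """Toy postdictive model: 5 circles, one turns red after delay_ms.
    If the red onset beats a hypothetical perceptual lag, the subconscious
    'leaks' the answer into the reported choice with probability `bias`;
    otherwise the report is an honest 1-in-5 guess."""
    hits = 0
    for _ in range(trials):
        if delay_ms < lag_ms and random.random() < bias:
            hits += 1                  # report swapped toward red
        elif random.random() < 0.2:    # chance: 1 of 5 circles
            hits += 1
    return hits / trials

# Short delays inflate reported-red well above the 20% chance rate;
# long delays fall back toward 20%, matching the pattern the article describes.
```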

  14. I try to avoid calling things profound, but the lag time in consciousness and the stories we then tell to explain our “choices” strike me as the most profound insights about human nature I’ve learned about. Partly this is because it’s so obvious: of course consciousness lags behind the physical action of the brain which lags behind the things that act on the brain. But the implications of taking this in pivot how I see ethics, culture, and institutions. Far-reaching.

    At the same time, I’m not sure I get how far the results of this finding extend. This cautions me and makes me look forward to where this line of research heads.

    1. I’m also stricken with puzzlement about this post and the paper. I keep reading and reading excerpts–like trying to figure out a biostats equation–and even tried diagramming –and it’s nearly outside my grasp. Outside my ability.

      Very cool. And I still need to reread this.

  15. Why does the brain work this way?

    Let’s tackle this from the other direction. There must be some neutral or beneficial use to the ‘postdictive illusion of choice’, otherwise our species would already have adapted to another cognitive strategy, or gone extinct.

    Given two males (or two females) competing for the best genes for their offspring, how would having a more convincing ‘postdictive illusion of choice’ than the competitor work? My suspicion is that a ‘salesman’ who believes more strongly in the product they are selling is more likely to make the ‘sale’ – and in a social species like ours *behaving* as if ‘I have a better product to sell’ is a key behaviour for getting access to mates.

    tldr: The ‘postdictive illusion of choice’ boosts self-confidence and hence mating success.

    1. We will do what the laws of physics require us to do in an effectively deterministic world, the same as any program-driven robot. Our “mating success” flows from the deterministic nature of that program.

      Our experience of free will is just the result of the fact that we experience our own thought process much the way we experience color. That is what creates the “I” inside or at least the illusion.

      The mystery here is not in the program or the existence of free will however you wish to define it. The mystery is in the fact of experience.

  16. I think the last quoted paragraph is most likely: over time, the close association in time between our brains making the choice and our awareness of an action became hard-wired as the sense of awareness (consciousness) making the choice, because the causality illusion had survival and/or reproductive value.

  17. From what I’ve read about in modern cognitive science, our brains aren’t singular entities but instead contain multiple “yous”. Sort of like Congress. These different “yous” have varying primacy throughout the day. The “you” that you think “you” are, the one that feels like it’s in control, is more like the press secretary for Congress. And that press secretary’s job is to make the decisions made by Congress socially acceptable.

    But, crucially, the press secretary has no clue what algorithm Congress used to arrive at its conclusion; plausible deniability for the press secretary is inherent to the system.

  18. I’m starting to think of consciousness in Computer Science terms. The mind is making decisions in a highly distributed fashion and once a final unified decision has been made the various modules need to have this fact acknowledged. Consciousness occurs when all of the various modules are checking back in with each other and this mass communication frenzy is what we experience as our subjective experience of making choices. This would explain the delay that these experiments are seeing.

    It isn’t a great analogy, but it’s all I’ve got.
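    The distributed picture in that comment can be caricatured in a few lines. This is a toy of my own, with made-up module names, not any real model of the brain: the “decision” emerges from the modules’ votes before any module is told the result, and the acknowledgement round stands in for the lagging conscious report.

```python
import random

def decide(modules=("motor", "visual", "memory", "affect")):
    """Toy distributed decision: each 'module' votes, a winner emerges,
    then every module acknowledges it (the analogue of the delayed
    'conscious' report in the experiments)."""
    votes = {m: random.choice(["left", "right"]) for m in modules}
    # the decision is already made here, before any module hears about it
    winner = max(set(votes.values()), key=list(votes.values()).count)
    acknowledgements = [f"{m} acks {winner}" for m in modules]  # the lag
    return winner, acknowledgements

choice, acks = decide()
assert choice in ("left", "right") and len(acks) == 4
```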

  19. as a longtime fan of spider-man, i can’t help but notice the similarities between this description of unconscious vs conscious decision-making and standard portrayals of the operation of SM’s legendary spider-sense, his precognitive ability to sense and evade threats before they can do him harm.

    under bear and bloom’s model, SM’s subconscious decision-making apparatus simply has a louder “voice” than us regular joes, at least when it perceives threats. yet this voice, however loud, is not articulate in any way that gives him specific details (no picture nor sound) about potential threats. and while his spider-sense is often described as alerting him to danger before it happens, in practice that danger always seems to be already in motion, so we can rule out violations of causality.

    so combined with a heightened sensory acuity, B&B’s model could offer one tested theoretical basis for SM’s mysterious disaster-avoidance system.
