“Because it lags slightly behind reality, consciousness can “anticipate” future events that haven’t yet entered awareness, but have been encoded subconsciously, allowing for an illusion in which the experienced future alters the experienced past.” —Adam Bear
In discussions about our idea of “agency” (or, if you will, “choice” or “free will”), I’ve described experiments showing that you can, to a substantial degree, predict which of two possible actions someone will take up to 7 seconds before they report having made a conscious choice. This has now been shown in several experiments, and it suggests that your brain makes “choices” for you before you’re conscious of having made them. That comports with determinism: the view that our feeling of free agency is illusory, because at any moment when we face a “choice” there is only one choice we can make: the one the laws of physics dictate, acting through our genes and environmental influences. Yet we feel otherwise, and strongly so.
That’s really not much of a surprise, though those who believe in libertarian free will, or even in compatibilism (the view that free will is compatible with physical determinism), don’t like those experiments.
What is surprising, though, is the suggestion that your consciousness of having made a choice comes not only after your brain has made the decision, but after you’ve actually made the choice.
That’s really not all that surprising, though—not if the notion of having chosen something freely is a confabulation: a part of our neuronal circuitry, perhaps evolved, that makes us feel as if we’ve chosen something when the result of the “choice”—the action—has already occurred. If that’s the case, it seems like a spooky reversal of time. But it’s really not. It’s just our brains fooling us by giving us an experience, or implanting a “memory”, that is in an incorrect time sequence with respect to an action.
This is one of the conclusions you can draw (there are others; see below) from a nice new paper in Psychological Science by Adam Bear and Paul Bloom at Yale (see reference below; free access). Bear has also written a very good and comprehensible summary of the results at Scientific American, “What neuroscience says about free will.” (Answer: you don’t have it, at least in the libertarian form.)
The experiments were clever, and came from the hypothesis that if conscious choice was illusory, you could think you’d made a choice after the choice was actually made and acted upon.
The first thing the authors did was expose the subjects (who had been trained) to five randomly placed circles on a computer screen, asking them to choose one circle quickly. Then, after intervals ranging from 50 to 1000 milliseconds (0.05 to 1 second), the computer turned one of the circles, chosen at random, red.
The subjects were then asked if their chosen circle turned red. They had three choices: “yes”, “no” and “I didn’t have time to choose before the circle turned red”, all indicated by pressing one of three keys on a keyboard.
Without any “postdictive bias” of the kind I described above, one would expect “yes” to be answered about 20% of the time when subjects reported that they did make a choice, because the circle that turned red was one of five chosen at random by the computer. Instead, regardless of the interval before the circle turned red, the probability that subjects said “yes, my chosen circle turned red” was always higher than 20%. That’s shown in the graph below, which plots the probability of a “yes” answer against the interval after which the circle turned red.
What’s important about this plot is not only that the probability was higher than 20%, which means that people were saying that their “choice” turned red more often than they should, but that that probability was higher when the interval between the start of the experiment and the circle’s turning red was shorter. That is, people’s bias—that they had “chosen” the circle that later turned red—was higher when they had less time to “make” a choice:
That makes sense, for according to the authors’ model of choice confabulation (below), your memory bias would be greatest for the shortest delays between the start of the experiment and the circle’s turning red. Confabulation is likely limited to a short window of time, simply because it’s harder to reverse your experience or rewrite history after a longer period. The authors call this “the window of unconscious processing”. Note that you wouldn’t expect a negative relationship of the sort shown above if people were simply lying about whether they chose the red circle, as such lying shouldn’t show any time dependence.
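The baseline arithmetic and the model’s predicted time dependence can be illustrated with a toy simulation (my own sketch, not the authors’ code; the 400 ms window and the confabulation probability are invented parameters for illustration only):

```python
import random

def simulate(delay_ms, trials=100_000, window_ms=400, n_circles=5):
    """Toy model of postdictive bias. A subject picks one of n_circles;
    the computer then turns a random circle red after delay_ms. If the
    reveal falls inside a hypothetical 'window of unconscious processing',
    the subject sometimes confabulates having chosen the red circle.
    (window_ms and the 0.5 scaling are invented, not from the paper.)"""
    yes = 0
    for _ in range(trials):
        choice = random.randrange(n_circles)
        red = random.randrange(n_circles)
        # Confabulation probability decays to zero as the delay
        # grows past the hypothetical window.
        p_confab = max(0.0, 1 - delay_ms / window_ms) * 0.5
        if red == choice or random.random() < p_confab:
            yes += 1
    return yes / trials

# Short delays inflate the "yes" rate well above the 20% chance
# baseline; at 1000 ms the window has closed and chance is recovered.
for delay in (50, 200, 400, 1000):
    print(delay, round(simulate(delay), 3))
```

With an unbiased reporter the “yes” rate sits at chance (1/5 = 20%) for every delay; any mechanism that rewrites short-delay trials produces exactly the negative time dependence in the authors’ plot.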
There are other controls described as well, like seeing if the degree of confidence a subject had in his/her choice affected this relationship (it didn’t); but you can read the short paper yourself.
Here’s the authors’ model (be sure to read the caption):
The authors thought of one problem with the experiment above. If the subjects were confused about whether they had chosen the circle that turned red, they might simply randomly press the “yes” or “no” button. That would drive the “yes” answers, expected to be 20%, towards 50%, giving the higher-than-expected “yes” rate shown above.
To deal with this, they ran an experiment in which they showed TWO randomly positioned circles on a screen, each with a color drawn from an array of six. They told the subjects to choose one color. They then added a third circle between the two, whose color was randomly chosen from the two initially displayed. And, as in the five-circle experiment, the third circle appeared at intervals ranging between 0.05 and 1 second. This way a random press of “yes” or “no”—“I chose the right color” or “I chose the wrong color”, respectively—due to confusion would not bias the results: with only two circles, random presses would just push the probabilities of “yes” and “no” toward 50%, which is what they should be anyway.
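Why the two-circle design removes the confound can be checked with a bit of simulated arithmetic (again my own sketch, with an invented fraction of confused guessers): mixing coin-flip guesses into honest reports drags a 20% baseline upward, but leaves a 50% baseline exactly where it is.

```python
import random

def yes_rate(n_options, p_guess, trials=100_000):
    """Expected rate of 'yes, mine matched' reports when a fraction
    p_guess of responses are random guesses. A toy model of the
    confound, not the authors' analysis; p_guess is hypothetical."""
    yes = 0
    for _ in range(trials):
        # True match happens 1/n_options of the time.
        match = random.randrange(n_options) == random.randrange(n_options)
        if random.random() < p_guess:
            yes += random.random() < 0.5  # confused subject flips a coin
        else:
            yes += match                  # honest report
    return yes / trials
```

With five options and, say, 30% guessers, the “yes” rate lands between 20% and 50%, mimicking a bias; with two options it stays at 50%, so any excess over 50% in the two-circle data can’t be explained by random pressing.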
And again, the same bias appeared: subjects reported that they had chosen the color of the circle that appeared later with a probability higher than 50%: as high as 63% at the shortest intervals. And again, the shorter the interval, the greater the bias in the self-reports. Here’s the graph of the probability of saying “yes” against the time delay. The overall pattern is statistically significant (p = 0.002):
What both of these experiments seem to show is that, as Bear wrote in the Scientific American piece, “Perhaps in the very moments that we experience a choice, our minds are rewriting history, fooling us into thinking that this choice—that was actually completed after its consequences were subconsciously perceived—was a choice that we had made all along.” The paper with Bloom cites earlier experiments that also support this result. Just as we now realize that choices can be made by the brain before we become conscious of them, we have to face the possibility that choices may actually be carried out before we become conscious of having made them; and yet we feel that the sequence was the opposite of what really happened.
Three issues remain:
- How common is this phenomenon? This is the first experiment I know of that tested the “confabulatory choice” idea, and we clearly need more and differently designed studies to test the robustness of the conclusions.
- Could there be another explanation? Yes; the authors mention at least one. Their own explanation is that having made a choice subliminally biases you into thinking you made it before you actually did—indeed, after the choice was enacted. The alternative is that you experience the choice at the correct moment (i.e., after the circle had turned red), but that the time of that choice is “immediately afterward encoded into memory incorrectly, which subsequently biased their reports of what they had chosen.” Thus we have a memory-revision versus a misperception hypothesis. To me this is a distinction without much difference, for both lead to the same phenomenon: we think we make conscious choices not only after they’re unconsciously made by our brains, but also after we actually carry out the actions. I should add that the authors give three other limitations of their conclusions, but they’re not the kind that invalidate the results; you can see them by reading the paper.
- Why does the brain work this way? Under determinism, there is no problem with us becoming conscious of having made a choice only after our brains have made it for us. Nor is there a naturalistic problem in accepting that, in short intervals, our actions could actually precede our having the sense of “choosing” to do them. What we don’t understand is why we have the illusion of being conscious agents: an illusion of having made the choice at the moment our brains made it, and of having performed an action only after we’re conscious of having decided to do it. All the experiments suggest that these “feelings” don’t represent the real temporal sequence of decision-making.
As I mention in my lecture on free will, the illusion of agency could be either an epiphenomenon of our complex brains, or it might have evolved, for various reasons. One is that we would leave more copies of our genes if we held others and ourselves responsible for making choices—if we deceived ourselves into thinking that we could have done otherwise. This could lead to a schema of reward and punishment that allows one to function better in a small social group. But that’s just a guess, of course. In his Scientific American piece, Bear speculates along these lines, and I’ll leave the last word to him:
Perhaps the illusion can simply be explained by appeal to limits in the brain’s perceptual processing, which only messes up at the very short time scales measured in our (or similar) experiments and which are unlikely to affect us in the real world.
A more speculative possibility is that our minds are designed to distort our perception of choice and that this distortion is an important feature (not simply a bug) of our cognitive machinery. For example, if the experience of choice is a kind of causal inference, as Wegner and Wheatley suggest, then swapping the order of choice and action in conscious awareness may aid in the understanding that we are physical beings who can produce effects out in the world. More broadly, this illusion may be central to developing a belief in free will and, in turn, motivating punishment.
The unstated implication here is that a belief in free will and motivation for punishment (and I’ll add “reward”) leads you to leave more copies of your genes than do individuals without such beliefs and motivations.
Bear, A., and P. Bloom. 2016. A simple task uncovers a postdictive illusion of choice. Psychological Science. Published online before print April 28, 2016. doi:10.1177/0956797616641943