Almost all of us agree that we’re meat automatons in the sense that all our actions are predetermined by the laws of physics as mediated through our genes and environments and expressed in brains. We differ in how we interpret that fact vis-à-vis “free will” and “moral responsibility,” though many of us seem to think that the truth of determinism should be quietly shelved for the good of the masses.
Michael Gazzaniga is a psychology professor at the University of California at Santa Barbara, and director of the SAGE Center for the Study of Mind. He’s written a gazillion books and many articles on cognitive neuroscience, and has a special interest in cognitive functions of split-brain patients.
And over at Big Think, he takes on the idea of free will in a 3.25-minute talk called “Brains are automatic, but people are free.” The title shows that he’s clearly a compatibilist, but what form does he accept? I can’t embed the talk here, but watch it. I have excerpted the crucial bit (most of the presentation) below.
“If you think about it this way, if you are a Martian coming by earth and looking at all these humans and then looking at how they work you wouldn’t—it would never dawn on you to say, ‘Well, now, this thing needs free will!’ What are you talking about?
What we’re knowing is, we’re learning and appreciating the ways in which we produce our perception, our cognition, our consciousness and all the rest of that. And why do you want something in there that seems to be independent of all that?
The central part of free will that people want to hold on to is the sense that that therefore makes you responsible for your actions, so it’s the idea of personal responsibility. And I think that’s very important and I don’t think that all this mechanistic work on the brain in any way threatens that.
We learn that responsibility is to be understood at the social level—the deal of rules that we work out living together. So the metaphor I like to use is cars and traffic. We can study cars and all their physical relationships and know exactly how that works; it in no way prepares us to understand traffic, when they all get together and start interacting: that’s another level of organization and description of these elements interacting.
So the same is it with brains. We can understand brains to the nth degree, and that’s fine and that’s what we’re doing, but it’s not going to, in any way, interfere with the fact that taking responsibility in a social network is done at that level.
So the way I sum it up is that brains are automatic but people are free—because people are joining the social group and in that group are laws to live by. And it’s interesting that every social network—whether it’s artifactual, Internet, or people—accountability is essential or the whole thing just falls apart. You gotta have it.
No one has anything to worry about, I don’t think, from science in terms of whatever we discover about our nature, and however good we get at describing it: it’s not going to impact that essential value that everybody has to be held accountable, because it’s at a totally different level. And it’s in the social level, which is so crucial and important for the human race.”
I agree with Gazzaniga’s determinism, which he expresses at the outset, but I don’t have any idea what he means by saying “people are free.” That appears to be a statement that sounds good but is manufactured post facto to put a shiny patina on “accountability” (notice he doesn’t use the term “moral responsibility”, though he uses “responsibility”). Or maybe he just wants to save the term “free”, as in “free will.”
And yes, for our society to function smoothly we need to punish criminals to prevent further malfeasance and to set examples for other people. I’m with him 100% here.
We diverge in two places. First, apparently contra Gazzaniga, I do think science will impact the nature of accountability: how people are treated when they perform bad actions. If we truly believe, as Gazzaniga does, that we don’t have a “free choice” in what we do, doesn’t that have implications for punishment? Surely it does, for the legal system already gives special treatment to those whose actions seem to have been compelled by things over which they have “no control,” like organic disease or mental illness. If we’re also compelled by things whose etiology is less clear, wouldn’t we want to know that, too? And wouldn’t that have some effect on the way we try to punish or rehabilitate people? I find it hard to conceive that the answer is “no.”
Second, why on Earth does he use the word “free”? Why are people “free” if their actions are determined? The phrase “Brains are automatic, but people are free” may sound appealing, but it seems to lack content. We can consider ourselves free if doing so somehow helps us psychologically in assigning responsibility, but we can also assign responsibility if we consider ourselves “unfree” in the deterministic sense. If you committed a crime, you are responsible for that crime, whether or not you had a choice to do it. You have to be punished for the protection of society and to deter both yourself and others.
Responsibility isn’t threatened by science, but moral responsibility is. If people want to hold onto the latter, then they are indeed threatened by science.
And it’s time to get rid of the term “free will.” “Responsibility for an act” seems an adequate replacement, one less freighted with historical baggage.