Michael Graziano is a neuroscientist and professor of psychology at Princeton University who, on the side, writes novels for both children and adults. His speciality is the neural and evolutionary basis of consciousness, about which he’s written several pieces at The Atlantic.
His June 6 piece, “A new theory explains how consciousness evolved”, attempts to trace how consciousness (which I take to be the phenomenon of self-awareness and agency) could arise through evolution. This is a very good question, although resolving it will ultimately require understanding the “hard problem” of consciousness—the very fact that we are self-aware and see ourselves as autonomous beings. We’re a long way from understanding that, though Graziano is working on the neuroscience as well as the evolution.
In the meantime, he’s proposed what he calls the “Attention Schema Theory,” or AST, which is a step-by-step tracing of how consciousness might have arisen via evolutionary changes in neuronal wiring. To do this, as Darwin did when trying to understand the stepwise evolution of the eye, you need to posit an adaptive advantage to each step that leads from primitive neuronal stimuli (like the “knee reflex”) to full-fledged consciousness of the human sort.
That, of course, is difficult. And we’re not even sure if the neuronal configurations that produced consciousness were really adaptive for that reason—that is, whether the phenomenon of consciousness was something that gave its early possessors a reproductive advantage over less conscious individuals. It’s possible that consciousness is simply an epiphenomenon—something that emerges when one’s brain has evolved to a certain level of complexity. If that were the case, we wouldn’t really need to explain the adaptive significance of consciousness itself, but only of the neural network that produced it as a byproduct.
Now I haven’t read Graziano’s scholarly publications about the AST; all I know is how he describes it in the Atlantic piece. But, as I’ve already said, if you’re describing some complex science in a popular article, at least the outline of that science should be comprehensible and make sense. And that’s what I find missing in the Atlantic article. Graziano lucidly describes the steps by which a lineage could become more complex in its sensory system, with each step possibly enhancing reproduction. But when he gets to the issue of consciousness itself—the phenomenon of self-awareness—he jumps the shark, or, rather, dodges the problem.
Here are the steps he sees in the AST, and when each step might have occurred in evolution.
1.) Simple acquisition of information from the environment, whether through receptor molecules or, later, neurons. This could have happened very early; after all, bacteria are able to detect gradients of light and chemicals, and they were around 3.5 billion years ago.
2.) “Selective signal enhancement,” the neuronal ability to pay attention to some environmental information at the expense of other information. If your neuronal pathways can compete, with the “winning signals” boosting your survival and reproduction, this kind of enhancement will be favored by selection. This will confer on animals the ability to adjudicate conflicting or competing signals, paying attention to the most important ones. Since arthropods but not simpler invertebrates can do this, Graziano suggests that this ability arose between 600 and 700 million years ago.
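To see how mechanical this step is, here’s a minimal sketch, in ordinary code, of “selective signal enhancement” as a winner-take-all competition among channels. The channel names and numbers are invented for illustration; nothing below comes from Graziano’s own models.

```python
# Toy winner-take-all selection: competing sensory channels, the strongest
# input is boosted and the rest are suppressed before further processing.

def select_signal(signals):
    """Return the winning channel and the re-weighted signal strengths."""
    winner = max(signals, key=signals.get)               # strongest input wins
    enhanced = {name: (level * 1.5 if name == winner else level * 0.2)
                for name, level in signals.items()}      # boost winner, damp losers
    return winner, enhanced

# Hypothetical inputs a simple nervous system might have to adjudicate.
inputs = {"light_gradient": 0.4, "food_odor": 0.9, "vibration": 0.6}
winner, enhanced = select_signal(inputs)
print(winner)    # food_odor
print(enhanced)  # the winning signal now dominates downstream processing
```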
3.) A “centralized controller for attention” that could direct one’s “overt attention” among inputs from several different sensory systems (for example, you might go after the smell of food rather than toward the darkness, because at that moment it’s better to get food than to hide). This, says Graziano, is handled by the part of the brain called the tectum, which evolved about 520 million years ago.
The tectum, Graziano adds, works by forming an “internal model” of all the different sensory inputs. As he says,
The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
I’m still not sure what this “internal model” is: the very term flirts with anthropomorphism. If it’s simply a neuronal system that prioritizes signals and feeds environmental information to the brain in an adaptive way, can we call that a “model” of anything? The use of that word, “model,” already implies that some kind of rudimentary consciousness is evolving, though of course such a “model” is perfectly capable of being programmed into a computer that lacks any consciousness.
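For what it’s worth, an “internal model” in the engineer’s sense needn’t imply any awareness at all. Here’s a minimal sketch, with invented numbers, of the predict-and-compare loop Graziano describes: issue an eye-movement command, predict how the image should shift across the retina, and check the prediction against the actual input. It runs happily with no consciousness anywhere.

```python
# Toy forward (internal) model: predict the sensory consequence of a motor
# command, then compare the prediction with what the sensors actually report.

def predict_retinal_shift(eye_move_deg):
    """If the eyes rotate right by N degrees, the image should shift left by N."""
    return -eye_move_deg

def control_step(eye_move_deg, observed_shift_deg, tolerance=0.5):
    """Return whether the movement is going as planned, plus the prediction error."""
    predicted = predict_retinal_shift(eye_move_deg)
    error = observed_shift_deg - predicted
    return abs(error) <= tolerance, error

ok, error = control_step(eye_move_deg=10.0, observed_shift_deg=-9.8)
print(ok, error)   # True, ~0.2: the movement is proceeding as predicted
```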
4.) A mechanism for paying “covert” as well as “overt” attention. Covert attention is attention we deploy inside our heads without overtly orienting toward its target: an example is focusing on one nearby conversation and ignoring extraneous sounds, without ever turning toward the speakers. Of course the very concept of “paying selective attention” sort of implies that we have some kind of consciousness, for who is doing the “paying”?
The part of the brain that controls covert attention, says Graziano, is the cortex. That evolved with the reptiles, about 300 million years ago.
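Again, the bare mechanism is easy to mimic in an unconscious program. Here’s a toy sketch (my illustration, not anything from Graziano’s papers) of covert attention: no sensor moves, but one input stream is amplified and the others attenuated before further processing, as when you follow one conversation at a noisy party.

```python
# Toy covert attention: the "sensors" stay put; one stream is simply
# amplified and the rest attenuated, with no overt orienting at all.

def covert_attend(streams, target, gain=3.0, suppression=0.3):
    """Re-weight input streams in favor of the covertly attended target."""
    return {name: level * (gain if name == target else suppression)
            for name, level in streams.items()}

sounds = {"conversation_A": 0.5, "conversation_B": 0.5, "traffic": 0.8}
print(covert_attend(sounds, target="conversation_A"))
# conversation_A now dominates, though nothing physically turned toward it
```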
And here’s where the problem with the article lies, for Graziano subtly, almost undetectably, says that with this innovation we’ve finally achieved consciousness. His argument is a bit tortuous, though. First he gives a thought experiment implying that cortex = consciousness, then undercuts that thought experiment by saying that it doesn’t really explain consciousness. He then reverses direction again, bringing consciousness back to center stage. It’s all very confusing, at least to me.
Here’s the part where consciousness comes into his piece. Graziano starts with crocodiles, which have a selectively attentive cortex, and describes a Gedankenexperiment that explicitly suggests consciousness:
Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”
But then Graziano takes it back, for he realizes that selective attention itself could be a property of neuronal networks, and doesn’t imply anything about the self-awareness and sense of “I” and “agency” that we call consciousness. (Note that the statement “I’ve got something intangible inside me” is an explicitly conscious thought.) But in denying the intangibility of consciousness, he simultaneously affirms its presence. Here’s where the rabbit comes out of the hat:
The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.
But an “attention schema” isn’t consciousness, not in the way that we think of it. Nevertheless, Graziano blithely assumes that he’s given an adaptive scenario for the evolution of consciousness, an evolution that’s only enhanced because you also have to model the consciousness of others—what Dan Dennett calls “the intentional stance.” Graziano:
When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.
So here he’s finessed the difficulty of self-awareness by simply asserting that once you have mechanisms for providing both covert and overt attention, you have consciousness. I don’t agree (though of course I’ve read only this article). Why couldn’t a computer do exactly the same things, but without consciousness? In fact, computers already do those things, as in self-driving cars.
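Indeed, you can write a crude “attention schema” of the kind Graziano describes in a few lines of ordinary code (the wording and representation below are mine, invented for illustration). The program keeps a vague, detail-free model of what it is attending to and can even emit a canned report about possessing “something intangible,” yet nobody would call it conscious.

```python
# Toy attention schema: a crude, detail-free self-model of the system's own
# attentional state, plus a canned verbal report built from that model.

class AttentionSchema:
    def __init__(self):
        self.current_target = None   # the only thing the self-model records

    def attend(self, target):
        # The actual selection machinery (neurons, synapses, signals) is not
        # represented here at all; the schema records only the vague outcome.
        self.current_target = target

    def report(self):
        if self.current_target is None:
            return "Nothing has my attention."
        return (f"Something intangible in me has taken hold of "
                f"'{self.current_target}'; it lets me understand and respond to it.")

croc = AttentionSchema()
croc.attend("wading bird")
print(croc.report())   # a "semi-magical self-description", with no awareness anywhere
```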
Graziano goes on to say that figuring out what other members of your species will do, based on the notion that they have consciousness, is itself a sign of consciousness. And again I don’t agree. A computer can adopt an “intentional stance,” using a program and behavioral cues to direct its own behavior, without consciousness. The “hard problem,” that of self-awareness, has not been solved but circumvented: consciousness is simply assumed, without good reason.
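The same goes for modeling others: a program can attribute an attentional state to another agent from a behavioral cue (gaze direction, in this invented example) and use that attribution to pick its own behavior, all without a flicker of consciousness.

```python
# Toy "intentional stance": infer what another agent is attending to from a
# single behavioral cue, then choose an action based on that inference.

def infer_target(gaze_angle_deg, objects):
    """Guess the other agent's attentional target from its gaze direction."""
    return min(objects, key=lambda name: abs(objects[name] - gaze_angle_deg))

scene = {"food": 10.0, "rival": 80.0, "shelter": 150.0}   # bearings in degrees
rivals_gaze = 75.0
target = infer_target(rivals_gaze, scene)
action = "guard the food" if target == "food" else "keep eating"
print(target, "->", action)   # rival -> keep eating
```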
Graziano finishes by talking about semantic language, something that’s unique to humans and surely does require consciousness (I think! Maybe I’m wrong!). But that’s irrelevant, for the evolution of consciousness has already been assumed.
I admire Graziano for realizing that if consciousness, which is closely connected with our sense of agency and libertarian “free will”, evolved, there may be an adaptive explanation for it. He doesn’t consider that consciousness may be an epiphenomenon of neural complexity, which is possible.
I myself think consciousness and agency are indeed evolved traits, traits whose neuronal and evolutionary bases may elude our understanding for centuries. I take a purely evolutionary view rather than a neuroscientific one, for I’m not a neuroscientist. And using evolution alone, one can think of several reasons why consciousness and agency might have been favored by selection. I won’t reiterate them here, as I discuss them at the end of my “free will” lectures, which you can find on the Internet. And I always say that the problem of agency is unsolved. It still is, as is the problem of consciousness.
Graziano is making progress with the neuroscience, but the AST is still a long way from being a good theory of how consciousness evolved.