Scholarpedia defines the “hard problem” of consciousness this way:
The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?
. . . and characterizes the “easy problems” this way:
The hard problem contrasts with so-called easy problems, such as explaining how the brain integrates information, categorizes and discriminates environmental stimuli, or focuses attention. Such phenomena are functionally definable. That is, roughly put, they are definable in terms of what they allow a subject to do. So, for example, if mechanisms that explain how the brain integrates information are discovered, then the first of the easy problems listed would be solved. The same point applies to all other easy problems: they concern specifying mechanisms that explain how functions are performed. For the easy problems, once the relevant mechanisms are well understood, there is little or no explanatory work left to do.
Here’s a new article in Aeon, brought to my attention by reader Rick, that tries to show that this distinction is not fruitful, and that there’s a third way: the “real problem” of consciousness. The author, Anil Seth, is professor of Cognitive and Computational Neuroscience at the University of Sussex as well as “Co-Director (with Prof. Hugo Critchley) of the Sackler Centre for Consciousness Science and Editor-in-Chief of Neuroscience of Consciousness.”
After I read the article three times, I decided two things:
a.) There is no “hard problem of consciousness”. Once you connect empirically studied brain functions with qualia as given by self-report, you’ve solved the only meaningful problem. The “hard problem” is not a scientific problem, but a metaphysical problem.
b.) Seth’s suggestion—that consciousness is the same thing as the brain’s evolved method of checking its a priori models of the world against sensory input from the outside—sounds good, but I’m not sure how it produces consciousness.
But I’m getting ahead of myself. Let’s look at Seth’s distinction between the “hard” and “easy” problem, and his posing of what he calls the “real problem of consciousness”:
Let’s begin with David Chalmers’s influential distinction, inherited from Descartes, between the ‘easy problem’ and the ‘hard problem’. The ‘easy problem’ is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The ‘hard’ problem is to understand why and how any of this should be associated with consciousness at all: why aren’t we just robots, or philosophical zombies, without any inner universe? It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.
In other words, the easy problem is establishing what parts of the brain are responsible not just for consciousness but also for the content of consciousness: our feeling of “I-ness” and, more important, qualia: the way we have sensations, like why the sensation of blue is different from that of red, or how we can tell the scent of a lemon from that of mint.
But this is a correlational approach, and as such is denigrated by metaphysical “researchers” such as Philip Goff, who say that even if we can know every neurological detail from the sniffing of a lemon to the perception of the scent of a lemon, that doesn’t explain why we have these sensations. In other words, claim people like Goff, we could understand everything connected with the perception of the color red—every neural and physiological detail that makes it different from the perception of the color yellow—and yet not understand why red looks red and yellow looks yellow. (Goff’s solution is just to fob the hard problem off on lower levels: every bit of the universe, like atoms and molecules, has some form of consciousness, ergo in complex organisms these rudimentary bits somehow combine to get the “higher” consciousness that creates the hard problem. This is the nonsense called “panpsychism.”)
The more I think about it, the more I think there IS no hard problem. That is, as Patricia Churchland and the more sensible neurophilosophers say, establishing the correlation does solve the hard problem. Red looks red because there is a certain sequence of events that makes things look red to people. And we can, in principle, figure that out using science: a combination of neurophysiology and self-report (“I see red”). To ask the further “hard” question—“but WHY do things look red?”—simply has the answer “because that’s the way it is.” To further query, as the metaphysical neurophilosophers do, “but why do we perceive and feel things at all?” is not a “how” question but a “why” question. Seth, though he doesn’t dwell on this, has one “why” answer: “because natural selection favored the advent of consciousness.” In other words, consciousness would be favored by selection because it gives us survival and reproductive advantages. Seth’s theory is implicitly evolutionary, as you’ll see. And the evolutionary answer is the only sensible answer to the “why” question.
Further, an evolutionary answer is testable in principle, but of course isn’t a satisfactory answer to people like Goff. The metaphysical types want to know why we have sensations instead of being insensate zombies, and why those sensations are the way they are. To me, the combination of proximate explanations (the string of events that cause us to perceive qualia) and “ultimate” explanations (evolution led to consciousness) is the answer. There is no answer to Goff’s “why” besides his ludicrous and untestable hypothesis of panpsychism—an approach that Seth rejects in his article.
Here’s what Seth sees as the “real problem” of consciousness, and I agree with him.
But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem). (People familiar with ‘neurophenomenology’ will see some similarities with this way of putting things – but there are differences too, as we will see.)
There are some historical parallels for this approach, for example in the study of life. Once, biochemists doubted that biological mechanisms could ever explain the property of being alive. Today, although our understanding remains incomplete, this initial sense of mystery has largely dissolved. Biologists have simply gotten on with the business of explaining the various properties of living systems in terms of underlying mechanisms: metabolism, homeostasis, reproduction and so on. An important lesson here is that life is not ‘one thing’ – rather, it has many potentially separable aspects.
This makes sense, except it is possible to contemplate the evolutionary origin of consciousness (“its existence”), even if any solution is very hard to test. To me, the “real problem”—the correlational problem—is the only meaningful problem of how consciousness works, while the evolutionary problem deals with how it originated. All else is metaphysics. Further, no person I know of, least of all me, pretends consciousness doesn’t exist. Dennett, for example, says it’s real, and he’s the Boss.
Seth draws useful distinctions between various kinds of consciousness: the level of consciousness, the content of consciousness (qualia), and the “conscious self”: the experience of being a unitary and sentient organism. He argues, and I agree, that each of these can be (and some have been) investigated scientifically, and we’re beginning to get answers. We’re starting, for instance, to find the neural correlates of the separate aspects of consciousness. Seth also parses the different ways we perceive “self”: the “perspectival self”, the “volitional self”, the “narrative self” and the “social self”. I’ll leave you to read about them, but again, in principle, these can be empirically investigated, and Seth describes some studies. Neither consciousness nor its components are unitary phenomena with a single correlational solution.
What I find really interesting about Seth’s article is his theory about where consciousness comes from. He offers a mechanical (correlational) solution, but it’s also implicitly evolutionary. It’s a kind of Bayesian hypothesis, in which the brain makes models or predictions about the world, and then these are refined through sensory input. Here’s how he describes it:
The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
. . . To answer this, we can appeal to the same process that underlies other forms of perception. The brain makes its ‘best guess’, based on its prior beliefs or expectations, and the available sensory data. In this case, the relevant sensory data include signals specific to the body, as well as the classic senses such as vision and touch. These bodily senses include proprioception, which signals the body’s configuration in space, and interoception, which involves a raft of inputs that convey information from inside the body, such as blood pressure, gastric tension, heartbeat and so on. The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.
. . . These findings take us all the way back to Descartes. Instead of ‘I think therefore I am’ we can say: ‘I predict (myself) therefore I am.’ The specific experience of being you (or me) is nothing more than the brain’s best guess of the causes of self-related sensory signals.
. . . It now seems to me that fundamental aspects of our experiences of conscious selfhood might depend on control-oriented predictive perception of our messy physiology, of our animal blood and guts. We are conscious selves because we too are beast machines – self-sustaining flesh-bags that care about their own persistence.
In other words, consciousness is the set of brain processes that makes guesses about the world and then refines them against what our senses take in from the real world. And this, of course, is adaptive in both a physiological and evolutionary sense. It’s useful to know, for instance, when your arm is being shaken: whether you have some condition that makes it shake, or whether an enemy or a predator has hold of it. This could, in principle, explain the feeling one has of being a “self.” But testing the evolutionary hypotheses seems very hard, if not impossible.
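The guess-and-refine loop Seth describes can be illustrated with a toy sketch of predictive coding. This is my own minimal illustration, not code from Seth’s article: a single scalar “prediction” stands in for the brain’s model, and only the prediction error (the difference between expectation and sensation) drives updates, which is the core of the Helmholtzian inversion. All names and parameters here are made up for the example.

```python
def predictive_update(prediction, sensory_signal, learning_rate=0.3):
    """One step of prediction-error minimisation: only the error
    flows 'bottom-up'; the top-down model is nudged toward the data."""
    error = sensory_signal - prediction
    return prediction + learning_rate * error

def perceive(prior, signals, learning_rate=0.3):
    """Run the update over a stream of noisy sensory samples,
    returning the brain's final 'best guess' about their cause."""
    estimate = prior
    for s in signals:
        estimate = predictive_update(estimate, s, learning_rate)
    return estimate

# The 'hypothesis' starts far from the true cause of the signals (10.0)
# but is progressively reined in by the sensory evidence.
estimate = perceive(prior=0.0, signals=[10.2, 9.8, 10.1, 9.9, 10.0])
```

The point of the sketch is also the point of my worry below: nothing in this loop requires experience. It is just error-driven updating, which a thermostat or a Kalman filter does equally well.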
Still, there seems to be a missing link in Seth’s hypothesis. Isn’t it possible to do this same kind of Bayesian perception and action without any “consciousness”? If your arm is shaking, there could be a computer program that determines whether the shaking is endogenous or caused by another computer or object. Why can’t the entire system of self-prediction and refinement be done without any consciousness at all? Couldn’t a non-conscious computer do exactly these things? It, too, could have a sense of “self,” but one that is programmed rather than “conscious.”
Perhaps I’m not understanding what Seth is saying, or why it’s a solution to the issue of “qualia”. He does give some facts that he considers tests of his hypothesis, like the existence of hallucinations, which he sees as the brain’s predictions unconstrained by input from the outside world. But those aren’t definitive tests.
At any rate, Seth’s solution, be it right or wrong, is still a “correlational” solution: how the brain creates the sensation of consciousness. (It’s also evolutionary in that it suggests how natural selection gave rise to consciousness.) But what it is not is metaphysical. If anything seems to be true, it is that the phenomenon of consciousness will be solved, if it is solved, by a naturalistic program. Philosophy has little to add save important guidance about how to think clearly. And metaphysics like panpsychism has nothing to offer.