Anil Seth on the “real” problem of consciousness—and his hypothesis

April 5, 2020 • 10:00 am

Scholarpedia defines the “hard problem” of consciousness this way:

The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?

. . . and characterizes the “easy problems” this way:

The hard problem contrasts with so-called easy problems, such as explaining how the brain integrates information, categorizes and discriminates environmental stimuli, or focuses attention. Such phenomena are functionally definable. That is, roughly put, they are definable in terms of what they allow a subject to do. So, for example, if mechanisms that explain how the brain integrates information are discovered, then the first of the easy problems listed would be solved. The same point applies to all other easy problems: they concern specifying mechanisms that explain how functions are performed. For the easy problems, once the relevant mechanisms are well understood, there is little or no explanatory work left to do.

Here’s a new article in Aeon, brought to my attention by reader Rick, that tries to show that this distinction is not fruitful, and that there’s a third way: the “real problem” of consciousness. The author, Anil Seth, is professor of Cognitive and Computational Neuroscience at the University of Sussex as well as “Co-Director (with Prof. Hugo Critchley) of the Sackler Centre for Consciousness Science and Editor-in-Chief of Neuroscience of Consciousness.”

After I read the article three times, I decided two things:

a.) There is no “hard problem of consciousness”. Once you connect empirically studied brain functions with qualia as given by self-report, you’ve solved the only meaningful problem. The “hard problem” is not a scientific problem, but a metaphysical problem.

b.) Seth’s suggestion, that consciousness is the same thing as the brain’s evolved method of checking its a priori models of the world by testing them against sensory input from the outside, sounds good, but I’m not sure how it produces consciousness.

But I’m getting ahead of myself. Let’s look at Seth’s distinction between the “hard” and “easy” problem, and his posing of what he calls the “real problem of consciousness”:

Let’s begin with David Chalmers’s influential distinction, inherited from Descartes, between the ‘easy problem’ and the ‘hard problem’. The ‘easy problem’ is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The ‘hard’ problem is to understand why and how any of this should be associated with consciousness at all: why aren’t we just robots, or philosophical zombies, without any inner universe? It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

In other words, the easy problem is establishing what parts of the brain are responsible not just for consciousness but also for its content: our feeling of “I-ness” and, more important, qualia (the way we have sensations: why the sensation of blue is different from that of red, or how we can tell the scent of a lemon from that of mint).

But this is a correlational approach, and as such is denigrated by metaphysical “researchers” such as Philip Goff, who say that even if we can know every neurological detail from the sniffing of a lemon to the perception of the scent of a lemon, that doesn’t explain why we have these sensations. In other words, claim people like Goff, we could understand everything connected with the perception of the color red—every neural and physiological detail that makes it different from the perception of the color yellow—and yet not understand why red looks red and yellow looks yellow. (Goff’s solution is just to fob the hard problem off on lower levels: every bit of the universe, like atoms and molecules, has some form of consciousness, ergo in complex organisms these rudimentary bits somehow combine to get the “higher” consciousness that creates the hard problem. This is the nonsense called “panpsychism.”)

The more I think about it, the more I think there IS no hard problem. That is, as Patricia Churchland and the more sensible neurophilosophers say, establishing the correlation does solve the hard problem. Red looks red because there is a certain sequence of events that makes things look red to people. And we can, in principle, figure that out using science: a combination of neurophysiology and self-report (“I see red”). The further “hard” question, “but WHY do things look red?”, simply has the answer “because that’s the way it is.” To further query, as the metaphysical neurophilosophers do, “but why do we perceive and feel things at all?” is not a “how” question but a “why” question. Seth, though he doesn’t dwell on this, has one “why” answer: “because natural selection favored the advent of consciousness.” In other words, consciousness would be favored by selection because it gives us survival and reproductive advantages. Seth’s theory is implicitly evolutionary, as you’ll see. And the evolutionary answer is the only sensible answer to the “why” question.

Further, an evolutionary answer is testable in principle, but of course isn’t a satisfactory answer to people like Goff. The metaphysical types want to know why we have sensations instead of being insensate zombies, and why those sensations are like they are. To me, the combination of proximate explanations (the string of events that cause us to perceive qualia) and “ultimate” explanations (evolution led to consciousness) is the answer. There is no answer to Goff’s “why” besides his ludicrous and untestable hypothesis of panpsychism—an approach that Seth rejects in his article.

Here’s what Seth sees as the “real problem” of consciousness, and I agree with him.

But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem). (People familiar with ‘neurophenomenology’ will see some similarities with this way of putting things – but there are differences too, as we will see.)

There are some historical parallels for this approach, for example in the study of life. Once, biochemists doubted that biological mechanisms could ever explain the property of being alive. Today, although our understanding remains incomplete, this initial sense of mystery has largely dissolved. Biologists have simply gotten on with the business of explaining the various properties of living systems in terms of underlying mechanisms: metabolism, homeostasis, reproduction and so on. An important lesson here is that life is not ‘one thing’ – rather, it has many potentially separable aspects.

This makes sense, except it is possible to contemplate the evolutionary origin of consciousness (“its existence”), even if any solution is very hard to test. To me, the “real problem”—the correlational problem—is the only meaningful problem of how consciousness works, while the evolutionary problem deals with how it originated.  All else is metaphysics. Further, no person I know of, least of all me, pretends consciousness doesn’t exist. Dennett, for example, says it’s real, and he’s the Boss.

Seth draws useful distinctions between various kinds of consciousness: the level of consciousness, the content of consciousness (qualia), and the “conscious self”: the experience of being a unitary and sentient organism. He argues, and I agree, that each of these can be (and some have been) investigated scientifically, and we’re beginning to get answers. We’re starting, for instance, to find the neural correlates of the separate aspects of consciousness. Seth also parses the different ways we perceive “self”: the “perspectival self”, the “volitional self”, the “narrative self” and the “social self”. I’ll leave you to read about them, but again, in principle, these can be empirically investigated, and Seth describes some studies. Consciousness, or its components, are not unitary phenomena with a single correlational solution.

What I find really interesting about Seth’s article is his theory about where consciousness comes from. He offers a mechanical (correlational) solution, but it’s also implicitly evolutionary. It’s a kind of Bayesian hypothesis, in which the brain makes models or predictions about the world, and then these are refined through sensory input. Here’s how he describes it:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

. . . To answer this, we can appeal to the same process that underlies other forms of perception. The brain makes its ‘best guess’, based on its prior beliefs or expectations, and the available sensory data. In this case, the relevant sensory data include signals specific to the body, as well as the classic senses such as vision and touch. These bodily senses include proprioception, which signals the body’s configuration in space, and interoception, which involves a raft of inputs that convey information from inside the body, such as blood pressure, gastric tension, heartbeat and so on. The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.

. . . These findings take us all the way back to Descartes. Instead of ‘I think therefore I am’ we can say: ‘I predict (myself) therefore I am.’ The specific experience of being you (or me) is nothing more than the brain’s best guess of the causes of self-related sensory signals.

. . . It now seems to me that fundamental aspects of our experiences of conscious selfhood might depend on control-oriented predictive perception of our messy physiology, of our animal blood and guts. We are conscious selves because we too are beast machines – self-sustaining flesh-bags that care about their own persistence.

In other words, consciousness is the set of brain processes that makes guesses about the world and then refines them against what we take in from the real world via our senses. And this, of course, is adaptive in both a physiological and evolutionary sense. It’s useful to know, for instance, when your arm is being shaken, whether you have some condition that makes it shake or whether an enemy or a predator has hold of it. This could, in principle, explain the feeling one has of being a “self.” But testing the evolutionary hypotheses seems very hard, if not impossible.
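
To make the predictive-processing loop concrete, here is a minimal toy sketch in Python (my own illustration, not Seth’s actual model): a single top-down “hypothesis” about a hidden cause is repeatedly corrected by noisy sensory samples, so that only the prediction error drives updating.

```python
import random

# Toy, single-level version of predictive processing (an illustration only,
# not Seth's model). The system holds a top-down prediction of a hidden
# cause and revises it to minimize bottom-up prediction error.

TRUE_CAUSE = 5.0       # the actual state of the world (e.g., arm position)
NOISE = 0.5            # sensory noise
LEARNING_RATE = 0.1    # how strongly an error revises the prediction

prediction = 0.0       # the brain's prior "best guess"

for step in range(100):
    # Bottom-up: a noisy sensory sample of the true cause.
    sensation = TRUE_CAUSE + random.gauss(0.0, NOISE)

    # Only the mismatch (the prediction error) flows inward.
    error = sensation - prediction

    # Top-down: nudge the hypothesis to shrink the error.
    prediction += LEARNING_RATE * error

print(f"final prediction: {prediction:.2f} (true cause: {TRUE_CAUSE})")
```

Note that the “percept” in this loop is the running value of the prediction, the top-down guess; the senses only ever supply corrections. That is the sense in which perception is a “controlled hallucination.”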

Still, there seems to be a missing link in Seth’s hypothesis. Isn’t it possible to do this same kind of Bayesian perception and action without any “consciousness”? If your arm is shaking, there could be a computer program that determines if it’s endogenous or connected to another computer or object. Why can’t the entire system of self-prediction and refinement be done without any consciousness at all? Couldn’t a non-conscious computer do exactly these things? It, too, could have a sense of “self,” but one that is programmed rather than “conscious.”

Perhaps I’m not understanding what Seth is saying, or why it’s a solution to the issue of “qualia”. He does give some facts that he considers tests of his hypothesis, like the existence of hallucinations, which he sees as the brain’s predictions unconstrained by input from the outside world. But those aren’t definitive tests.

At any rate, Seth’s solution, be it right or wrong, is still a “correlational” solution: how the brain creates the sensation of consciousness. (It’s also evolutionary in that it suggests how natural selection gave rise to consciousness.) But what it is not is metaphysical. If anything seems to be true, it is that the phenomenon of consciousness will be solved, if it is solved, by a naturalistic program.  Philosophy has little to add save important guidance about how to think clearly. And metaphysics like panpsychism has nothing to offer.

 

83 thoughts on “Anil Seth on the “real” problem of consciousness—and his hypothesis”

  1. I still think many of the Chalmers-followers, etc. have not paid enough attention to Dennett’s now almost 30-year-old thought (and real) experiments. A theory of consciousness (of any sort) has to account for “what it is like to see motion without a mover”, and I see no evidence that this is done by anyone other than the naturalists.

    Incidentally, metaphysics is not an escape hatch here: there is perfectly scientifically respectable metaphysics. Dennett downplays this angle due to personal predilection, but the Bunge-Armstrong (etc.) idea of metaphysics as hyper-general natural science has merit, IMO.

      1. I think what you both mean is that your metaphysics has to be based in – be informed by – your science. Is that right?

        1. I’d go further (along with our host actually) and say that philosophy should be part of science broadly conceived. Drawing a line and saying metaphysics on this side, science on that side, is neither necessary nor even helpful. When a physicist (Everett) posits that certain quantum states gradually yet rapidly become unobservable (“many worlds”), does that count as physics or metaphysics? Answer: yes.

  2. When I was an applied mathematician at NIH in the 1980s, a neuroscientist told me that short-term memory was a “standing wave” between the hippocampus, which handles sensory input, and the frontal cortex (or some such term, I forget what she said), where a picture is built up of what the sensory input reveals. By standing wave she meant that the communication between the hippocampus and the frontal cortex is constantly ongoing, and any changes in sensory input are immediately imprinted onto the frontal cortex. This supports, I believe, the contention in this post of a correction mechanism.

  3. Thanks for these discussions and the introduction to others’ work on the subject. Consciousness “a controlled hallucination”? I don’t know if it really is, but I like the idea.

  4. I don’t know if this is an appropriate place to link to my personal notes on consciousness and its evolution, but it’s here: https://edgeofthecircle.net/wp-content/uploads/2018/02/ConsciousnessNotes.pdf

    Here’s a skeleton summary:

    1. Consciousness arises from evolution, and therefore is an adaptation that has an explanation in terms of evolutionary fitness (either directly or as a byproduct of evolutionary forces).

    2. There is nothing that conscious beings can do that non-conscious ones (“zombies”) cannot. The evolutionary advantage is therefore in terms of doing the same things, but more efficiently.

    3. The brain uses 20% of the calories of the body; being more efficient would therefore certainly be helpful.

    4. Evolution will optimize computational ability (proxy: the weighted average of the VC dimension, and the importance of each solution) with respect to a cost function (which is the energy spent in computation, possibly combined with the number of brain cells and/or the genetic information required to specify brain development); a rough formalization in symbols follows the list.

    5. It is plausible that under this constrained optimization the neural network acquires a modular form. In particular, it is more efficient if the higher-order logic can be isolated into one module, and this module can be used for many different applications (the most important at any given time). Perhaps this behavior can be compared to the self-organization that happens in other systems when we optimize for maximum entropy flow.

    6. This modularization is not useful unless we also have an overall module that sits on top and is able to evaluate which application gets to use the higher-order logic module at any given time. This module should have access to all information in order to make this decision (introspection/qualia).

    7. Understanding and dealing with other humans, who are also trying to understand us, is one of the most important problems humans evolved to solve. Introspection is an important advantage in this. As humans grow more intelligent, this becomes a more and more difficult problem to model.

    8. As we are cooperative animals, the marginal advantage of intelligence keeps increasing as intelligence increases, in contrast to most other traits. This leads to a runaway increase in intelligence.

    9. As the system (in particular, the higher-order module or the control module) pushes towards higher intelligence, and approaches a critical level of complexity (some extension of Turing completeness or recursive logic), a phase transition happens and intelligence (as measured by the ability to solve a wide range of problems) increases extremely fast.

    10. The fact that neural networks are highly connected means that they are much less time-reversible than computers (the state is much more complex). Also, the fact that they contain some extension of Turing completeness/recursiveness means that they are less predictable (highly connected, plus recursive looping). In connection with introspection, this leads to free will.

    1. Each of your numbered points would take me hours to contemplate, but it all sounds reasonable. I’d guess once these ideas are fully investigated this outline will match reality very closely. Just my hunch. Now, I’ll go read your site.

    2. In principle there need be nothing adaptive about consciousness: 95% of what we see on the genetic level in organisms is a result of near-neutral drift. But inasmuch as it is a tangible trait, the likelihood that it is adaptive increases.

      [As an aside, I don’t think our experiences are very tangible outside of awareness of external and internal sensations and modeling our actions in response to them. We have neuronal networks that keep track of “self” in relation to “others” in a geometric sense (work that resulted in a Nobel Prize), so we seem to have the minimal functions needed to experience a show of “us”. (A “puppet show” perhaps, depending on how you want to see the modeling that the brain-body machinery does.)]

      This just in: the australopithecines were neither obligately bipedal nor had they reorganized their brains for high energy use [ https://arstechnica.com/science/2020/03/long-after-some-hominins-were-bipedal-others-stuck-to-the-trees/ ; https://www.sciencemag.org/news/2020/04/lucy-s-baby-suggests-famed-human-ancestor-had-primitive-brain ; https://www.nature.com/articles/s41598-020-60837-2 ]. From the last paper: “Using basal metabolic rates (BMR) provided by Boyer and Harrington24 for Homo (1557 kcal/day) and Pan (1370 kcal/day), we computed that StW 573 would have used 6.0% or 7.5% respectively of its BMR to support the brain, while Homo and Pan actually use 27.0% and 9.7% of their respective BMR.”

      The one defining fossil trait of Homo I know of is tooth isotope ratios showing prolonged weaning of 1-4 years [ https://advances.sciencemag.org/content/5/8/eaax3250 ]. This suggests that humans had adopted a social support system of medium-sized groups that enabled such dietary practices, likely as a response to the disappearance of trees and the appearance of grasses. (The other response was seen in Paranthropus, feeding on tubers of lower nutritional value.)

      I’m not committed exclusively to the “social brain” hypothesis, though, or to the idea that higher BMR evolved all at once. The later H. erectus is now believed to have originated in South Africa 2 Myrs ago, concurrent with the disappearance of the australopithecines [ https://www.heritagedaily.com/2020/04/when-three-species-of-human-ancestor-walked-the-earth/127172 ]. It relied heavily on tools, spread across the Eurasian and Oceanian world, and disappeared just 0.1 Myrs ago [ https://www.nature.com/articles/s41586-019-1863-2 ; see also the island forms H. floresiensis and H. luzonensis, which each disappeared about 0.05 Myrs ago]. There is about the same time period between the first putative Homo fossils at 3.5 Myrs ago and the first recognizable human “all-out migrant,” H. erectus.

      On the latest putative finds on modularization, see my other comments. It seems consciousness could be a dynamic phenomenon switching between two main awareness states (of external and internal sensations, respectively) but including in total eight brain states (sub-networks), while non-consciousness is a similar switching between the remaining six states.

      1. Thank you for commenting! Will quote you and respond to your points.

        1. “In principle there need be nothing adaptive about consciousness: 95% of what we see on the genetic level in organisms is a result of near-neutral drift. But inasmuch as it is a tangible trait, the likelihood that it is adaptive increases.”

        As mentioned, consciousness comes from evolutionary forces, but we don’t know if it’s a direct product or a byproduct. In fact, the difference between direct products and byproducts is itself a fuzzy one. In the rest of the points, I try to evaluate the conditions that lead to consciousness being on the path that leads to maximum evolutionary fitness.

        2. “This just in: the australopithecines were neither obligately bipedal nor had they reorganized their brains for high energy use.”

        The australopithecines predate most of the cranial evolution that takes us to where we are today. It’s not clear that they were more intelligent than today’s other non-human great apes. So this is still consistent with consciousness being a product of the optimization of intelligence with constraints due to metabolism.

  5. I like the idea that the qualities of conscious experience depend on an interaction between an internal model and external (or internal) data from the senses. That idea helps make qualia a partly social phenomenon, and partly independent of the specific brain structures involved in qualia: you & I perceive red through the same structures of the eye; we might develop a model of red through slightly different sets of neurons and synapses; but that doesn’t matter so long as both of us were socialized to associate red with similar kinds of objects or experiences (tomatoes, sunsets, bleeding).

  6. What are the causal influences on the processes of belief-fixation and decision-making? What are the differences in information-processing that make a difference? A “quale” is either a stand-alone metaphysical property or it somehow figures in the causal nexus of stimulus and (reflectively considered) response. I’m all in with Dennett—phenomenal consciousness is neither here nor there. ’Tis a quirk of abstraction of our kind of mind.

  7. Curiously, I was reading “Never Mind the Gap: Neurophenomenology, Radical Enactivism and the Hard Problem of Consciousness” by Michael D. Kirchhoff and Daniel D. Hutto this morning.

    I’m still digesting it, but the main thrust is that there is no ‘gap’ to explain between the phenomenal and the physical. They are all a continuum of the affordances of the environment and our responses. And that is why the Hard Problem is hard… it presupposes a division between the physical and our qualia which doesn’t exist.

  8. I do think the “hard problem” exists but it doesn’t have anything to do with the magical properties of the brain some researchers are looking for. Even when all the “easy problems” have been solved, we may still be mystified as to why our experience and qualia feel to us the way they do. And it certainly won’t tell us how it feels to be a bat, say.

    The best we can do is to reach a point in our understanding where when a certain neuron, or group of neurons, gets activated, we see red, or feel pain, etc. That said, I suspect we will eventually find such explanations sufficient and dismiss the “hard problem” as being of no consequence.

    1. Rather than “the hard problem” I would call it “the hard fact”, which is the fact that no amount of physical/biological description will *automatically* bring to mind the subjective feel of an experience. But this is a fact, not a problem. It’s only a “problem” if you start from false assumptions about what follows from the fact that qualia are properties of biophysical processes.

  9. Jerry: “Further, no person I know of, least of all me, pretends consciousness doesn’t exist. Dennett, for example, says it’s real, and he’s the Boss.”

    Dennett, a self-described illusionist about consciousness, denies the existence of a qualitative, phenomenal aspect to experience – “the what it’s like” qualities of feeling pain, seeing red, etc., what are commonly referred to as qualia.

    This of course is what most folks have in mind by conscious experience, so Dennett is very much in the minority here. Seth himself acknowledges the existence of qualia, as do you I think. Explaining why there is something it’s like when only certain sorts of neural processes are active is an interesting question, and at the moment there’s no naturalistic consensus on the answer.

    Here’s Dennett on illusionism: https://ase.tufts.edu/cogstud/dennett/papers/illusionism.pdf

    And my rebuttal: https://www.naturalism.org/philosophy/consciousness/dennett-and-the-reality-of-red

    1. Dennett is an illusionist, as you suggest, but he doesn’t deny the existence of qualia or consciousness. Like other illusionists, he simply says that consciousness is not what it appears to be to us. I have complained many times about this use of “illusion”. To many people it is almost a synonym for “non-existent” but they aren’t using it this way. Think of the many visual illusions we’ve seen on this website. They often involve lines which are of equal length but appear to be of different lengths. Such illusions don’t mean the lines don’t exist.

      1. Dennett doesn’t deny the existence of consciousness, but does deny the existence of qualitative phenomenology, which he says is an illusion. That is, it *seems* to us that there are qualities in experiences, but in reality there are none. As he puts it in his paper on illusionism (p. 66): “…you can’t be a satisfied, successful illusionist until you have provided the details of how the brain manages to create the illusion of phenomenality, and that is a daunting task largely in the future.”

        I find that many folks don’t take Dennett’s explicit denial of phenomenal consciousness at face value. They can’t believe he’s really saying there’s no phenomenality. But that’s been his position on consciousness at least since his book Consciousness Explained.

        1. “…you can’t be a satisfied, successful illusionist until you have provided the details of how the brain manages to create the illusion of phenomenality, and that is a daunting task largely in the future.”

          Dennett here implies that “how the brain manages to create the illusion of phenomenality” is a reasonable thing to research rather than a denial of its existence. He’s just saying we don’t yet have that explanation and we need to work towards it, rather than invent new, impossible-to-test mechanisms based on how things feel to us.

          All Dennett is saying is that phenomenal consciousness may feel like magic but that’s an illusion. He is not denying how it feels.

          According to Wikipedia, ‘Philosopher and cognitive scientist Daniel Dennett once suggested that qualia was “an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us.”’

          That’s hardly a denial of the existence of qualia. He just doesn’t like what some philosophers are doing with the concept.

          1. “All Dennett is saying is that phenomenal consciousness may feel like magic but that’s an illusion. He is not denying how it feels.”

            Having taken Dennett’s seminar on consciousness last year, I can report he really does deny the existence of qualities in experience. It’s not that he’s saying that phenomenality feels like magic, or is construed as being magical, but that it simply doesn’t exist. The feels you think you have are illusory – *there are no such feels*. Which is why so many folks end up discounting what he actually says. It’s a bridge too far for them, and rightly so.

          2. You are missing his point. Dennett does deny qualia as its proponents view them. That doesn’t mean he denies we experience things. He’s just saying that qualia don’t need to be explained in the manner that Chalmers and company feel they must. Chalmers raises qualia to the point where they have a first-class existence which requires they be explained. Dennett is saying that qualia are an illusion and that we shouldn’t focus on explaining the illusions as perceived but on the perception mechanism which produces them.

          3. My favorite example is when rats are given the ability to express opsins for “red” in some of their retinal cells.

            They then gain “a qualia” according to philosophers. Where did it come from? Surely not from the opsin protein.

            And similarly we can remove the ability to experience certain “qualia”. Do Chalmers et al believe they then can in principle experience the “ghost” of qualia? Surely the memory of an experience, rather?

            The whole idea that there is something tangible with experiences outside of what we know brains already do (have awareness; endlessly model actions from past into future) is absurd on the face of it. We may not know all what is going on, but we know enough to not mystify.

            Though Dennett is not helping, I take it. Of course we “make it up as we go”; that seems to be the whole point of the modeling that neuroscience can see rats do.

          4. While I find I pretty much agree with Dennett’s thinking on most things, I find that he too often uses words that seem meant to stir up trouble. He used to be a great explainer but, as he ages, he seems to enjoy the dissent and I find his writing less readable. Of course, it could just be me with my aging reading apparatus and growing impatience.

            Although I like philosophy as a subject, I don’t enjoy reading most modern philosophers when they attempt to explain how the brain works. They don’t tie their work sufficiently to reality. They ought to take some serious courses in computer science, cognitive psychology, and neuroscience. I suppose they have but they’re still just philosophers. Dennett is/was better than most.

          5. Take Penn and Teller doing the “catch a bullet in our teeth” trick. “Chalmers” asks the question: “So, how does Teller avoid getting his face blown off when the bullet gets caught in his mouth?” “Dennett” asks: “How is it that we think that Teller caught that bullet?”

            IOW, Chalmers presupposes that the trick is precisely only what it appears to be, and nothing else. Dennett says: don’t *start* with presupposing a conclusion of what something is, figure it out first!

    2. That Dennett link contains one of my favorite quotes, from Lee Siegel, which I’m sure I have seen quoted elsewhere:

      “I’m writing a book on magic,” I explain, and I’m asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and super-natural powers. “No,” I answer: “Conjuring tricks, not real magic.”

      Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic. (Siegel, 1991, p. 425)

  10. You really hit the nail on the head when you wrote, ‘The “hard problem” is not a scientific problem, but a metaphysical problem.’ In Goff’s book “Galileo’s Error”, he proposed that science is deficient in not being able to answer the “hard problem”. Your writing resonates with me as the appropriate response to Goff, for science’s job is to explain the how of reality, not the why.

    I also agree with your view on panpsychism. I feel that panpsychism violates Occam’s Razor, for it adds no additional predictive capability to physics, while just muddying the waters.

  11. The continued presence of the hard problem is evidenced by the lack of the word “experience” or some synonym in the paragraph about Churchland. The focus on “why” isn’t the issue; just replace “why” with “how is experience generated” (as distinct from merely describing how the color red is processed in the brain). “How does that processing lead to an experience” is the question. We can describe how red is processed in the brain, but that process does not include the experience of red anywhere in it (yet). That’s where the hard problem still is.

    1. I’m in the “there is no hard problem” camp, and I’d say the processing doesn’t lead to the experience, the processing is the experience.

    2. Asking “how is experience generated?” is similar to asking the question: “How is life generated?”

      All we need to do is describe all the chemical reactions that, in our evolutionary history, led from non-life to life. A collection of a few simple and minimally interacting chemical reactions won’t do it. But, as the number and complexity of the reactions and of their mutually dependent interactions increase, life slowly and gradually emerges out of non-life.

      Life IS those chemical reactions. There is no magical ingredient called “elan vital” that breathes life into those chemical processes.

      Why could that not be the same for consciousness and qualia?

      1. I think the answer to your question is that we already perceive a difference between neurochemical processes and experience as experience. If one defines life as a chemical process, nothing is lost that we already know. All our knowledge and predictions about life carry on as before, which means we can ditch the elan vital.

        But that’s not the same situation for the hard problem. If we ditch experience as experience – analogous to the elan vital – we DO lose something that we already know. We already know that it is like something to have an experience. There’s no “experience” factor in a neurochemical equation, so to speak. All of that is described quite nicely with just physics, and no mention of experience. So we do lose something, which makes the two scenarios not analogous.

        I’m going full devil’s advocate here.

  12. Why can’t the entire system of self-prediction and refinement be done without any consciousness at all? Couldn’t a non-conscious computer do exactly these things?

    Hypothesis: a non-conscious computer could do the same thing, but a computer employing a conscious-like system does it more easily.

    Analogy: image processing could be done algorithmically:
    if pixel 1 is red and pixel 2 is (blue or green) and pixel 3 is darker than pixel 1 and pixel 4 is...
    ...billion more statements...
    ...and pixel 409399599 is black - then it's a cat

    but it’s way way easier to process images with a trained neural network.

    Similarly, complex survival behavior can be encoded algorithmically, but a conscious-like system encodes the behavior much more efficiently.

    In other words, a conscious-like system provides an easier path to the self-predictive capability than does a non-conscious-like system.

    This is very rough spitballing on my part.
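
    To make the contrast concrete, here is a toy, runnable version (made-up data and features, purely for illustration): instead of hand-writing pixel rules, a one-neuron “network” learns its own decision rule from labelled examples by gradient descent.

```python
import math
import random

# Toy alternative to hand-written rules ("if pixel 1 is red and ..."):
# learn the rule from examples. Features and labels are made up.

random.seed(0)
examples = ([([random.gauss(2, 1), random.gauss(2, 1)], 1)    # "cat"
             for _ in range(50)] +
            [([random.gauss(-2, 1), random.gauss(-2, 1)], 0)  # "not cat"
             for _ in range(50)])

w = [0.0, 0.0]  # learned weights
b = 0.0         # learned bias
lr = 0.1        # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # probability of "cat"

for epoch in range(100):
    for x, y in examples:
        p = predict(x)
        # The log-loss gradient nudges the weights; no rules written by hand.
        w[0] += lr * (y - p) * x[0]
        w[1] += lr * (y - p) * x[1]
        b += lr * (y - p)

print("P(cat | [ 2,  2]) =", round(predict([2, 2]), 3))    # close to 1
print("P(cat | [-2, -2]) =", round(predict([-2, -2]), 3))  # close to 0
```

    The entire learned “rule” here lives in three numbers (two weights and a bias) rather than a billion hand-written conditions; scaling that idea up is what makes the neural-network route so much easier.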

    1. A trained neural network is still an algorithm, by the way. Consciousness is also an algorithm but running on a different kind of hardware. We’re still working on the details.

      “In other words, a conscious-like system provides an easier path to the self-predictive capability than does a non-conscious-like system.”

      Maybe, but once we understand consciousness, we may find that we can create better algorithms. As with bird flight, once we understand the main parts of the algorithm, we can do better, at least in the dimensions we care about. Self-reflection is really just a feedback loop. We’ve done those algorithmically for a while, so I suspect consciousness (and the brain generally) has a lot more algorithmic tricks we can’t even guess at.

      1. A trained neural network is still an algorithm

        Disagree. It’s partly a matter of semantics, since it’s all moving bits around, but as a programmer I see the algorithmic approach as fundamentally different from things like NNs.

        Neural network processing is a paradigm shift from traditional algorithmic processing – new capabilities, new problems.

        1. The best way to think about it, IMO, is that an ANN implements a virtual machine. So one has the algorithms like backprop (or whatever) and then the *unknown* stuff that the training produces “on top”.

          Notice this allows for Dennett’s discussion of a serial machine on top of a parallel one.

        2. Sounds like the argument is over the definition of “algorithm”. I use the word to describe any instance of a particular computation. Anything that runs on a computer is an algorithm, though the term differs slightly from “program” as it focuses more on how it works.

          ANNs are just one of a potentially infinite set of program architectures: a class of programs. Virtual machines are merely a different way of dividing up the space of all possible programs.

          IMHO, it is quite likely that the brain implements a program architecture that will be surprising to us. It will still implement an algorithm, or many, based on my definition of the term. One could also view the brain as a virtual machine which executes a program. Or biological hardware that executes a program. These terms are all just ways computer science types like to talk about programs or their parts. They have no firm, global definition but mean something different in each context.

          1. Sounds like the argument is over the definition of “algorithm”

            Yes. I’m using the term conceptually. Even though an NN can be translated into a sequence of steps (an algorithm) to run on a traditional computer, we usually don’t conceptually construct and manipulate an NN as a sequence of steps.

          2. NNs ARE a sequence of steps (an algorithm) to run on a traditional computer. There is specialized hardware that can be used to speed them up but they are still running the very same algorithms. The video card makers have adapted their special purpose hardware to speed up ANNs. AFAIK, they have been so successful in this market that their products bear little resemblance to the graphics hardware from which they arose.
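
            To see the claim in literal form, here is a toy forward pass (the weights are made-up stand-ins for whatever training produced), unrolled as nothing but a fixed sequence of arithmetic steps:

```python
# A trained network's inference is a fixed sequence of arithmetic steps.
# These weights are made-up stand-ins for the output of training.

W1 = [[0.5, -0.2],
      [0.1,  0.8]]   # input -> hidden weights
W2 = [0.7, -0.3]     # hidden -> output weights

def relu(v):
    return v if v > 0.0 else 0.0

def forward(x):
    # Step 1: weighted sums, then a nonlinearity, for each hidden unit.
    h0 = relu(W1[0][0] * x[0] + W1[0][1] * x[1])
    h1 = relu(W1[1][0] * x[0] + W1[1][1] * x[1])
    # Step 2: weighted sum into the single output.
    return W2[0] * h0 + W2[1] * h1

print(forward([1.0, 2.0]))  # same steps, same answer, every run
```

            Specialized hardware just executes these same multiply-add steps in parallel; the algorithm itself is unchanged.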

          3. NNs ARE a sequence of steps (an algorithm) to run on a traditional computer.

            Not the one in my head. I believe it could be translated into an algorithm, but it hasn’t been yet (that I know of).

  13. A little off topic, but I read recently of a thought experiment that made me think more deeply about consciousness. Imagine an anesthesiologist puts you under and then brings you around in a complete sensory deprivation chamber. What is different when you awaken? What makes you conscious? My first thought is that I can access my memories. But what if the anesthesiologist gave you a drug that temporarily wiped out your memories. What makes you conscious now? My next thought was that I can think. But thinking requires language. What if the evil anesthesiologist gave you temporary aphasia? Now I am stumped. No sensory stimuli, no memories, and no language. Am I conscious?

    1. I appreciate the effort and I love such thought problems. However, I don’t think this one gets us anywhere. You’re talking about unplugging important subsystems of the brain here. It’s like removing major parts of a car and then asking if it’s still a car. It’s not a particularly interesting question.

      What you are suggesting here also has a dualist flavor to it. It’s as if consciousness is some kind of magic energy field and you’re wondering where the energy goes if the normal matter is removed. I apologize if I have gotten this wrong.

      1. No, you may not have it wrong, but I was not suggesting, I was wondering. I suppose the question is the following. If you remove sensory stimuli, memory and language to think with, is there anything else? Or can we say consciousness is absent without those things?

        1. I think it’s possible to remove sensory stimuli and language and still have consciousness but not memory. Using a computer analogy, we could unplug a computer from its peripherals and it would still compute. Take away its memory and not much could happen. As far as language is concerned, we do plenty of thinking without it. While some ancient theorists suggested we need language to think, they’ve been shot down thoroughly.

          1. Surely animals are able to think without language. Same for human babies. Have you not had the feeling of learning a word for a concept you already understood for which you didn’t have a word? When you evaluate the personality of someone you meet, do you really put all those thoughts into unspoken words?

            People do find it hard to imagine thinking without words popping into their heads but that’s easily explained. Our brain makes associations based on context. When we see a dog, many things that we know about dogs come flooding into our minds. Not everything, of course, just those things that fit into the context. Words associated with the dog concept readily come to mind. But just because the words come to mind, doesn’t mean they are the basis of thought. They just freely come to mind when we think.

          2. Some things you mention here I would call perception or memory recall. Perhaps by “thinking” I am too narrowly focussed on ratiocination. BTW, I do not rule out that it is possible to do elementary thinking using images. Images just replace words in the “language”. Perhaps that is what animals and pre-verbal babies do.

            Also, that I do not have a single word (say, schadenfreude) to describe a concept does not mean I cannot put it into words (“I am feeling pleasure at that person’s misfortune”), etc. If I cannot put a concept into words, is it really a concept? If someone told me they have an idea but they cannot put it into words, I would tell them they need to think about it some more until they can.

          3. “If I cannot put a concept into words, is it really a concept? If someone told me they have an idea but they cannot put it into words, I would tell them they need to think about it some more until they can.”

            Yes, to your question. The fact that you can have a thought without knowing how to express it proves my point. Telling someone to think about how to put their concept into words is certainly reasonable, but you are only requesting that they create sentences that adequately describe their concept and then speak them. Many people struggle to find the words to express their feelings. This is because thoughts and the words that describe them are two different things. In fact, people often use “feelings” to label concepts that they have trouble describing. Sometimes it is just laziness or lack of education, but many times it is a task that they need to put energy into. How many times do scientists say that they have some ideas but they just need to put them down on paper? I find that putting my thoughts into words actually forces me to think things through to an extent that I otherwise wouldn’t. This is further evidence that thought is different from the words used to describe it.

          4. I think this is a really interesting topic and distinction. Thinking ≠ consciousness. Our brains do a lot of thinking that our conscious self is not aware of. But the things we, er, think of as consciousness seem to require language and memory.

          5. Memory, certainly. But language? I have no problem considering chimps and bonobos conscious.

  14. There is a hard evolutionary problem of consciousness, because “[t]he emergence of consciousness is arguably *the* most untransparent transition in the history of the universe: it is uniquely difficult…[.]”

    (Simons, Peter. “The Seeds of Experience.” In: Galen Strawson et al., Consciousness and its Place in Nature, edited by Anthony Freeman, 146-150. Exeter: Imprint Academic, 2006. p. 148)

    “At any rate, Seth’s solution, be it right or wrong, is still a “correlational” solution: how the brain creates the sensation of consciousness.” – J. Coyne

    Correlation doesn’t equal causation (creation), and physical-to-mental causation isn’t the only possible explanation of psychophysical correlations.

    1. Give me a break; I know that correlation doesn’t equal causation, but once you have the correlation through this approach, you can then begin testing causality by manipulations and other experiments.

      As for your second sentence, it’s obscure and you offer no alternative.

      1. Right, a case for causation can be made on the basis of correlation *plus* experimental manipulation.

        “A commonsensical idea about causation is that causal relationships are relationships that are potentially exploitable for purposes of manipulation and control: very roughly, if C is genuinely a cause of E, then if I can manipulate C in the right way, this should be a way of manipulating or changing E.”

        Causation and Manipulability: https://plato.stanford.edu/entries/causation-mani/

        Correlations between mental phenomena and physical phenomena can be explained in terms of causation of one by the other, or by a common cause of both of them; but they can alternatively be explained in terms of identity, composition, or constitution. Whether the latter two entail identity or not is a contentious issue; but, anyway, there is a relevant distinction between saying that mental phenomena are caused or produced by neural mechanisms, and saying that they are composed of or constituted by neural mechanisms.

        Parallelists (Leibniz with his pre-established harmony) and occasionalists can explain psychophysical correlations in terms of divine coordination or intervention, such that there is no causal, compositional, or constitutional relation between mental phenomena and physical ones. Of course, since these explanations presuppose theism, they are supernaturalistic and irrelevant to scientific explanations.

        1. Ask yourself (as Searle should): do intestines *cause* digestion? Or do they undergo it? If your answer is the latter, apply that same conclusion to nervous systems: they “mind”.

  15. Dr. Coyne writes:

    > b.) Seth’s suggestion, that consciousness is the same thing as the brain’s evolved method of checking its a priori models of the world by testing them against sensory input from the outside, sounds good, but I’m not sure how it produces consciousness.

    I suspect Seth is getting confused by the connotations of his own metaphors here. A number of the terms he’s using – notably “method,” “checking,” “models,” and “testing” – imply the presence of a conscious agent if taken literally and non-metaphorically. He’s building the consciousness into his premise and then pulling it back out again in the conclusion, as it were.

    1. “imply the presence of a conscious agent”

      I don’t think his use of terms implies he’s assuming the presence of a conscious agent. These must be metaphors for processes carried out by the neural network of the brain. The total set of processes may, in fact, amount to the phenomenon of consciousness he’s trying to explain (at a high level).

  16. Hi, Rick.

    > These must be metaphors for processes carried out by the neural network of the brain.

    Presumably so, yes.

    > The total set of processes may, in fact, amount to the phenomenon of consciousness he’s trying to explain (at a high level).

    Sure, Seth “may” have an explanation of consciousness that is informative and non-circular. He hasn’t presented that here, though.

    1. That I’d have to agree with. It’s an outline, an approach. He’s proposing a new program of inquiry, I’d say.

  17. I think the hard problem of consciousness is actually the ‘something from nothing’ problem that we find in any phenomenon. Consciousness appears to be so different in kind from anything else in our world that saying it somehow appears from matter feels a bit like saying that something from nothing was created when we finally arranged nothing in exactly the right way. Our minds of course rebel at that and say “That makes no sense! No matter how you arrange nothing, what you have is more nothing, not something!” I think the same could be said of ‘movement’ – that feels, conceptually, like another case of something from nothing.

    I think it’s possible that such conundrums are simply beyond our conceptual understanding. It’s also possible that we can understand them to some extent using creative analogies (to me, ‘something from nothing’ suddenly made sense when I compared ‘something’ to ‘math’). As to how salient such questions even are – I think that’s a difficult question because it probably depends on the answer, which we don’t currently have. It’s possible that reducing consciousness to its ultimate nature is key to understanding it in the most useful way. It’s also possible that this is an arcane and not particularly relevant concern. If we don’t know what its ultimate nature is, it’s difficult to know what the applications of that knowledge would be.

  18. Jerry… Interesting thoughts. You restate the question of why this happens with experience, i.e., the hard problem. Why can’t we say that what the amoeba does is also its experience, or, for that matter, what an atomic particle does? After all, from the overhead perspective humans, amoebae and particles all do the same exact thing: they interact with their environment. The particle has a fixed set of physical laws; the amoeba has a fixed set of laws that it “learned” via species evolution; and of course we have our laws, learned by evolution, social development and scientific advancement. This hard problem is really about why we have THIS level of experience. My answer is that it is for learning. True zombies are conceivable, but conceivability is one of the ways we think and learn, namely by rewiring our neurons.

  19. Is there any physical evidence that shows unambiguously that consciousness and qualia exist? (I’m not disputing that they do exist but only asking about the evidence.) If there is such physical evidence it would have to involve observations of effects that consciousness has in the physical domain, something it physically “does” or “causes” to occur that would not occur otherwise. As you point out, however, most of the practical (and potentially evolutionarily adaptive) doings that seem attributable to consciousness (e.g., prediction, refinement, etc.) could just as well be accomplished by a non-conscious computer. As I discussed in a recent article, the best and perhaps only candidate for physical evidence of consciousness is the observable human conduct of talking and writing about consciousness (both as a general topic and specifically as to our own thoughts, dreams, etc.). But there’s a problem here too. As far as one person can tell, everybody else’s “consciousness talk” could have been produced by a non-conscious computer and is, therefore, not unambiguous evidence that consciousness exists. Indeed, one cannot even be sure that one’s own physical “consciousness talk” is in fact the product of one’s own consciousness, as opposed to a non-conscious computer.

    None of this is to say I doubt that consciousness is real, but unambiguous physical evidence of consciousness is elusive.

    1. “Is there any physical evidence that shows unambiguously that consciousness and qualia exist?”

      The answer is no because we haven’t worked out the definitions of consciousness and qualia in terms of brain wiring, algorithms, or objective measurements.

      Many philosophers have come up with multiple kinds of consciousness, none of which has a very precise definition. And we don’t know how the brain works, so even if we had precise definitions, we couldn’t answer the question.

  20. Consciousness is – say the limited experiments so far – a dynamic phenomenon switching between two main awareness states [see below]. It would be ironic if all of the usual proposals were wrong.

    That aside, if “qualia” is defined as “the content of consciousness” and not Chalmers’s ‘shared content’, implying it is handled the same in all brains (since, after all, other animals can see “red” too), I can accept that as provisionally not philosophical.

    But in general I want to have testable definitions, including of experiences of consciousness, or I lose trust in the ideas. Aeon is after all formally devoted to philosophy and culture [ https://en.wikipedia.org/wiki/Aeon_(digital_magazine) ].

    On brain-body processes, of course science pulls and teases and sometimes integrates. Here is a new finding suggesting consciousness consists of two anti-correlated states:

    “Imagine you’re at work: you’re focused on a task when suddenly your mind starts to wander to thoughts of the weekend—that is, until you catch your boss walking by out of the corner of your eye. This back and forth in consciousness happens naturally and automatically and is the result of two brain states: the dorsal attention network (DAT), which corresponds with our awareness of the environment around us and the default-mode network (DMN), which corresponds with inward focus on ourselves.

    Brain researchers consider these states to be anti-correlated, meaning when one is active, the other is suppressed. Michigan Medicine researchers studying consciousness have provided proof of this phenomenon using fMRI …”

    “For their study, the team compared 98 participants who were awake, mildly sedated or generally anesthetized as well as patients with brain disorders of consciousness to analyze activity patterns in their brains from second to second. Using machine learning, they uncovered eight primary functional networks of brain activity: the aforementioned DAT and DMN, the frontoparietal network (involved in higher level processing), sensory and motor network (involved in sensation and movement), visual network (involved in sight), ventral attention network (involved in attention related to salient stimuli), and the global network of activation and deactivation (activity of the whole brain itself).

    The team showed that the brain very quickly transitions from one network to another in regular patterns. The transition trajectories constitute a ‘temporal circuit’, where the conscious brain dynamically cycles through a structured pattern of states over time. In patients under sedation and in patients with brain disorders, transitions to the DAT and DMN, higher order brain functioning, was significantly reduced. They also found that the pattern of transitions among networks depends on how the subject was unconscious.”

    [ https://medicalxpress.com/news/2020-03-reveals-delicate-dynamic-conscious-brain.html ]

  21. Still, there seems to be a missing link in Seth’s hypothesis. Isn’t it possible to do this same kind of Bayesian perception and action without any “consciousness”? If your arm is shaking, a computer program could determine whether the movement is endogenous or caused by a connection to another computer or object. Why can’t the entire system of self-prediction and refinement be done without any consciousness at all? Couldn’t a non-conscious computer do exactly these things? It, too, could have a sense of “self,” but one that is programmed rather than “conscious.”
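    To make the worry concrete: the predict-and-refine loop itself is unmysterious machinery. Here is a minimal sketch (a scalar Kalman-style filter; all the numbers are invented) that predicts a sensory signal from an internal model and refines the model on the prediction error, with nothing conscious anywhere in it:

      # Minimal Bayesian predict-and-refine loop; nothing here is conscious.
      import random

      belief_mean, belief_var = 0.0, 1.0   # prior belief about a hidden cause
      process_var, sensor_var = 0.01, 0.5  # assumed noise levels (invented)

      true_cause = 1.5
      for _ in range(50):
          observation = true_cause + random.gauss(0.0, sensor_var ** 0.5)

          belief_var += process_var                      # predict: uncertainty grows
          gain = belief_var / (belief_var + sensor_var)  # how much to trust the senses
          belief_mean += gain * (observation - belief_mean)  # refine on the error
          belief_var *= 1.0 - gain

      print(round(belief_mean, 2))  # converges near the true cause, 1.5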

    There is an old hypothesis, which I agree with, that a computer program emulating consciousness would itself be conscious. Otherwise you get a stacked chain of emulations, in other words a consciousness-within-a-consciousness (the so-called “Chinese room”, I think).

    In general, there is nothing magical about software or the hardware it runs on. Apart from the dynamics and constraints of development, a paper-and-pen algorithm emulates a computer, and a genome encodes an organism (if only it were so simple). This is an old idea too, I think: can a paper-and-pen computation be conscious? In the sense given, yes. Of course, its reaction time is very poor…
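    As a toy version of the emulation point, the few lines below (a register machine invented for this example) could in principle be executed by a patient human with pen and paper, one instruction at a time:

      # A made-up register machine, emulated in a few lines of Python
      # (or, very slowly, by a human with pen and paper).
      def run(program, regs):
          pc = 0
          while pc < len(program):
              op, *args = program[pc]
              if op == "inc":
                  regs[args[0]] += 1
              elif op == "dec":
                  regs[args[0]] -= 1
              elif op == "jnz" and regs[args[0]] != 0:
                  pc = args[1]  # jump if the register is non-zero
                  continue
              pc += 1
          return regs

      # Add b into a (for b >= 1): dec b, inc a, loop until b reaches 0.
      print(run([("dec", "b"), ("inc", "a"), ("jnz", "b", 0)], {"a": 3, "b": 2}))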

    1. I agree. “Conscious” and “programmed” are not mutually exclusive. On the other hand, I don’t find the “degrees of consciousness” line of thinking to be very interesting or productive. Someday when we understand consciousness a lot better, it will be interesting to see how ours compares to that of other species. But measuring the consciousness of pen and paper, for example, by examining its complexity is just woo-ish, IMHO. It’s like calling a desert a very dry bog — perhaps true but not very useful.

      1. Yes, the pen-and-paper algorithm is hair pulling. 😀

        “Degrees of consciousness” is, at a guess, a vast oversimplification. My reaction is that it pushes back against the speciesism inherent in some of these ideas, so it is somewhat useful in that sense. If it detracts from the science, though, we should throw it away.

    2. My issue with machine consciousness is that machines don’t have the emotions of a human. Humans don’t just calculate with their brains; they shift through myriad emotional states based on hormones and other juicy parts of the body. Thinking is deeply integrated with the body, each influencing the other, and I think it would be hard to program that.

      1. Everything is just input and response. I see no reason why an AI couldn’t be programmed to have emotional responses. I suspect they would be easier to reproduce than thoughtful responses, as they are for humans. (Tongue only half in cheek.)

        That said, I am sure that our first real AIs won’t have emotional responses as they would not be helpful to their human masters. Ok, perhaps in automated companions and sex toys. “I love you so much, big boy!” 😉

        1. I think it goes a little deeper than you suggest. We usually think of our calculating brains as abstract, autonomous calculators, which would be relatively easy to emulate. What I’m suggesting is that this is almost entirely false. The brain operates in a bath of chemistry, and not just the raging anger arising from hitting your thumb with a hammer. Every thought is formed from a chemical soup of pushes and pulls that determine what we are interested in in the next second. The feedback loops are bidirectional. Memories are held with varying duration and accuracy. Why we do what we do is often considered a complex mystery, and, while fundamentally we operate deterministically, it is mysterious largely because of the glandular, hormonal ambience. It is the farthest thing from being understood, let alone programmed. We are certainly not Vulcans.
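          To caricature the point in runnable form (every name and number below is invented): a single global “chemical” level biases each momentary interpretation, the interpretation feeds back into the chemistry, and memories persist with varying decay. Even this toy quickly stops looking like a clean, autonomous calculator:

            # Invented toy: a hormonal "bath" biases interpretation,
            # interpretation shifts the bath, and memories decay unevenly.
            import random

            arousal = 0.5   # stand-in for the chemical ambience
            memory = {}     # stimulus -> (strength, persistence)

            for step in range(100):
                stimulus = random.choice(["threat", "food", "noise"])

                # The chemistry biases the reading: arousal amplifies threats.
                salience = 1.0 + (arousal if stimulus == "threat" else 0.0)

                # Bidirectional feedback: the reading shifts the chemistry.
                arousal += 0.1 * (salience - 1.0) - 0.02 * arousal
                arousal = min(1.0, max(0.0, arousal))

                # Memories are laid down with varying persistence.
                strength, keep = memory.get(stimulus,
                                            (0.0, random.uniform(0.8, 0.99)))
                memory[stimulus] = (strength * keep + salience, keep)

            print(round(arousal, 2),
                  {k: round(v[0], 1) for k, v in memory.items()})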

          1. “We usually think of our calculating brains as abstract, autonomous calculators, which would be relatively easy to emulate.”

            I certainly don’t think the brain is that simple but I also don’t think emotional response will be that hard to implement in software for a few reasons (in no particular order):

            Emotional responses are characterized by their unpredictability. An AI’s emotional responses will be subject to less scrutiny than its unemotional ones. In other words, they’ll be easier to fake. (I’m using “fake” rhetorically here as the AI responses won’t be fake any more than anything else the AI does.)

            The internal architecture of an AI will not likely be that of the everyday programs we are familiar with or might have programmed ourselves. I think some people’s understanding of what is possible in software is based on very little experience with it. I always remember, when I was 12 years old or so, telling my uncle that computers can’t do anything other than what they were programmed to do. Even at the time, I had a feeling that was BS, but I was not sophisticated enough or knowledgeable enough to come up with a better answer. Of course, it isn’t true even for trivial programs, as they can easily produce unpredictable results. We often have to run programs in order to see what answer they produce. It will certainly not be true of an AI program.
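            That last point needs nothing exotic; a fully deterministic two-line iteration already has it. The logistic map below is completely determined, yet in practice you have to run it to find out what it does, and two almost-identical inputs diverge within a few dozen steps:

              # Deterministic yet practically unpredictable: the logistic map.
              x, y = 0.4, 0.4000001  # two almost-identical starting points
              for _ in range(50):
                  x = 3.99 * x * (1 - x)
                  y = 3.99 * y * (1 - y)

              print(round(x, 4), round(y, 4))  # typically wildly different by now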

          2. We usually think of our calculating brains as abstract, autonomous calculators, which would be relatively easy to emulate.

            What? Who thinks the brain is relatively easy to emulate? I don’t think that’s a common position.

          3. No, I mean that if you strip the brain of its bodily environment (a brain in a vat, without benefit of the pituitary or other hormone-releasing glands), you end up with what is basically a calculator, or logic machine. This would be relatively easy to emulate since it is simply a very large array of tiny gates (at least conceptually). AI has been aiming to produce such a machine in silicon and has thus far failed. So it’s no walk in the park, I know. But AI is even farther from producing the equivalent of a human brain with personality, able to set up complex life goals for itself, etc. This is my theory, and I’m sticking to it. 😎

          4. It’s true that what currently passes for AI is mostly not trying to reproduce the brain’s capabilities, though they may present it as such for marketing reasons. The real AI workers have been forced to rename their field to AGI (Artificial General Intelligence) in order to differentiate their work. The true AGI people refer to the neural network AI work as “advanced curve fitting” or equivalent. They are right, IMHO. There is a big controversy as to whether ANNs will ever produce AGI.
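            The “advanced curve fitting” jibe can be taken quite literally. The sketch below (layer sizes and learning rate are arbitrary choices of mine) trains a one-hidden-layer network by gradient descent to fit a sine curve, which is, mechanically, what much of today’s neural-network AI amounts to:

              # "Advanced curve fitting", literally: a tiny network fits sin(x).
              import numpy as np

              rng = np.random.default_rng(1)
              x = np.linspace(-3, 3, 200).reshape(-1, 1)
              y = np.sin(x)

              W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)  # hidden layer
              W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)   # output layer

              for _ in range(5000):
                  h = np.tanh(x @ W1 + b1)            # forward pass
                  pred = h @ W2 + b2
                  err = pred - y                      # gradient of squared error
                  grad_h = err @ W2.T * (1 - h ** 2)  # backprop through tanh
                  W2 -= 0.1 * h.T @ err / len(x)
                  b2 -= 0.1 * err.mean(0)
                  W1 -= 0.1 * x.T @ grad_h / len(x)
                  b1 -= 0.1 * grad_h.mean(0)

              print(float(np.mean((pred - y) ** 2)))  # small: the curve is fit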

          5. a brain in a vat, without benefit of the pituitary or other hormone-releasing glands, you end up with what is basically a calculator, or logic machine.

            I’d say a brain is a pattern matching machine that can, under certain conditions, accomplish rudimentary calculation and logic.

          6. Paul, I’d have to predict that the effort will fall short in significant ways. Still, the effort is well worth it.

            Mike, that sounds right.

  22. Imagine an AI becomes self-conscious. It has the ability to understand logic and can read the programming languages that compose its operating system and high-level programs. The only problem is that it is blind to the physical nature of silicon and electronics. One day it starts to acquire knowledge of its electronic components and the electrical theory of electron flow, storage, and timing. However, it is still unable to understand how it acquired self-consciousness.
