Oy! LiveScience touts panpsychism as the solution to the hard problem of consciousness

November 12, 2019 • 11:30 am

Until I read this piece, I always thought that LiveScience was a rigorous science site. Indeed, most of it is, but here’s an exception: a paean to “panpsychism”, the view that everything in the Universe, from electrons to elephants—indeed, the Universe as a whole—has a form of consciousness. The author of the article is Philip Goff, a philosopher (now at Durham University) who makes a living touting panpsychism and whose views I’ve critiqued before. So I’ll try to be brief.

Goff’s thesis is that the reason we can’t solve the “hard problem” of consciousness—how neural impulses can be converted into subjective experience—is that we’ve been groping around in the wrong area: neuroscience.  The solution, he suggests, is to recognize the possibility of panpsychism.

Now neuroscientists have pretty much accepted that consciousness is a product of the brain, for we can alter consciousness or eliminate it or even split it by various brain manipulations or psychological tricks. And we’re starting to understand the neural correlates of consciousness: the parts of the brain that give rise to subjective experience—”qualia”. But that is the “soft problem”, and doesn’t address how physiological and neurological processes produce the sensation of consciousness.  As Steve Pinker said in a superb popular article on consciousness in Time magazine:

The Hard Problem is explaining how subjective experience arises from neural computation. The problem is hard because no one knows what a solution might look like or even whether it is a genuine scientific problem in the first place. And not surprisingly, everyone agrees that the hard problem (if it is a problem) remains a mystery.

I do think it’s a genuine scientific problem, and I can hazily glimpse how it might be solved in the future, but right now it’s a mystery. And it’s a mystery that has buttressed a lot of woo, ranging from religion (“See, science can’t tell us everything. Sometimes you must invoke God”) to Goff’s panpsychism.

Here’s how Goff sets up the problem:

We have made a great deal of progress in understanding brain activity, and how it contributes to human behavior. But what no one has so far managed to explain is how all of this results in feelings, emotions and experiences. How does the passing around of electrical and chemical signals between neurons result in a feeling of pain or an experience of red?

There is growing suspicion that conventional scientific methods will never be able to answer these questions. Luckily, there is an alternative approach that may ultimately be able to crack the mystery.

Well, maybe science will never be able to answer the question, but that may mean that the problem is simply hard, or that its solution is so counterintuitive that, like quantum mechanics, our brains aren’t capable of grasping it. But it doesn’t mean that we need to invoke the numinous. Too many real empirical solutions have been overlooked in the past (lightning and disease, to name two) because a problem was hard, tempting people to punt to a woo-ish explanation.

Goff does the requisite invocation:

I believe there is a way forward, an approach that’s rooted in work from the 1920s by the philosopher Bertrand Russell and the scientist Arthur Eddington. Their starting point was that physical science doesn’t really tell us what matter is.

This may seem bizarre, but it turns out that physics is confined to telling us about the behavior of matter. For example, matter has mass and charge, properties which are entirely characterized in terms of behavior — attraction, repulsion and resistance to acceleration. Physics tells us nothing about what philosophers like to call “the intrinsic nature of matter”, how matter is in and of itself.

It turns out, then, that there is a huge hole in our scientific world view — physics leaves us completely in the dark about what matter really is. The proposal of Russell and Eddington was to fill that hole with consciousness.

I’m not sure, nor does Goff tell us, how panpsychism tells us what matter really is. All it seems to tell us is that all matter is conscious. He goes on:

The result is a type of “panpsychism” — an ancient view that consciousness is a fundamental and ubiquitous feature of the physical world. But the “new wave” of panpsychism lacks the mystical connotations of previous forms of the view. There is only matter — nothing spiritual or supernatural — but matter can be described from two perspectives. Physical science describes matter “from the outside”, in terms of its behavior, but matter “from the inside” is constituted of forms of consciousness.

This means that mind is matter, and that even elementary particles exhibit incredibly basic forms of consciousness. Before you write that off, consider this. Consciousness can vary in complexity. We have good reason to think that the conscious experiences of a horse are much less complex than those of a human being, and that the conscious experiences of a rabbit are less sophisticated than those of a horse. As organisms become simpler, there may be a point where consciousness suddenly switches off — but it’s also possible that it just fades but never disappears completely, meaning even an electron has a tiny element of consciousness.

This argument is bogus on its face. Make the same comparison with metabolism, coded replication (DNA), or any other behavior or characteristic of organisms. Those are not continuous with nonliving matter: a rock doesn’t replicate itself faithfully, or metabolize, or seek food. Surely a minimal requirement for consciousness is some kind of neuronal network. I may be wrong, but emergent properties like consciousness seem to arise only once a system reaches a certain degree of complexity. Humans can use symbolic language; mice can’t.

I can’t be arsed to dig into all the links Goff cites, but saying that mind is matter doesn’t solve the problem of what matter is, for we still don’t know what “mind” is! And so Goff passes on to how every bit of matter is conscious.

What panpsychism offers us is a simple, elegant way of integrating consciousness into our scientific worldview. Strictly speaking, it cannot be tested; the unobservable nature of consciousness entails that any theory of consciousness that goes beyond mere correlations is not strictly speaking testable. But I believe it can be justified by a form of inference to the best explanation: panpsychism is the simplest theory of how consciousness fits into our scientific story.

While our current scientific approach offers no theory at all — only correlations — the traditional alternative of claiming that consciousness is in the soul leads to a profligate picture of nature in which mind and body are distinct. Panpsychism avoids both of these extremes, and this is why some of our leading neuroscientists are now embracing it as the best framework for building a science of consciousness.

I am optimistic that we will one day have a science of consciousness, but it won’t be science as we know it today. Nothing less than a revolution is called for, and it’s already on its way.

Yes, his theory is untestable, but I’m not sure it’s even the simplest theory. After all, you could voice a simpler theory that “consciousness was bequeathed to us by God”. Further, the idea of a continuum of self-awareness from humans to photons is not only hard to accept, but is supported by no evidence at all. Yes, you could say that photons have a different kind of consciousness, or are only dimly aware of things, as is a bacterium, but, as Christopher Hitchens said, “What can be asserted without evidence can be dismissed without evidence.”

In the end, Goff fails in his task of explaining how panpsychism solves the hard problem of consciousness; he just avoids the problem by asserting that everything is conscious. On his view, consciousness is simply an inherent quality of all matter. But the hard problem remains. From where does that consciousness arise? What properties of matter produce a sentient atom?

Goff’s unfounded speculations don’t belong in LiveScience because they’re not scientific speculations, but a form of unsubstantiated woo. When he tells us how he knows that matter is conscious, then I’ll pay attention.

h/t: Bill

160 thoughts on “Oy! LiveScience touts panpsychism as the solution to the hard problem of consciousness”

  1. He was on Mindscape (Sean Carroll’s podcast) last week or the week before. It was an interesting conversation, although Carroll’s position is closer to yours than to Goff’s. Worth a listen anyhow.

      1. He didn’t provide any solid evidence, and seemed more interested in denigrating “materialism,” even though what he proposes is basically the same thing as materialism except matter has another property, consciousness.

        1. This is the neo-Leibnizian (sort of) view espoused first in recent times by David Chalmers. For some reason there are a fair number (but by no means most or all) of philosophers who think this is a good position. Chalmers is otherwise in my view a very good philosopher – it was from him I learned about hypercomputing, for example.

      2. Panpsychism is silliness. If you think about it, any suitably undefined property can be assigned to all things. It’s yet another example of an untestable hypothesis to say that an atom is conscious. What kind of evidence would we accept?

          1. I was going to add an analogy to panpsychism but forgot to. It’s like saying everything is red but by an amount that may be zero. Atoms are red and conscious, just not very much. IMHO, it’s a vacuous way of looking at the world.

    1. So I guess the only true example of death (or murder) is when matter disappears and is converted to energy, as in the interior of stars or when a nuclear weapon is detonated.

      1. Matter does not become energy (a property); matter (or rather, “bodies”, since this is only matter in the narrow sense) becomes *radiation*.

        The conclusion that living things never exactly die but merely transform is also in Leibniz (see above).

  2. It’s unclear whether, even in principle, science can explain how subjective experience (a first-person perspective) can arise from an objective reality (a third-person perspective, e.g., neurons in brains, etc.). I’m not saying that panpsychism solves it, but it is a genuine explanatory gap that science may never be able to bridge.

    Science can never prove the presence or absence of conscious awareness. You cannot prove that I am conscious, that I have a subjective experience. For all you know, I’m just a robot who, from a 3rd person perspective, shows all behavioral signs of consciousness, but has no subjective experience at all.

    1. Well, since consciousness arises from human brains, and we know we are conscious and have subjective experiences, a reasonable conclusion is that other people have conscious sensations like ours. Science can’t “prove” that other people are conscious, but it’s the most reasonable inference. And that, to me, points to ways to investigate how consciousness arises from brains and neurons.

      1. Yes, it’s an inference, but because consciousness is a subjective, first-person phenomenon, without an independent measure/operational definition of ‘consciousness’, we can never answer the question of whether another person, a cat, squirrel, cockroach, tree, amoeba, or electron is conscious.

        1. …because consciousness is a subjective, first-person phenomenon,…

          Technically we cannot even confirm that that is scientifically true, yet. It is logically risky to define ‘consciousness’ when for all we know it might be something else entirely, such as a delusion riding on top of awareness. Indeed some philosophers say that the ‘hard question’ may be the consequence of poor definitions.

          Define ‘consciousness’ as an abstract narrative ongoing centre of awareness and the hardness of the problem is markedly eased.

          1. Awareness is an aspect of consciousness isn’t it?

            Consciousness is immediately given. If I experience a pain, it is meaningless to say it might be a delusion. For its reality is given by the presented experience.

    2. I don’t think science can ever explain something until it has been provided with a suitably rigorous and objective definition of the thing it is being asked to explain. This failure to provide any suitable definitions could be considered the ‘Hard Problem of Philosophy of Mind.’ As it is generally set up, this field quite simply fails to meet science halfway.

      1. Dennett and others are in the right direction here, IMO – by refusing to accept the silly idea that somehow there is a great mystery here. (And then doing specific analyses of what people *thought* were mysterious.) As far as I know nobody has ever shown this to be an incorrect way to go.

      2. Er… this is the wrong way round. An objective definition means a scientific one. But an existent acquires such a definition once it has been explained by science.

      1. Well I wasn’t convinced either, but I did find it more substantive and interesting than Goff’s article. And Scaruffi had the good manners to open his discussion with: ‘At this stage in the study of consciousness, any “theory” is pure speculation.’

  3. Proclaiming that science “can’t” supply an answer borders on hilarity.

    We’ll never get a plane to fly; we’ll never get to the Moon; we’ll never eradicate disease; we’ll never communicate globally in mere milliseconds; and we’ll never understand consciousness.

    1. Proclaiming that science “can’t” supply an answer borders on hilarity.

      Philosophers trying to justify their profession.

    2. One of the interesting things, at least in connection with the social sciences, is that scientific answers lead to technology which in turn changes how the system behaves.

      As traders developed mathematical trading models and employed computers to deploy trading strategies based on those models, it changed the behavior of the financial markets themselves, triggering the 1987 crash:

      https://www.cio.com/article/2437854/remembering-black-monday–when-computers-traded-too-many-stocks-and-wall-street-cras.html

      You could also look at industrialization changing the social order and people’s sense of identity and place, as well as the appearance of ecological and environmentalist understandings in the wake.

      That is, understanding gives rise to social change and technology which gives rise to a new understanding which was not possible previously.

      With respect to philosophy, philosophy clearly changes people’s attitudes and world views, and their sense of self. This can be good, bad or indifferent, but it’s hard to imagine the modern world with no Nietzsche, or a Middle Ages without Aquinas. That may not be interesting or important to some, but it has serious political and historical implications (which is why I assume so many people are interested in debating or advocating for their own philosophical views).

      I think the arts and philosophy in some way manifest the essence of a particular time or place. Hard to imagine Klimt divorced from fin-de-siècle Vienna, for example.

    3. Then you don’t understand the problem. Science is confined to describing the quantifiable only. I wrote a blog post that explains this. Google “Why the existence of consciousness rules modern materialism out.”

  4. One of the things that irritates me about discussions of consciousness is that they rarely provide good definitions of “consciousness”.

    I have a very vague definition of consciousness: a reflective pattern of information processing (computing). A conscious system must maintain a model of itself (the “I”, the self, or identity), and that requires the reflective capacity for the system to think about itself. Possibly significant recursion is required (thinking about myself thinking about myself thinking about myself…). A certain level of computing power is needed to implement the reflection, and that’s why rocks and bacteria aren’t conscious, but a goldfish might be, and a dog certainly is (in my definition). (A toy sketch of this self-model idea follows below.)

    I also think consciousness is tractable to science – it’s simply a super-difficult software problem.
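
    To make the “reflective information processing” definition above a bit more concrete, here is a toy sketch (in Python) of a system that maintains a model of itself, with the recursion made explicit. The names (Agent, SelfModel, reflect) and the whole design are invented for illustration; it captures only the bookkeeping of self-modelling, not experience itself.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SelfModel:
    beliefs: dict = field(default_factory=dict)      # what the system takes to be true about itself
    model_of_self: Optional["SelfModel"] = None      # one further level of reflection, if any

@dataclass
class Agent:
    world: dict = field(default_factory=dict)        # model of the surroundings
    self_model: SelfModel = field(default_factory=SelfModel)

    def reflect(self, depth: int = 1) -> None:
        """Nest a copy of the current self-model `depth` levels deep:
        'thinking about myself thinking about myself ...'."""
        model = self.self_model
        for _ in range(depth):
            model.model_of_self = SelfModel(beliefs=dict(model.beliefs))
            model = model.model_of_self

    def reflection_depth(self) -> int:
        depth, model = 0, self.self_model
        while model.model_of_self is not None:
            depth, model = depth + 1, model.model_of_self
        return depth

if __name__ == "__main__":
    a = Agent(world={"temperature": 21})
    a.self_model.beliefs["modelling the room"] = True
    a.reflect(depth=3)
    print(a.reflection_depth())   # -> 3

    On this toy picture a rock has no self-model at all, which is roughly the commenter’s point about why a certain level of complexity matters.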

    1. I have tended to think of it as simply being aware of one’s surroundings, which only requires having senses and a brain to detect it. But the question then is does that require self awareness? And then does dreaming count as a form of consciousness?

          1. Good question but I think it has an easy answer. Since all a thermostat knows is the current temperature and whether the A/C and heat are on or off, I would say it is aware to that extent. We’re all (humans, thermostats, etc.) purely material things.

            More seriously, a human mind and a thermostat have analogous structure. They both have connections to the outside and internal representations that inform their functionality, and both can take action in their environment. The only difference is that the human has many more inputs and outputs and far richer internal structure and representations. (A toy sketch after this comment tries to make the shared shape concrete.)

            The key here is what one means by “aware”. It is easy to define it vaguely such that the human has the property and the thermostat does not. The challenge is to define it precisely but not cheat by incorporating an “if human, then aware” clause.
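
            A toy sketch of that shared shape, under the commenter’s own framing: both systems below take inputs, keep an internal representation, and act on it, differing only in how much there is of each. The class names and the is_aware test are invented for illustration, and the test deliberately illustrates how easy it is to define “aware” so loosely that both pass.

from typing import Protocol

class Agentlike(Protocol):
    def sense(self, inputs: dict) -> None: ...   # connections to the outside
    def act(self) -> list: ...                   # actions back on the environment

class Thermostat:
    """Tiny internal representation: a set point and the last reading."""
    def __init__(self, setpoint: float):
        self.state = {"setpoint": setpoint, "temp": None}

    def sense(self, inputs: dict) -> None:
        self.state["temp"] = inputs["temp"]

    def act(self) -> list:
        if self.state["temp"] is None:
            return []
        return ["heat on"] if self.state["temp"] < self.state["setpoint"] else ["heat off"]

class CartoonMind:
    """Same shape, vastly richer representation (percepts, goals, ...)."""
    def __init__(self):
        self.state = {"percepts": {}, "goals": ["stay warm", "find food"]}

    def sense(self, inputs: dict) -> None:
        self.state["percepts"].update(inputs)

    def act(self) -> list:
        return ["put on a jumper"] if self.state["percepts"].get("temp", 20) < 15 else []

def is_aware(agent: Agentlike) -> bool:
    # The commenter's challenge: write this without smuggling in "if human, then aware".
    # This placeholder only checks for the shared structure, which both systems satisfy.
    return hasattr(agent, "sense") and hasattr(agent, "act")

if __name__ == "__main__":
    for a in (Thermostat(21.0), CartoonMind()):
        a.sense({"temp": 12})
        print(type(a).__name__, a.act(), is_aware(a))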

          2. I agree with Paul. Dan Dennett described a vending machine which carried on simple transactions with human operators as having all the essential elements of a conscious being. In other words, it’s really a matter of degree, not kind.

    2. I tend to agree with that, but how can we know that even dogs, let alone goldfish, are capable of thinking about themselves thinking? From what I understand from their behaviour, great apes, some cetaceans, and maybe a few other animals, such as elephants, show signs of introspection, but others?

      Anyway, humans seem to be the only species that is interested in trying to work out what consciousness really is. Introspection doesn’t really seem to work, because we have all sorts of folk impressions about what’s going on in there that are not supported by any evidence. But I absolutely agree that it is a problem that can be dealt with by – and only by – science.

      And I honestly don’t see how ‘pan-psychism’ gets us anywhere. All it does is give us something else to explain!

      1. how can we know that even dogs, let alone goldfish, are capable of thinking about themselves thinking?

        Well goldfish was just a guess, but my experience with dogs indicates they do think about themselves: their problem-solving capacity seems to require reflection, as does their capacity for empathy and guilt (or shame), and they seem to have some understanding of death, etc.

        And there’s this:

        1. Ah yes, that’s a great clip!

          Empathy, guilt/shame, reflection, concept of death: yes, quite possible. But thinking about thinking (about…) Not so sure!

          1. Although I did say “thinking about thinking about thinking…” I’m not sure my (vague, evolving) definition of consciousness requires that much recursion. But the ability to think about oneself (one level of recursion or introspection) is probably a requirement.

    3. No, consciousness refers to the totality of our mental lives — everything we ever experience, think, feel etc. It has absolutely nothing to do with information.

  5. “Goff’s thesis is that the reason we can’t solve the “hard problem” of consciousness—how neural impulses can be converted into subjective experience—is that we’ve been groping around in the wrong area: neuroscience. The solution, he suggests, is to recognize the possibility of panpsychism.”
    Another useless philosopher, who might as well be a theologian, addresses a problem that I don’t think is a problem. There is probably no way to explain subjective experience, since it entails the cooperative effects of millions to billions of neurons. The question of why animals have subjective experience, including emotions, seems a much simpler sort of non-mystery. It’s an efficient way for evolution to equip animals for survival through making use of all those neurons.

    1. Yes, subjective experience doesn’t seem all that great a mystery to me either. We are living things that have to exist within an objective reality. This entails being able to find food and being able to prevent ourselves from becoming food. This further entails being able to model reality to some degree of accuracy in order to do those kinds of things. We evolved senses and the cognitive ability to process their signals in ways that enable us to sense and model reality to some degree of accuracy, and behaviors that respond to that model in ways that give us some success at existing in objective reality. Subjective experience seems rather obviously to me to be the output of our evolved reality-modeling and response systems. Not highly accurate and certainly not remotely a one-to-one correspondence with objective reality, but accurate enough to get by.

        1. That’s my feeling as well.

          The “hard problem” is very often presented as “why should we expect our cognitive apparatus to produce Qualia? Why do we experience that it is ‘like something’ to be processing all this information?”

          My intuitive response is that it’s at least as appropriate to ask: “Well, why should we expect otherwise? Why WOULDN’T it feel like something?”

          Then the attempts to argue from either analogy to computers, or to thought experiments seem to beg the question.

          Chalmers’s P-zombies only beg the question by assuming it’s possible to have exactly the same cognitive processes going on while not producing conscious experience.

          Similarly, if we posit a sufficiently complex computer system, how can we assume it too wouldn’t experience consciousness? You can’t assume it wouldn’t and simply say “well, if we built a similarly complex computing structure it wouldn’t be conscious, so that shows that type of information processing doesn’t produce, or doesn’t require, consciousness.”

        2. I think Paul and Vaal have perfectly diagnosed one part of the “hard problem”. The other part, that philosophers like Goff try to leverage, is the fact that you can’t read off a brain scan (or other purely objective data) to infer what someone is feeling. But this isn’t a “problem”, it’s just a fact. And this fact is exactly what some versions of materialism predict. The woo-loving philosophers always fail to grasp the point I just emphasized.

        1. Well ianwardell, do you people have anything better to offer than philosophical zombies? PZs are a concept that exists only in human minds, in other words make-believe, and as such are not particularly persuasive as an argument for or against anything.

          1. I shale granite it is worth a groan; I did marble at it, so don’t slate it too much. It lignite a smile, at least!

  6. I am less confident we will ever be able to satisfy those looking for an answer to the “hard problem” of consciousness. Even if we understood the neural mechanisms completely, they would not accept an answer in terms of physical mechanisms and brain states. No one is going to be able to break down their own first-person experience. It’s a self-referential paradox.

  7. Under Goff’s idea you still have to explain how billions and billions of electrons and protons with a rudimentary consciousness combine to form a human with its immensely richer consciousness. Surely that’s not much different than explaining how billions of particles with no consciousness perform the same trick. Or am I missing something?

    1. This has been my “ad hominem” (in the good, original sense) argument against David Chalmers (see above) since 1999, when I first read in detail his particular panpsychism claims. As far as I know, 20 years later and 23 years after he first made them, and 300+ since Leibniz did the same, there is no answer.

  8. The strange thing about consciousness is that it seems to have no purpose or effect at all. I see no reason to believe that a robot cannot be programmed to act fully human. Is it conscious?

    Ask a different question… what difference does it make? If you understand the operation of its program as formal operations on formal variables then what else is there to explain? If consciousness exists then it seems to serve no purpose, has no effect and has no explanatory power. A conscious being would seem to be a helpless observer to events that it has no control over. It is indistinguishable from a philosophical zombie.

    Yet we have the ability to discuss consciousness. Why would philosophical zombies evolve the ability to discuss consciousness?

    1. If consciousness exists then it seems to serve no purpose, has no effect and has no explanatory power.

      Really? There is no difference between having intercourse with a conscious partner and an unconscious partner?

      And if consciousness is such a “hard” mystery, how can prosecutors get convictions in such cases?

      1. I think you miss the point. But I’m at a loss as to how to get you to the point.

        Philosophical zombies, do they exist? It is hard to imagine that they do not but your mileage may vary. But if they do then how would you tell if you were having sex with one?

        1. The real question is whether a philosophical zombie can be tested and determined to be color blind.

          If not, then you can distinguish them from humans (at least the color-blind ones) via a physical procedure. If it can, then it’s not a zombie.

          [Ever wonder why people say things like “the taste of coffee” instead of the “taste of my mind when I imbibe coffee”, or when the doctor asks where it hurts, they point to the wounded limb? The dualist should tell them the pain has no location and the brain-people should point to their heads.]

          1. Uh, no. A spectroscope can distinguish between colors. That is what it is for. But it cannot experience colors. It turns out that you do not need to experience colors in order to detect and report them.

            It even happens in people. There is a condition called blindsight where the primary visual cortex is damaged, making people consciously blind to the world around them. Yet they can respond to things in their visual field as if they could see them. They can see; they just aren’t conscious of what they see.

            You can program a computer to respond to objects in a visual field. In principle it can do so as well as a human. But consciousness is not a part of the algorithm.

          2. Wittgenstein, who was likely a closet dualist in a way, did do exactly that with the pointing. As for the zombies, Dennett points out that the zombie itself cannot even *say* (and likely even realize, if that’s to have any effect later on) that it is in fact one, because that’s an influence of the supposed “inert experience”. Yet even those with hemineglect can be partially treated, so …

    2. ppnl,

      If consciousness serves no purpose, how would that make sense of the role consciousness seems to play in our taking in and outputting information about the world?

      I can hardly do math problems if I’m not conscious. And it’s the math problems I’m conscious of for which I can produce conclusions. Same with much of our interaction with the world.

      If you are seeking information about how to build a small glider, you will only get information from the engineers that they are conscious of, and you will only receive that information that you consciously receive.

      And you can use this to successfully build a glider.

      There may be all sorts of processing of information going on “behind the scenes” that doesn’t make it to our consciousness, but then you have to ask why does it happen that we are conscious of only SOME of that information and why THAT information, and why does it just happen that what we are conscious of is what we mainly use to communicate, and navigate the world.

      Seems to me consciousness must be pretty important.

      1. If consciousness serves no purpose,…

        I did not say that consciousness has no purpose. In fact I expressed doubt that consciousness has no purpose but expressed confusion over how this could be.

        I can hardly do math problems if I’m not conscious.

        Computers can do math problems very well. And I’m not talking about simple adding and multiplying. They can integrate and differentiate and do complex group operations in a way that no human can compete with. They can play chess and Go better than humans. They can learn new games on the fly and start beating humans very quickly.

        How long before they are designing flying machines without human help?

        Are they conscious? I don’t know. But I don’t need to know if they are conscious in order to understand the program. In that sense if they are conscious then consciousness seems to play no role in their abilities. Only the logic of the program matters.

      2. I agree that consciousness serves a purpose (the survival-enhancing character of pain and pleasure makes this evident). The problem I see is how subjective feelings can affect physical behaviours, and how this reconciles with the laws of physics.

        The effect of feelings on behaviour implies some sort of ‘top-down’ causation from feelings to behaviour (as feelings could only be selected for if they affect behaviour). It seems unlikely that the nature of physical laws which would allow such top-down causation would only ’emerge’ when the brain evolved. Hence, the appeal of panpsychism for some.

    3. If a robot claims that it is conscious, and continues to show all manner of behaviors that make it seem conscious to observers, I would be willing to accept that it is conscious. Think Data in the Star Trek series.

      1. Well Data is a work of fiction and a very problematic one. What exactly is an emotion chip anyway?

        It is fairly easy to produce a program that fools a large number of people. With the new A.I. techniques I suspect this is about to get really weird. But it is still just a program. It is still just formal operations over formal variables. If it is conscious then that fact seems to be irrelevant to understanding how the program works. It would do the same if it were not conscious. It would seem from this that consciousness is an irrelevant side effect with no observable consequence.

        But we discuss consciousness. Isn’t that an observable consequence?

        1. It was my father’s view that Data is *expressively mistaken* about his internal states. He claims to have no “feelings”, but how do we know he’s right? Geordi for one seems to not take this assertion at face value, and in at least one context, neither does Dr. Crusher. Dr. Soong also alludes to this when he tells Data that Data will grieve (“in his own way”).

          And yes, about the observable consequence. This is why (one reason why) philosopher’s zombies are so ridiculous. We’d be having the exact same conversation (by hypothesis) if we were all such things!

          1. Yes, Data was wrong about that. Many of his responses had an emotional component. Of course he could be faking them. And then Brent Spiner was definitely faking not having emotions. So many layers!

    4. I suspect a mouse without consciousness would lose out (in Darwinian terms) to the mouse which possessed consciousness. Consciousness (I speculate) may be essential for efficient interaction with diverse and rich environments.
      To achieve unconscious “mouse level” processing may require a processing unit much bigger than a mouse brain.

      Consciousness has advantages and is selected for by evolutionary processes.

      1. To achieve unconscious “mouse level” processing may require a processing unit much bigger than a mouse brain.

        What do you mean by large? If a mouse brain is a processing unit then a mouse has exactly a mouse size processor that produces whatever level of consciousness it possesses. I’m not sure what you are trying to say.

        I have little doubt that evolution selects for consciousness. You miss the point. I just don’t see how. You need some way to distinguish between philosophical zombies and conscious agents.

      2. Yes, in highly social, gregarious species there is an obvious advantage to knowing how others perceive you. A recipe for (self-)consciousness, I’d say.

    5. ||”The strange thing about consciousness is that it seems to have no purpose or effect at all.”||

      Seems to me it does, hence the reason why I am able to type this post.

    1. Thanks for that link, and for your very interesting review, particularly this observation: “Evolution clearly selected for conscious processes – the neural processes associated with consciousness – since those processes support our capacities for learning, memory, anticipation, deliberation and complex and novel behavior, which obviously paid for themselves; but it didn’t obviously select for consciousness per se”. Consciousness just comes along for the ride!

      1. I’ve often thought that it would be so much better (for us!) if the function of pain was achieved without the actual pain – I mean, how about seeing a red light instead? So much more civilised.

        But the actual sensation of pain may simply be the best way matter has found (so far!) to achieve the function that pain serves – withdrawal from a damaging environment, notification of something wrong, and so on. Being unignorable is just more efficient.

        1. I once pondered something like that, namely hot and cold. We feel them in a quite particular way. My phone and my car will tell me the temperature outside, reporting a mere number. Is 52 cold? Sometimes it is accompanied by the report that it feels like 46, due to the wind.

          Initially those numbers were meaningless to me, but by now I have a pretty good idea of what they mean. If I couldn’t feel hot or cold anymore, but could reliably get a numeric readout on the temperature of things, would that suffice?

          You could have a red light for hot things, too. You would still need to learn what a red light meant, though, if it was meant to report pain or heat. How would you learn that?

        2. But the actual sensation of pain may simply be the best way matter has found (so far!) to achieve the function that pain serves – withdrawal from a damaging environment, notification of something wrong, and so on. Being unignorable is just more efficient.

          I just don’t think this holds up.

          First of all, it is easy to make things unignorable. The more solidly, materialistically mechanical a system is, the easier it is to keep it from ignoring something. If you drop a shoe it cannot ignore that. It will fall. Computer chips are designed with things called nonmaskable interrupts. These are low-level interrupts designed to handle damaged or failing systems. I don’t know but I doubt they cause the chip to feel pain.

          The pain you feel on touching a hot object comes about half a second after your reflexes kick in to jerk your hand away. Your brain then inserts the memory of the pain back in time as the reason that you removed your hand. Low-level automatic responses are faster than conscious decisions. Yet it feels like a conscious decision. But it is more like a nonmaskable interrupt. A very fast low-level automatic system.

          Turns out your brain lies to you all the time.

          Physically, how does this ‘matter learning to feel pain’ work anyway? How can I teach a rock to feel pain? How can I cause a camera to experience the color red? How do I implement an EXPERIENCE THIS opcode in a microprocessor?

          1. The pain you feel on touching a hot object comes about half a second after your reflexes kick in to jerk your hand away. Your brain then inserts the memory of the pain back in time as the reason that you removed your hand.

            I hear what you’re saying but I don’t see how this holds up either, although it offers the prospect of a pain-free life, which is what I’m after!

            What is the reflex responding to in your scenario? You appear to accept that the reflex comes before the pain, not in response to it. If it’s just a non-feeling reflex that does the work, why are sufferers of congenital insensitivity to pain covered in injuries? Surely their reflexes would do the job of protecting them from injury? See here, for example https://tinyurl.com/y9um5snq

          2. You are confusing the easy part with the hard part.

            An airplane can detect that it has entered a steep dive and automatically pull out of it faster than any pilot. This is the easy part. It does not need consciousness to do this. It is a simple control loop.

            A brain also has a vast number of control loops. Your breathing tracks your blood CO2 level without you needing to be aware of it. Your heart automatically responds to the oxygen needs of your tissue without you needing to know. You remain balanced on your feet without you needing to know the complex calculations required for this to happen. Your brain does all of this automatically. This is the easy part. (A minimal sketch of such a control loop follows this comment.)

            But how do you experience pain or color? And why is it necessary? If a computer can play chess without experiencing it then why can’t it do all that a human does but do it as a philosophical zombie? And how can you tell if it is conscious or not?

            This is the hard part.
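
            To make ppnl’s “easy part” concrete, here is a minimal sketch of a feedback control loop, a cartoon of breathing rate tracking blood CO2 as described above. The constants and the one-line “physiology” are made up for illustration; the point is only that nothing in the loop needs, or explains, experience.

def simulate_breathing(steps: int = 20, target_co2: float = 40.0) -> None:
    """A proportional controller: breathe faster when CO2 is above the set point."""
    co2 = 55.0      # start above the set point (arbitrary units)
    gain = 0.5      # made-up controller gain

    for t in range(steps):
        error = co2 - target_co2
        rate = max(6.0, 12.0 + gain * error)    # breaths per minute
        co2 += 1.0 - 0.08 * rate                # metabolism adds CO2, breathing removes it
        print(f"t={t:2d}  CO2={co2:5.1f}  rate={rate:4.1f}")

if __name__ == "__main__":
    simulate_breathing()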

          3. Thanks ppnl, but I don’t think this responds to anything that I’ve posted, so, in the circumstances, best to leave it there! Peace!

          4. Well, put simply the answer to your question is a reflex is just a reflex. There is no question that both you and a computer can respond to inputs in a simple and obvious way that need not involve consciousness in any way.

            But you are correct that we will have to leave it at that.

  9. Dr. Coyne clearly does not like panpsychism, but it’s one philosophical approach, and granted it is more metaphysical (like the determinism/free-will debate) than “hard science”, I don’t see how LiveScience has sold out, and I don’t expect articles on crystal gazing to appear.

    From Galileo to Descartes, the assumption is that secondary qualities (color, taste, touch) are not part of nature. Thus, the “mind” was created to hold qualia (formerly secondary qualities of matter), and the mind could not be part of nature because what it held was not natural.

    I think if you are going to reject the existence of a Cartesian mind (which made a lot of sense in the 16th Century, when you still had a Universe populated by God, a sort of Supreme Cartesian mind, as well as a menagerie of angels), perhaps it is time to re-examine the assumption that secondary qualities are not natural. No qualia, no problem of consciousness.

    The downside is that you have to acknowledge natural teleology (which relates secondary qualities to the substance that such qualities emanate from), but so what?

    This does create an inherently dualistic and interpenetrating world, somewhat gooey, more like fields in physics than point particles in the void, but you don’t need to posit epistemological representationalism, nor do you need a Cartesian theater where your homunculus can view its sense data. It would put us back into hard epistemological realism, without all this sense data and qualia mediating between us and the “real world”.

    In sum, banishing teleology and secondary qualities made sense in the 16th Century, but if you want to dump mind/body dualism, and you aren’t going to claim the content of ordinary experience is all an “illusion” (the All-Is-Atman school of atheism), you need to philosophically question the metaphysical assumptions that gave rise to the mind in the first place. [This does create problems for the reductionist project–which stems from Galileo and Descartes–but you aren’t going to have a “representation” without an observer, so if everything is reducible to some kind of ultimate scientific representation, you are going to keep tripping over an unobservable observer who makes sense of that representation–whereupon it blows up in self-reference or “transcends” in a non-reductionist direction.]

      1. Then I concede.

        I don’t care for panpsychism, but I consider it wrong in an important way, in that it forces us to think clearly about what we mean when we talk about consciousness, and what it means to say a rock is conscious.

        [Now in animistic religions, sometimes rocks are treated as possessing consciousness, but those rocks are special ritualistically, and it’s hard to imagine such religions–or children playing with dolls for that matter–if in the first sense you didn’t have conscious people behaving as they do.]

        What do we imagine is the fate of the victims of Medusa, for example? Are they conscious like before but can’t move? Is their consciousness frozen in the state at the time of petrification? If panpsychism is true, are we to presume petrification would not be such a big deal?

    1. I get what you’re saying about secondary qualities, and appreciate it. The point about shuffling them off into a Cartesian “mind” is telling. But I don’t get where “natural teleology” comes in. Is this some new meaning of the word “teleology”? Explain please?

      1. It’s related to the “intentionality” debate in philosophy of mind/language as well.

        Generally, the claim is that a statement like “elephants have trunks” is directed toward a set of animals in the world. It’s not an efficient cause, in the way that striking a match is related to fire. It’s teleological in that it’s directed toward a purpose, talking about animals in the world.

        There is a big debate in philosophical naturalism about whether there is natural teleology or if there is not natural teleology, whether naturalism needs to be scuttled.

        Obviously, my perceptions of a tree are directed toward an object in the world the same way my statement about the tree might be. For the Cartesian, because my perception is “in my mind”, the teleological problem doesn’t matter (and many dualists use intentionality arguments to make the case for dualism). On the other hand, if you say the perception is a natural phenomenon (the color of the tree is a natural property of the tree, not my mind), then you are going to have to propose natural teleology.

        People tend to freak out about teleology, but there is a difference (in my book) between saying something like the purpose of the heart is to pump blood (microteleology), something very different from the “purpose of life, the universe and everything” (macroteleology), a concept which I find dubious.

        My sense is that cooperation in nature has to be understood teleologically (animals cooperate for the purpose of collective defense), and there is no cooperation in the absence of competition (no collective defense is necessary if there is no competitor). So the idea that everything in the whole would cooperate with everything else in the absence of competition doesn’t make a lot of sense.

    2. Galileo does *not* say that secondary qualities are not part of nature. He says rather that they are not the concern of the problems in question.

      No contemporary materialist would agree with this “non-natural” claim either. Dennett would point out we need to get clear on what we are talking about; the Churchlands would eventually go emergentist (and assert that they are absolute properties of external matter); Bunge would hold that they are relational properties between animals and external matter; etc.

      1. Don’t tell me, tell the linguistics and philosophy department at MIT that they are wrong (slides 4, 5, and 6):

        https://ocw.mit.edu/courses/linguistics-and-philosophy/24-03-relativism-reason-and-reality-spring-2005/lecture-notes/l16_galileoetc05.pdf

        As far as contemporary materialists go, of course they wouldn’t say they were non-natural, because that would put them back in Cartesian dualism. They either go eliminativist or they try to concoct a just-so story about how what appear to be categorically distinct phenomena are either reducible to an account based on efficient and material causes, or are some kind of epiphenomenon of them. That’s the story of probably 50+ years of analytic philosophy–and notice it goes back to certain assumptions rooted in Descartes and Galileo, combined with a rejection of dualism.

        1. “Appear”. You have fallen into the mysterian gap.

          The slides in question do not cite responsibly, which is appalling for a good university and a decent department. Yet, even with that I think you are misreading.

          Why? “but powers to produce various sensations in us…by the bulk, figure, texture, and motion of their insensible parts, as colors, sounds, tastes, etc.”

          So the sensations themselves are *tertiary* qualities by this scheme. Boyle and Galileo at least see this; Locke’s treatment (the only one most philosophers read) is more confused.

          Consequently there is no statement here that the colours, sounds, etc. are “non-natural”, just that they are relational properties! (This terminology is of course anachronistic – early 20th century logic had to be invented before people realized this was necessary.)

          Also, this seems to be a freshman-level course – certainly an undergraduate one.

  10. Hooo, boy. Extra doses of numinous here.
    I just wanted to comment I think you nailed it by saying that consciousness is an emergent property. So of course it is automatically ridiculous to suggest that individual cells, let alone atoms or electrons, have small amounts of consciousness. That would be like claiming a water molecule has strong surface tension and high heat capacity. I would flunk him out of freshman science.

    1. Yes, some find it difficult to fathom the concept of an ’emergent property’, yet I fail to see why that is such a difficult principle. In practice, of course, it may lead to difficult problems, but the principle is pretty straightforward.

      1. Emergence does conflict with greedy reductionism. See https://www.wikiwand.com/en/Greedy_reductionism:

        “A departure from strict reductionism in the opposite direction from greedy reductionism is called nonreductive physicalism. Nonreductive physicalists deny that a reductionistic analysis of a conscious system like the human mind is sufficient to explain all of the phenomena which are characteristic of that system. This idea is expressed in some theories that say consciousness is an emergent epiphenomenon that cannot be reduced to physiological properties of neurons. Those nonreductive physicalists, such as Colin McGinn, who claim the true relationship between the physical and the mental may be beyond scientific understanding—and therefore a “mystery”—have been dubbed Mysterians by Owen Flanagan.[5]”

        Some reductionists believe that all complex, high-level phenomena can be reduced to the lowest level phenomena. I think that this is only true if one ignores the power of description. High level objects have properties that are meaningless at the lower level. Water is wet but it is hard to find the wetness at the atomic level.

        This problem is related to the fallacy that any property of the whole is also a property of its parts. A car can go from A to B but which of its parts also has this ability?

        1. “high-level phenomena can be reduced to the lowest level phenomena”
          This has me wondering what is meant by “reduced to”. Certainly if the lowest-level phenomena were something other than what they are, the higher-level phenomenon would likely be different. So, while H2O under the electron microscope does not seem “wet”, if it were CO2 it would evaporate. Perhaps the idea of “reduced to” is being used to mean that the “wetness” of water could hardly be guessed from its molecular structure, but this seems rather trivial. No, I did not read the link. But, I probably will.

          1. It’s a complex, interesting subject. I still haven’t gotten through “Emergence”, Bedau and Humphreys, eds. As I’ve commented here before, I think emergence has bearing on the free will issue as well. We have the kind of free will that exists as a description of human behavior. Determinism exists at a lower level of description and one doesn’t negate or support the other. This is the basis of Christian List’s formal argument for free will and Carroll and Dennett’s less formal arguments.

        2. Don’t get me wrong: in order to fully understand an ’emergent property’, one needs to understand how the ‘reduced parts’ (for lack of a better term) function and interact.

          1. Yes, though “interact” is the key. Furthermore, the interaction may be wholly in the mind of the observer. Optical illusions often make us see apparent motion where there is none. I think free will is like this. It is a description of how people behave, not a property of one’s atoms.

          2. And as Bunge stresses repeatedly, their external environment.

            It was not until my last year as an undergraduate (in philosophy and computing) that the obvious fact that liquids only exist because of ambient external pressure occurred to me.

  11. This means that mind is matter, and that even elementary particles exhibit incredibly basic forms of consciousness. Before you write that off, consider this.

    I’ll accept that an up quark might have 2/3 of the consciousness of an electron, but I am having trouble with accepting that a down quark has -1/3 of the consciousness.
    More importantly, is an anti-electron unconscious?

  12. Goff’s thesis doesn’t really help point to a resolution of the ongoing debates surrounding consciousness (teleporter problem/continuity of identity, Dr. Evil paradox/Sleeping Beauty problem, etc.)

    Not that that means anything necessarily, but it doesn’t seem like that high of a bar to clear merely to do better than the default of “consciousness is some sort of algorithm” in light of the aforementioned known issues with the latter.

    BTW, one stance that seems to be getting neglected is noninteractional dualism–it’s usually regarded as unfalsifiable and vacuous at best and contrived and silly at worst (and the original proponents’ goddidit stance surely has never helped in that regard) but it’s worth looking at if only for completeness’ sake. The simplest formulations of it have p-zombie issues, obviously.

    1. …noninteractional dualism…

      But then how would you explain our ability to discuss consciousness? Doesn’t that ability require an interaction of consciousness with the physical world?

  13. Thank you…for this posting. I’ve wanted to know for quite some time how you would address this ‘explanatory gap,’ namely, how non-conscious matter becomes conscious matter. Now I have a basic understanding of your viewpoint. Yes! What the nature of matter is may simply remain in the background well into the future. Certainly, panpsychism isn’t the answer.

  14. There is growing suspicion that conventional scientific methods will never be able to answer these questions.

    Speaking as a geologist (with a hat-tip towards planetary science and literally astronomical quantities), I have a pretty good conception of what a short time is, what a long time is, and just how far you have to go to get to the nearer shores of “never”.
    Exactly how much time has already been put to this problem is hard to quantify. It’s less than a quarter million years, absolute. If there are 4 people working on the question full-time (a large overestimate), then it’s still less than a million man-years.
    Now, personally, I have no problem with envisaging a million man-years. It’s an R&D project comparable with … oh, progressing from Oldowan to Acheulean stone tools, or developing the first cereal-based agriculture. Does anyone have a number for the man-years of brain juice in the Manhattan Project?
    So … how big a project is “never”? We’ve got on the order of a billion years before the oceans boil; if the average population of the Earth in that time is a billion (wildly optimistic?), then we’ve got roughly 10^18 man-years of brain sweat to put to the problem. (A quick check of that arithmetic follows this comment.) And that’s not even getting us to the near kerb of “never” (a small amount of effort would need to be diverted from the “Big Problem” to the small problem of finding a new star or a new billion stars. “Meh.”)

    By the four balls of Jesus Mary and Joseph, I get fscked off by excessive hyperbole.
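
    The back-of-the-envelope number above checks out, taking the commenter’s own (avowedly rough) inputs at face value:

years_remaining = 1e9       # ~a billion years before the oceans boil, per the comment
average_population = 1e9    # the comment's "wildly optimistic?" average population
print(f"{years_remaining * average_population:.0e} person-years")   # -> 1e+18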

  15. In computing we have the concept of the minimum complexity a machine has to have in order, at least in principle, to be able to perform any known calculation, given enough time and memory resources. It’s usually expressed as its ability to emulate a Universal Turing Machine (UTM). A question I have often brought up with my philosopher friends is whether there is another level of complexity at which a machine would, in principle, be capable of consciousness. I have yet to get a non-evasive answer. I suspect the answer is the same: UTM equivalence, but I can’t justify that answer. I think we can be pretty sure, though, that whatever this level of complexity is, it will not be lower than that of a UTM. So we can be pretty sure that electrons, rocks, etc., can’t exhibit consciousness, just as they can’t calculate the digits of Pi, or whatever.
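
    For readers who haven’t met the model: a Turing machine is just a rule table driving a read/write head over a tape, and a *universal* machine is one whose rule table can interpret an encoding of any other machine. Below is a minimal simulator for the basic (non-universal) model, with a toy “unary successor” machine; the function and rule names are made up for illustration. The contrast with an electron or a rock is that they have no rule-following structure of this kind at all.

def run_tm(rules, tape, state="A", halt="H", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (symbol_to_write, "L" or "R", next_state);
    tape is a dict {position: symbol}; blank cells read as 0.
    """
    head = 0
    for step in range(max_steps):
        if state == halt:
            return tape, step
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

# Toy machine: unary successor. Scan right over a block of 1s, write one more 1
# at the first blank cell, then halt.
SUCC = {
    ("A", 1): (1, "R", "A"),
    ("A", 0): (1, "R", "H"),
}

if __name__ == "__main__":
    tape, steps = run_tm(SUCC, {0: 1, 1: 1, 2: 1})        # unary "3"
    print(sum(tape.values()), "ones on the tape after", steps, "steps")   # -> 4 ones, 4 steps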

    1. Well would your new kind of computer be able to do calculations that a normal UTM can’t do?

      If it can’t then how can you even tell which kind of computer you have? The new kind of UTM is just the old kind of UTM + a consciousness that has no observable effect.

      If it can then you have broken the Church/Turing thesis. You have a hyper-computer. Cool… but how can we know it is conscious again? Just being able to do different kinds of calculations seems insufficient.

      People have been looking for ways to break the Church/Turing thesis for a long time and failing. Quantum computers may break the thesis in the time domain. It seems likely. But the problems that it can solve in principle are the same. And there is no reason to think it has anything to do with consciousness. How could you tell?

      1. This should be your last comment on this post. As I said, do not try to comment on everyone’s comment to the extent that you’re dominating a post, especially when it’s the first time you comment here.

  16. I was interested to see an American use the phrase “I can’t be arsed”, which I always thought of as peculiarly British. An example of the way we somehow manage to sound polite and mildly vulgar at the same time. Another example of “arse” as a verb is “arsing around” (fooling about). Do Americans ever use that one? I doubt anyone would say “I can’t be assed”, just as nobody would say “let’s go kick some arse” (except to sound deliberately incongruous, as in the line in Spinal Tap).

  17. Where does “my consciousness” go when I am under anesthesia? I don’t even remember nothingness. Time isn’t even there. Maybe it travels to the surgeon’s hands…LOL.

      1. Why woo?

        True, we can never have third-person empirical evidence that something like an atom possesses consciousness. But we can never have third-person empirical evidence that other people are conscious either, as consciousness is only experienced in the first-person.

        On the other hand, we know from our own first-hand experience that some matter (our own bodies) has consciousness, but have zero direct evidence that matter without consciousness exists. Therefore, why not apply the principle of parsimony and assume that all matter has a degree of consciousness?

          1. Occam’s razor means that entities should not be multiplied without necessity. We know, through our own bodies, that some matter has consciousness. To assume that there is another kind of matter which does not have consciousness is ‘multiplying entities’.

            If panpsychism can account for observed phenomena, as Goff claims, then I don’t see a need to multiply entities by assuming that there are two types of matter.

          2. We know, through our own bodies, that some matter has flatulence. To assume that there is another kind of matter which does not have flatulence is ‘multiplying entities’. I am describing pangasism.

          3. The kind of matter which has flatulence is the same kind of matter which does not have flatulence. They are of the same kind.

            So pangasism sounds like a lot of hot air to me. Pppfffftt.

          4. We know, through our own bodies, that some matter has consciousness. To assume that there is another kind of matter which does not have consciousness is ‘multiplying entities’.

            We know, through our own bodies, that some matter is red…

        1. Because we know about, e.g., brain damage, we can be fairly sure we need a complex ‘machinery’ for consciousness to function. It is an empirical observation.
          We have, on the other hand, not the slightest indication whatsoever that, say, a water molecule has anything that would even remotely resemble something like consciousness. Hence: woo.

          1. We know that consciousness varies with the state of the brain, but don’t know at what stage of complexity, if any, it peters out.
            All we can do is make reasonable inferences.

            It’s true that we have no idea what the consciousness of a water molecule might be like, but it may still be a reasonable and parsimonious inference that it has some element of consciousness.

        2. Some arrangements of matter have consciousness, so all arrangements might have consciousness? I don’t see the logic at all. Though I think you have hit on the source of the woo. What we don’t understand completely might be anything, so why can’t it be a property of everything? I think parsimony says that’s not a reasonable assumption.

          1. If we know that there is a kind of matter which has consciousness, then the assumption of another kind of matter which does not have consciousness should only be made if it is necessary.
            Whether this additional assumption is necessary is a big question. Like the Nature reviewer, I think it is a question worth exploring.

          2. At the risk of speaking for others, the overwhelming majority of matter configurations is not conscious. Certainly, we feel consciousness requires a brain.

          3. The zoologist Herbert Jennings wrote:

            “if Amoeba were a large animal, so as to come within the everyday experience of human beings, its behaviour would at once call forth the attribution to it of states of pleasure and pain, of hunger, desire, and the like, on precisely the same basis as we attribute these things to the dog.”

            So based on behavioural observations it may be reasonable to infer purposeful action and consciousness down to the level of single cells.

            Going down from cells to molecules, atoms and sub-atomic particles, such an inference from observed behaviour is not warranted, as they all generally act in the same way. However, there may be other arguments for extending the inference down to this level, such as those put forward by Goff.

          4. Going down from cells to molecules, atoms and sub-atomic particles, such an inference from observed behaviour is not warranted, as they all generally act in the same way.

            Are you claiming molecules act generally the same way living cells do?

          5. Purposeful action, sure, but not consciousness. My guess is that you consider reacting to environmental conditions and purposeful action as sufficient conditions for consciousness. I do not. A tree does those things but most would not consider it conscious. Of course, if one weakens the concept of consciousness sufficiently, then everything becomes conscious. It’s certainly a choice one can make but it makes consciousness a useless concept, IMHO.

  18. The author admits he did not follow the links; therefore his dismissal is without investigation, and thus it, in and of itself, is woo.
    To suppose consciousness can evolve from unconscious matter leads to the wooful thinking of religion.
    What is consciousness? It is an evolved state of awareness. If at the very basic level of charge (q+, q−) you have a degree of awareness, that will compel a charge to be attracted or repulsed.
    You cannot build order from random events, and equally you can only build a complex order of things on a simpler order of things.

    1. Give me a break. I’ve read tons of stuff by the author and about panpsychism, including links suggested by commenters, and none of it makes sense. There is no explanation of what kind of consciousness inheres in matter and thus no evidence.

      To say that my dismissal of an unevidenced hypothesis like panpsychism is “woo” just shows how illogical you are. And I tell you brother (or sister), as with Sophisticated Theology, there is never any end of things to read before you are deemed qualified to criticize a hypothesis.

      You are rude and therefore you are gone.

  19. BTW, we *do* know what electrons (for example) are in themselves – that’s precisely what (say) Dirac’s electron theory *says*. (It may say it falsely, to whatever degree, of course, and to that extent does not contain knowledge; but that theory – or better, QED – is largely true: witness its 11 digits of predictive accuracy!)

  20. Thanks for this Jerry. But you haven’t really addressed two central arguments:

    1. Consciousness is unobservable, and hence we can’t straightforwardly test theories of consciousness. The best we can do is map correlations, by asking people what they’re experiencing while we scan their brains. But there are various theories that offer explanations of these correlations: the kind of materialist emergence theory you seem to favour, David Chalmers’ naturalistic dualism, my panpsychism. All of these theories are empirically equivalent, so we can’t distinguish between them with an experiment. We have to turn to other methods of theory choice (i.e. do philosophy).

    2. The big problem with the materialist emergence view is that it has a huge explanatory gap at its core: between the quantitative properties of physical science and the qualitative properties of consciousness. Nobody has ever made any progress on closing this gap.

    You may say, ‘Well look how successful physical science has been; surely this should give us confidence that it’ll one day crack the problem of consciousness.’ But as I argue in detail, this view results from a misunderstanding of the history of science. Yes, physical science has been incredibly successful, but it has been successful precisely because it was designed, by Galileo, to exclude consciousness. It has done very well focusing on the observable, quantitative features of matter, but this gives us no grounding for thinking it will be able to explain unobservable, qualitative properties of subjective experience. You have provided no response to this central argument.

    Moreover, why wait for a theory that might never come, when we already have one that is just as parsimonious as materialism but avoids its explanatory gap?

    1. I disagree with both of your points but won’t go into detail as I’ve mentioned them before.

      a. The correlates of consciousness, as you point out, are observable, and it’s a reasonable assumption that other people, because they have brains like ours, are conscious like we are (and so they tell us). From that we can infer that there are neurological correlates of consciousness, and we’re beginning to find out what they are. One day we will be able to do artificially what we can only infer now. Panpsychism, for example, does not explain why ablating certain parts of the brain, or infusing them with chemicals, takes away consciousness. Why would that happen if consciousness inheres everywhere? So the correlational theory has support, whereas your theory is a hypothesis that has absolutely NO empirical support.

      b. My response to your second point is just the same as to your first. You have to posit panpsychism as a stopgap to explain something we don’t yet understand. But you have no evidence for panpsychism and can adduce none. The history of science is replete with people like you who punt to the numinous or bizarre when science comes up against a hard problem. And you still have an explanatory gap: in what sense does an atom have consciousness?
      No, I can’t solve the hard problem of consciousness, but I’m not going to adhere to some cockamamie theory that appears to appeal only to philosophers and not scientists, just because I don’t understand something.

    2. Yes, physical science has been incredibly successful, but it has been successful precisely because it was designed, by Galileo, to exclude consciousness.

      Modern science isn’t constrained by the designs of Galileo.

      It has done very well focusing on the observable, quantitative features of matter, but this gives us no grounding for thinking it will be able to explain unobservable, qualitative properties of subjective experience.

      I disagree. The fact that we can artificially induce subjective experience (by stimulating brains with electricity) suggests that the qualitative properties of experience are subject to the scientific method. The explanations are being pieced together, however primitive and preliminary they currently are.

  21. Consciousness is about qualities; science is about quantities. In that respect, they’re incompatible. That’s the thesis of Goff’s Galileo’s Error.

      1. I’m stating Goff’s thesis, not endorsing it; although I think he has a point, if overstated. The states of phenomenal consciousness known as “qualia” — the color red, the smell of a rose — appear to be utterly beyond the reach of neuroscience and Galilean science in general, aside from mere correlations.

  22. “Are you claiming molecules act generally the same way living cells do?” (Mike Anderson)

    No, the opposite. Even if one accepted that cells have consciousness, this would not be enough to justify panpsychism. Further arguments (such as Goff’s) would be needed.

    “Purposeful action, sure, but not consciousness.” (Paul Topping re cell behaviour).

    In my view, subjective intention and consciousness are necessary for purposeful action.

    Incidentally, cognitive psychologist Arthur Reber argues that consciousness is a property of cells:
    http://arthurreber.com/academic-vita/

    1. Then I don’t agree with your (unstated) definition of consciousness. Or perhaps you are assuming a lot with “purposeful”. It all seems very circular unless you give more precise definitions. Purposeful movement usually implies conscious thought. How can one have a purpose without thinking about it? A single cell moves as if it had an objective, but I doubt it has any choice in the matter. If you consider some simple chemical process that connects the single cell’s rudimentary perception with its movement as being conscious, then you’ve trimmed the consciousness concept down to the point where it is fairly primitive. Reminds me of the earlier claim that everything is red, just to varying degrees. True but vacuous.

      1. It’s also interesting to note that simple chemicals react with one another. A semipermeable membrane with different solute concentrations on either side can induce movement of solvent across it, and so on. Is the solution conscious? Is this purposeful? Probably not. (A toy simulation of the point is sketched below.)
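
        For what it’s worth, here is a minimal sketch of that point (the two-compartment model, the rate constant and the starting concentrations are all invented for illustration): the flux simply follows the concentration gradient, so the “movement” needs no goals at all.

```python
# Toy two-compartment diffusion model illustrating the comment above:
# material moves down the concentration gradient by simple arithmetic.
# The permeability constant k and the starting values are made up.

def equilibrate(c_left, c_right, k=0.1, steps=50):
    """Step two compartments toward equal concentration; no agency involved."""
    for _ in range(steps):
        flux = k * (c_left - c_right)   # always proportional to the gradient
        c_left -= flux
        c_right += flux
    return c_left, c_right

print(equilibrate(1.0, 0.2))   # both sides end up near 0.6
```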

      2. I am defining consciousness as subjective experience, the same way it is defined in relation to the hard problem of consciousness.

        It does seem crazy at first to think that cells have subjective experience but that, in itself, is no reason to discount the idea (a lot of uncommon ideas initially seem crazy).

        Also, it’s not as if this has not all been said in detail before. The zoologist Wilfred Agar published a book in the 1940s (‘A Contribution to the Theory of the Living Organism’), based on the panpsychism of Alfred North Whitehead, which was duly ignored by those in his profession.

        Nature (that woo journal) said of Agar’s book that biologists would do well to read the book if they ‘admit that they should give heed to the trend of thought in Whitehead’s philosophy of organism, as sooner or later it seems they must’.

        Link to Agar’s book:
        https://archive.org/details/in.ernet.dli.2015.81423
