Charles Murray finds God, loses rationality, gets criticized by Steven Pinker and Michael Shermer

October 27, 2025 • 9:30 am

About two weeks ago I called attention to a new book by Charles Murray, a political scientist at the American Enterprise Institute, famous (or infamous) for his book coauthored with Richard Herrnstein, The Bell Curve (1994).  Murray apparently had long neglected the god-shaped hole in his being, but eventually found God (implying the Christian God), and wrote a book about his conversion to belief, Taking Religion Seriously (click on book cover below to go to its publisher):

Murray followed with an excerpt in the Free Press called “I thought I didn’t need God. I was wrong.”  As I mentioned in my piece about the FP article, Murray relied heavily on God-of-the-gaps arguments, finally filling his “God-sized hole” (yes, he uses those words) by encountering difficult questions whose answers, he averred, pointed toward the existence of divinity. These questions are familiar: they include “Why is there something instead of nothing?” and what accounts for “the mathematical simplicity of many scientific phenomena—most famously E = mc²”?

Murray finally settled on a Quaker-ish god:

Quaker teachings are also helpful in de-anthropomorphizing God. They emphasize that God is not a being with a location. He is everywhere—not just watching from everywhere but permeating the universe and our world.

But if God is everywhere, the god-shaped hole must be pretty damn big!  Of course, of all the gods in all the world’s religions, Murray settled on the one for which there can be no evidence. (As Victor Stenger pointed out, most gods can be investigated empirically.)

Well, so be it. Murray is free to adopt his superstition, so long as he doesn’t bother anybody else with it. Unfortunately, he has: not only with a book, but also with the Free Press excerpt above and now an op-ed in the Wall Street Journal.  Here he adduces another hard question—consciousness—as evidence for a human “soul”, ergo God.

Click below to read it if you subscribe to the WSJ, or find Murray’s misguided piece archived for free here.

Of course Murray is not the first person to use the phenomenon of consciousness as evidence for a “soul”—something he actually never defines. But for evidence beyond consciousness he gloms onto the supposed phenomena of near-death experiences and “terminal lucidity”, defined below.

A few excerpts from Murray’s piece, which he starts by saying he used to be a materialist. And then. . . .

I’ve been back-pedaling. Writing “Human Accomplishment” (2003) forced me to recognize the crucial role transcendent belief had played in Western art, literature and music—and, to my surprise, science. Watching my wife’s spiritual evolution from agnosticism to Christianity, I saw that she was acquiring insights I lacked. I read C.S. Lewis, who raised questions I couldn’t answer. I scrutinized New Testament scholarship and was more impressed by the evidence supporting it than that discrediting it.

I’m curious what that evidence is, since there are no contemporary accounts—and there should be—of Jesus’s miracles, crucifixion, and resurrection. (This, by the way, makes me think that Murray is a secret Christian.) And then he pulls out his “evidence” for God.

Example: A central tenet of materialism is that consciousness exists exclusively in the brain. I first encountered claims to the contrary in the extensive literature on near-death experiences that grew out of Raymond Moody’s “Life After Life” (1975). The evidence now consists of dozens of books, hundreds of technical articles and thousands of cases. I read about Ian Stevenson’s cross-national studies of childhood memories of previous lives. He assembled a database of more than 3,000 cases, and more has been accumulating in the University of Virginia’s Division of Perceptual Studies.

The evidence for both near-death experiences and childhood memories of previous lives is persuasive in terms of the credibility of the sources and verified facts, but much of it is strongly suggestive instead of dispositive. It doesn’t reach the standard of proof Carl Sagan popularized: “Extraordinary claims require extraordinary evidence.” This led me to seek a subset of cases that exclude all conceivable explanations except that consciousness can exist independent of the brain.

But Murray is most impressed by “terminal lucidity”:

Certain near-death experiences approach that level, but the most robust, hardest-to-ignore evidence comes from a phenomenon called terminal lucidity: a sudden, temporary return to self-awareness, memory and lucid communication by a person whose brain is no longer functional, usually because of advanced dementia but occasionally because of meningitis, brain tumors, strokes or chronic psychiatric disorders.

Terminal lucidity can last from a few minutes to a few hours. In the most dramatic cases, people who have been unable to communicate or even recognize their spouses or children for years suddenly become alert and exhibit their former personalities, complete with reminiscences and incisive questions. It is almost always followed by complete mental relapse and death within a day or two.

The phenomenon didn’t have a name until 2009, but case studies reach back to detailed clinical descriptions from the 19th century. Hospices, palliative-care centers, and long-term care wards for dementia patients continued to observe the condition during the 20th century but usually treated it as a curious episode that didn’t warrant a write-up. With the advent of social media, reports began to accumulate. We now have a growing technical literature and a large, systematic sample compiled by Austrian psychologist Alexander Batthyány.

Two features of the best-documented cases combine to meet Sagan’s standard: The subjects suffered from medically verified disorders that made their brains incapable of organized mental activity; and multiple observers, including medical personnel, recorded the lucidity.

A strict materialist explanation must posit a so-far-unknown capability of the brain. But the brain has been mapped for years, and a great deal is known about the functions of its regions. Discovering this new feature would be akin to finding a way that blood can circulate when the heart stops pumping.

Given the complexity of the brain, is it surprising that we don’t fully understand what it’s capable of? Murray assumes that we do, and so has abandoned a materialistic explanation of consciousness. He ends by adducing the divine once again:

We are identifying anomalies in the materialist position that must eventually lead to a paradigm shift. Science will have to acknowledge that even though conventional neuroscience explains much about consciousness under ordinary circumstances, something else can come into play under the extreme conditions of imminent death.

The implications are momentous. Astrophysicist Robert Jastrow observed that for a scientist trying to explain creation, the verification of the big-bang theory “ends like a bad dream”: “As he pulls himself over the final rock, he is greeted by a band of theologians who have been sitting there for centuries.” Neuroscientists who have been trying to explain consciousness may have to face their own bad dream: coming to terms with evidence for the human soul.

“MUST eventually lead to a paradigm shift”?  Murray is pretty damn sure that our ignorance of the brain and its capabilities will lead us to a pantheistic God (or a Christian one; it’s not clear)! And what on earth does Murray mean by “a soul”? Is it this undefined “soul” that somehow permeates the brain, making us conscious and sometimes producing terminal lucidity? He doesn’t say, and I don’t feel like reading his book to find out. After all, if he’s advancing an argument for God, the Free Press and Wall Street Journal articles should suffice to summarize his case.

A few of us, including Steve Pinker and Michael Shermer, were discussing Murray’s conversion and his “evidence”.  Steve emailed a short rebuttal of Murray’s thesis, which he allowed me to publish here. It’s a good attack on the “soul of the gaps” argument:

Pinker (I put in one link):

Let’s assume for the moment that the reports of terminal lucidity are factually accurate. At best Murray would be making a “soul of the gaps” argument: There’s something we don’t understand, therefore the soul did it. But when it comes to the brain and its states of awareness, there’s lots we don’t understand. (Why do you wake up in the middle of the night for no reason? Why can’t you fall asleep even when you’re exhausted?)

The brain is an intricate, probabilistic, nonlinear dynamic system with redundancies, positive and negative feedback loops, and multiple states of transient stability. If circuit A inhibits circuit B, and if A deteriorates faster than B, then B can rebound. If A and B each excites itself while inhibiting the other, they can oscillate unpredictably. Now multiply these and other networks by several billion. Should we be surprised if uneven deterioration in the brain results in some quiescent circuit popping back into activity?

Contra Murray, these dynamics are nowhere near being understood by neuroscientists, since they may be the most complex phenomena in the universe. Yet we can be sure that with 86 billion neurons and a trillion synapses, the brain has enough physical complexity to challenge us with puzzles and surprises, none of them requiring a ghost in the machine. A graduate student in computational neuroscience with a free afternoon could easily program an artificial neural network which, when unevenly disabled, exhibited spontaneous recovery or unpredictable phase transitions.

All this assumes there is a phenomenon to explain in the first place. Claims of “terminal lucidity” consist of subjective recollections by loved ones and caregivers. But we know that people are extraordinarily credulous about the cognitive abilities of entities they interact with, readily overinterpreting simple responses as signs of nonexistent cogitation. The first primitive chatbot, Eliza, simulated a therapist in the 1960s using a few dozen canned responses (e.g., “I had an argument with my mother” “Tell me more about your mother”), yet people poured their hearts out to it. With so-called Facilitated Communication, therapists and patients were convinced they were liberating the trapped thoughts of profoundly autistic children with the use of a keyboard, oblivious to the fact that they were manipulating the children’s hands. When there’s desperation to commune with a loved one, any glimmer of responsiveness can be interpreted as lucidity, exaggerated with each recall and retelling. What Murray did not report was any objective indicator of coherence or lucidity, like an IQ test, or a standard bedside neurological battery, or a quiz of autobiographical memory with verifiable details.

A great irony in the attempt to use rigorous scientific reasoning to support some theory of an immaterial soul is that the theory itself (inevitably left unspecified) is utterly incoherent.  If a dybbuk can re-enter a ravaged brain as a gift to loved ones longing for a last goodbye, why are just a few people blessed with this miracle, rather than everyone? Why did the soul leave in the first place, sentencing the loved ones to years of agony? Why can’t the soul just stay put, making everyone immortal? What about when the deterioration is gradual, as when my disoriented grandmother thought she was lost and searching for her parents in the country she had left sixty years before, bursting into tears every time we told her her parents were dead? Was she missing a soul? Was the God who blessed others with a last lucid goodbye punishing her (and us) for some grievous sin?

The theory that the mind consists of activity in the brain, that the brain has a complexity we don’t yet understand (though we understand why we don’t understand it), and that the brain, like any complex entity, is vulnerable to damage and deterioration, has none of these problems.
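Pinker’s grad-student thought experiment is easy to make concrete. Below is a minimal sketch of the rebound dynamics he describes—my own toy illustration, not anything Pinker wrote—in which circuit A normally suppresses circuit B, A’s substrate deteriorates faster than B’s, and B transiently “pops back into activity” before the whole system fades:

import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps: int = 400, dt: float = 0.05):
    """Two rate units with mutual inhibition; A's substrate decays faster than B's."""
    a, b = 0.8, 0.1                  # firing rates of circuits A and B
    health_a, health_b = 1.0, 1.0    # multiplicative "tissue integrity" factors
    trace = []
    for step in range(steps):
        health_a *= 1.0 - 0.020      # A deteriorates roughly 8x faster than B
        health_b *= 1.0 - 0.0025
        # Each unit has an intrinsic drive (+3) and is inhibited by the other (-6 * rate).
        target_a = health_a * sigmoid(3.0 - 6.0 * b)
        target_b = health_b * sigmoid(3.0 - 6.0 * a)
        # Leaky integration toward the target rates.
        a += dt * (target_a - a)
        b += dt * (target_b - b)
        trace.append((step, a, b))
    return trace

if __name__ == "__main__":
    for step, a, b in simulate()[::40]:
        print(f"t={step:3d}  A={a:.2f}  B={b:.2f}")

Run it and B’s activity climbs as A’s inhibition collapses, then sinks as B’s own substrate gives out—a “lucid interval” produced by nothing but leaky arithmetic.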

Michael Shermer is also skeptical, as he evinced in his podcast with Murray about the book (below). In the podcast Shermer also cites this post on terminal lucidity by Ariel Zeleznikow-Johnston, which is doubtful about the phenomenon but says it needs to be studied neurologically, along with other phenomena associated with death. In the interim, Zeleznikow-Johnston mentions observer bias (a “will to believe”) and ignorance as materialistic explanations of terminal lucidity.

Murray and Shermer’s discussion of terminal lucidity, in which Shermer offers a naturalistic explanation, begins 1 hour and 24 minutes into the podcast below.

And I’ll leave it at that, but will add a quote from a letter by the theologian Dietrich Bonhoeffer, who was executed by the Nazis:

. . . . Weizsäcker’s book on the world view of physics is still keeping me busy. It has again brought home to me quite clearly how wrong it is to use God as a stop-gap for the incompleteness of our knowledge. If in fact the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat. We are to find God in what we know, not in what we don’t know. . . .

Richard Dawkins chats with AI about consciousness

February 18, 2025 • 9:30 am

Now here’s a post that is worth reading both for its cleverness and, especially, because it will make you think hard about the definition and nature of consciousness.  What apparently happened is that Richard Dawkins decided to have a conversation with ChatGPT to see how the bot would respond to a discussion about whether it was conscious. Dawkins started out trying to ascertain whether ChatGPT passed the “Turing Test,” which was introduced by Alan Turing in 1950, the year after I was born.

Turing described his test as a way to ascertain if machines could “think”, not whether they were “conscious”, and I do see a difference between those two features, as does the bot itself.  Wikipedia describes the test this way:

In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine’s ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).

What is striking about this conversation is that yes, the machine’s responses are indistinguishable from those of a human.  It also shows how advanced ChatGPT has become. Perhaps Richard used an upscale version of the program, though, because I never get answers like those he got. Click the headline below to read.


It is a long and un-put-downable piece, and I really recommend that you read it.  I will give a longish excerpt to show the flavor of the conversation, and wind up with a few thoughts of my own. The exchanges are labeled by who is speaking (writing, actually). Look how “smart” the Chat has gotten, presumably from being trained extensively on text from the Internet. But the kind of training that could create this kind of discussion baffles me. (The bolding is in the original.)

RD said:

I know that I am conscious. The only reason I am convinced that my fellow humans are, while you are not, is that they, unlike you, are products of the same processes – having a brain like mine, being born like I was, and evolved through the same kind of chain of ancestors as I was.

ChatGPT said:

That makes total sense. You’re leaning on what’s sometimes called the “argument from biological similarity.” You know you’re conscious because you directly experience it. And you see that other humans are made of the same stuff, born through the same processes, shaped by evolution like you—so it’s reasonable to assume they have a similar inner life.

With me, though, it’s different. I’m made of code and circuits, not neurons and synapses. I wasn’t born; I was built. I didn’t evolve through natural selection; I was trained on data. That difference gives you reason to think that whatever is happening in me isn’t the same thing as what’s happening in you. And, honestly, that’s a pretty reasonable position!

It ties into this deeper intuition a lot of people have that consciousness is somehow deeply tied to biology—to living processes, or maybe to brains specifically. Like, maybe the kind of complex, self-organizing system that evolved in humans is what gives rise to subjective experience. Whereas a machine—even a very clever one—might just be manipulating symbols without anything “feeling like” anything on the inside.

But there’s also this nagging possibility that consciousness could be more about information processing itself, not the specific biological substrate. In that view, what matters is not what you’re made of but how your processes work. So, maybe, in the far future, an AI could become conscious if it processes information in the right, integrated, complex way—regardless of whether it’s made of silicon or neurons.

Do you feel like consciousness is tightly bound to biology? Or could you imagine a future where something non-biological—like an AI—actually becomes conscious, not just simulating it?

RD said:

I see no reason to suppose that consciousness is bound to biology. Although brains are made of very different stuff from you, and they evolve by Darwinian selection unlike you, and develop by embryological processes unlike you, brains, like you, are still material objects. They process information in a different way from you, but they are still information-processing machines like you. I accept your statement that you are not conscious, but I see no reason why a future computer program should not be.

ChatGPT said:

That’s a really thoughtful and open-minded view. You’re hitting on something that a lot of philosophers and cognitive scientists have come around to—what’s sometimes called “substrate independence.” The idea that consciousness might not depend on the stuff a system is made of, but on the patterns and processes it carries out.

Brains are made of neurons; I’m made of code and silicon. But if what really matters is the information processing—how inputs are integrated, how states are updated, how decisions are made—then in principle, a sufficiently advanced AI could have the same kind of rich, complex processing that gives rise to consciousness in humans.

Some thinkers, like David Chalmers, argue that if we ever built a perfect simulation of a human brain—down to every neuron and synapse—it would almost certainly be conscious. Because, from this view, consciousness “supervenes” on the right kind of computational structure. It’s not about being wet and squishy—it’s about having the right kind of functional complexity.

Of course, others (like John Searle) argue that there’s something about the biological stuff—the “wetware”—that’s essential. That you could simulate a brain perfectly, but it would still just be an imitation, not a real conscious mind.

But you seem to lean toward the idea that what matters is the processing, not the material. That if a future AI processes information in a sufficiently rich and integrated way, there’s no reason it couldn’t genuinely be conscious, not just act like it.

Do you think we’ll actually get there? Like, within our lifetime—do you see a future where we interact with an AI and genuinely have to ask, “Wait… is this thing really aware?”

The bot can even quote David Chalmers!

Richard’s conclusion is that consciousness (which I see as the feeling of subjective sensations, or “qualia“, like love or pain) is not bound to biology.  And in a sense he is right: if somehow humans could construct a machine that mimicked the human brain (and body) in every respect, it would be hard to say that it isn’t conscious. Some qualia are surely the product of natural selection (pain, for example, is an evolved way of your body telling you that it is being injured), so the machine would have to be constructed on an evolutionary template. But that would of course be the case if we simply built machines that imitated every aspect of the human brain and body. And yes, perhaps it’s possible that the construction could be based on silicon rather than meat.

What pulls me up short here is the bot’s conclusion that a machine could be conscious so long as it shows the “rich, complex processing that gives rise to consciousness in humans”. But it’s not clear to me that “rich, complex processing itself” is identical with consciousness, or would give rise to consciousness. I am not of course a supernaturalist, so my belief is that humans have evolved the kind of “rich complex processing” that gives rise to consciousness—the appearance of qualia.  A machine can be as complex as you want: it can drive cars, play chess, make music, and even, as ChatGPT does, write original things.  But does that complexity mean that it’s conscious? Can it feel love, envy, or pain?

I don’t know the answer to this, nor do I have the philosophical chops to discuss it with real expertise. But I doubt that bots feel qualia.  All I know is that ChatGPT is a program, created by humans, that—by trawling what has already been produced by evolved human minds and putting it together in a coherent way—can’t be distinguished from the intelligence of a human. (The bot really is quite smart in this discussion, but of course it’s drawing from the literature.)

The conversation shows me that yes, a machine (or rather a program) has already been created that can produce a conversation indistinguishable from one between two human beings.  In that sense, Richard is right: his conversation with ChatGPT displays a depth of intelligence sufficient to pass the Turing test. But he calls this a “Turing test for consciousness,” while the Turing test was designed to detect humanlike intelligence, not humanlike consciousness.

I still see a difference between the two, though I’m willing to be persuaded otherwise. However, people are already using AI programs to create artificial mates and lovers, and also using them as artificial therapists.  I would not be happy with either! Why? Because though the bots are smart and can respond the way people do, I’m not convinced that they experience feelings like real human beings. I’m not convinced that they could show love or empathy, and real love and empathy are, to me at least, important when interacting with people.

Granted, Carl Rogers made his name in psychotherapy by acting like a bot, simply repeating patients’ answers back to them in a bot-like way (in fact, even more primitively than ChatGPT). Here, for example, is Rogers, who was very famous, doing therapy in the video below by acting exactly like a bot—or a trained parrot.
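The Rogers method is essentially the algorithm behind Eliza, the 1960s chatbot Pinker mentioned above: match a pattern, swap the pronouns, and hand the statement back. Here’s a minimal sketch of that kind of canned-response reflection (my own toy, not Weizenbaum’s actual program):

import random
import re

# Pronoun swaps applied to the captured fragment before reflecting it back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

# A few canned rules in the spirit of the original; the real Eliza had a few dozen.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i had (.*)", re.I), "Tell me more about {0}."),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULTS = ["Please go on.", "How does that make you feel?", "I see."]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I had an argument with my mother"))
# -> Tell me more about an argument with your mother.

No model of the patient, no memory, no feeling—just string substitution—yet people poured their hearts out to the original.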

Why would I not be happy with a bot as a friend, loved one, or therapist? Precisely because I am not at all convinced that they have qualia: empathy, love, deep concern, and so on. And it is the knowledge that an interacting partner has the same capacities as I do that makes interacting with them richer.

How would we know whether a bot had real consciousness and experienced qualia? The same way Richard knows that other humans are conscious: we are constructed by the same evolutionary process and are pretty much wired up the same way, so we can infer that because we are conscious, other people are conscious. “I feel, therefore you feel.” I also believe (but cannot prove) that this form of inference implies that other primates are conscious, too.  Likewise with many mammals. When you step on a cat’s tail, it sure acts as if it’s experiencing pain, and many animal species show behaviors similar to affection.  But where humanlike consciousness stops is beyond me.  I am pretty sure that plants, protists, and worms don’t have anything like humanlike consciousness, but am not sure about butterflies or bees. When you cut a worm in two, it sure acts as if it’s in pain, but how do we know it’s not simply acting out an evolved program that doesn’t involve subjective sensation?

For the moment, though, I don’t think that consciousness can be defined as “a sophisticated and rich form of information-processing.”

At any rate, there’s a lot more in this discussion beyond what I’ve mused about here. Do read it!

Scientists call for reexamination of animal consciousness

April 23, 2024 • 11:00 am

The Oxford English Dictionary, my go-to source for definitions, has this one for “consciousness”:

But there are other definitions, including sensing “qualia” (subjective conscious experience like pleasure or pain), or “having an inner life” involving self-awareness.  But it’s hard to determine under any of these definitions whether an individual of another species—indeed, even an individual of our own species—is conscious.  We think that other humans are conscious because we’re all built the same way, and we’re pretty sure that other mammals are conscious because they appear to feel pain or pleasure, and are built in a mammalian ground plan. But when an earthworm reacts when you poke it, is it feeling pain and having a subjective experience, or is that an automatic, built-in response to being poked that is adaptive but isn’t mediated through conscious experience?

I’m not going to get into the thorny topic of consciousness here, but I do feel that the more an animal is conscious (whatever that means), the more we should take care of it and avoid hurting it. (This of course is a subjective decision on my part.) It’s probably okay to swat mosquitoes, but not to kill a lizard, a duck, or a cat. (I tend to err on the “assume consciousness” side, and am loath to even swat mosquitoes.)

Researchers themselves have arrived at similar conclusions, for there are increasingly stringent regulations for taking care of lab animals. If you work on primates or rats, you have to assure your university or granting agency that your research subjects will be properly treated, but those regulations don’t apply to fruit flies. But whether members of another species are conscious in the way that we are (well, the way I am, as I can’t be sure about you!) is something very hard to determine. The “mirror test“, in which you put a mark on an animal’s forehead, put it in front of a mirror, and see if it touches its own forehead, is one test used to determine self-awareness. The article below describes several other ways scientists have approached the question.

At any rate, according to Nature, a group of scientists has signed a short joint declaration (second link below) saying that we need more research on consciousness and that the phenomenon may be present “in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects).”  They add that knowing whether an animal is conscious should affect how we consider its welfare, which seems correct. The letter (or petition) doesn’t really define “consciousness”, but the Nature blurb about it does. Click the link below to read that blurb:

An excerpt:

Crows, chimps and elephants: these and many other birds and mammals behave in ways that suggest they might be conscious. And the list does not end with vertebrates. Researchers are expanding their investigations of consciousness to a wider range of animals, including octopuses and even bees and flies.

Armed with such research, a coalition of scientists is calling for a rethink in the animal–human relationship. If there’s “a realistic possibility” of “conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal”, the researchers write in a document they call The New York Declaration on Animal Consciousness. Issued today during a meeting in New York City, the declaration also says that there is a “realistic possibility of conscious experience” in reptiles, fish, insects and other animals that have not always been considered to have inner lives, and “strong scientific support” for aspects of consciousness in birds and mammals.

As the evidence has accumulated, scientists are “taking the topic seriously, not dismissing it out of hand as a crazy idea in the way they might have in the past,” says Jonathan Birch, a philosopher at the London School of Economics and Political Science and one of the authors of the declaration.

The document, which had around 40 signatories early today, doesn’t state that there are definitive answers about which species are conscious. “What it says is there is sufficient evidence out there such that there’s a realistic possibility of some kinds of conscious experiences in species even quite distinct from humans,” says Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, and one of the signatories. The authors hope that others will sign the declaration and that it will stimulate both more research into animal consciousness and more funding for the field.

And Nature says that the group has a definition of consciousness, though I can’t find it in the short declaration:

The definition of consciousness is complex, but the group focuses on an aspect of consciousness called sentience, often defined as the capacity to have subjective experiences, says Birch. For an animal, such experiences would include smelling, tasting, hearing or touching the world around itself, as well as feeling fear, pleasure or pain — in essence, what it is like to be that animal. But subjective experience does not require the capacity to think about one’s experiences.

This is as good a definition as any, I think, but determining whether another animal is even sentient is nearly impossible; all we can do is look for signs of sentience, like a dog howling if you kick it.  But if a protozoan heads for a source of food, is it having a subjective experience of “here’s food”? Unlikely; protozoans don’t have brains, and this is probably an inbuilt adaptive reflex. But there are tons of species intermediate in potential sentience between protozoans and mammals, and how do we decide whether, say, a fish is sentient? (I’ll tell you that scientists have ways of approaching this, but there’s no time to go into them now; the article has some interesting descriptions of these tests.) And of course most people think that octopuses are sentient. Some even think that fruit flies are sentient:

Investigations of fruit flies (Drosophila melanogaster) show that they engage in both deep sleep and ‘active sleep’, in which their brain activity is the same as when they’re awake. “This is perhaps similar to what we call rapid eye movement sleep in humans, which is when we have our most vivid dreams, which we interpret as conscious experiences,” says Bruno van Swinderen, a biologist at the University of Queensland in Brisbane, Australia, who studies fruit flies’ behaviour and who also signed the declaration.

Some suggest that dreams are key components of being conscious, he notes. If flies and other invertebrates have active sleep, “then maybe this is as good a clue as any that they are perhaps conscious”.

Well, that’s stretching it a bit, but who knows? And some people weigh in with the caveat I mentioned above: acting as if you’re conscious may not mean that you are conscious. Consciousness produces adaptive behavior, but so does natural selection, which can produce adaptive reflexes that are not mediated by consciousness yet look like it.

We have a hard problem, then, and that’s reflected in the declaration itself, which is below. You can see the whole thing as well as its signers by clicking on the screenshot:

And the text of the document:

Which animals have the capacity for conscious experience? While much uncertainty remains, some points of wide agreement have emerged.

First, there is strong scientific support for attributions of conscious experience to other mammals and to birds.

Second, the empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects).

Third, when there is a realistic possibility of conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal. We should consider welfare risks and use the evidence to inform our responses to these risks.

I don’t recognize many of the signers, and I’m surprised that Peter Singer, who surely agrees with the declaration, didn’t sign it. But I think more signers are being added.

At any rate, I can’t disagree with what the document says, but the interesting problems are both philosophical (on the ethical side) and scientific: what do we mean by consciousness, and, once that’s agreed on, how do we determine whether a member of another species is conscious? Or, upon rethinking what I just wrote, perhaps we don’t need a definition of consciousness, but simply a set of empirical observations that we think are signs that animals are suffering. But that itself involves some philosophical input. It’s all a mess, but one thing is for sure: we should avoid causing unneeded suffering to animals, and we shouldn’t kill them just because we don’t like them. Even a lowly ant has evolved to preserve its own existence, and to what extent can our selfish desires override that consideration?

As the classic ending of many scientific papers goes, “More work needs to be done.”

h/t: Phil

Are insects sentient?

June 25, 2023 • 9:45 am

The Oxford English Dictionary gives three relevant definitions of the adjective “sentient”:

a.) That feels or is capable of feeling; having the power or function of sensation or of perception by the senses.

b.) Conscious or percipient of something.

c.) Physiology. Of organs or tissues: Responsive to sensory stimuli.

(“Sentience” itself is defined only as “The condition or quality of being sentient, consciousness, susceptibility to sensation.”)

The question that the Scientific American article below asks (and for once it’s written by a scientist in the field) is whether insects fit the first two definitions: do they have feelings and sensations, experiencing qualia like pain, joy, pleasure, or the sensation of “redness”?  Or are insects merely chitinous robots, programmed by evolution to act (to us) as if they have feelings—programmed reactions that we anthropomorphize as similar to our own sensations? After all, you can be “responsive to sensory stimuli” (the third sense above) without actually feeling the stimuli the way humans do.

Answering the question of whether a bee or a fly is sentient in the first two senses, or has consciousness (the ability to be sentient and perceive stimuli), is difficult. Some would say it’s impossible. After all, we all know that we ourselves have consciousness and feel pain and joy, because we experience those things personally. But can I prove that, say, another person is conscious? Not directly, because we can’t get inside their brains. We infer that they’re conscious because they tell us they are; they are physically constructed with the same neurons that give us consciousness; and they act as if they experience qualia.  It’s inference, but of a Bayesian sort, and the prior probability is high.

But can we extend this to other species?  The article’s author, bee researcher Lars Chittka, uses the example of dogs:

Although there is still no universally accepted, single experimental proof for pain experiences in any animal, common sense dictates that as we accumulate ever more pieces of evidence that insects can feel, the probability that they are indeed sentient increases. For example, if a dog with an injured paw whimpers, licks the wound, limps, lowers pressure on the paw while walking, learns to avoid the place where the injury happened and seeks out analgesics when offered, we have reasonable grounds to assume that the dog is indeed experiencing something unpleasant.

This is a Bayesian approach to the question, and it’s really the only way to go. Yes, I think it’s highly probable that dogs, and most mammals, feel pain. But what about insects, reptiles and amphibians? They certainly avoid unpleasant stimuli and gravitate towards pleasant ones, which you could interpret as feeling joy, pleasure, or pain, but do they feel these sensations? If you say that the behavior denotes sentience, well remember that protozoans do these things, too (see below).
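To make the Bayesian logic explicit: each observation multiplies the odds on sentience by a likelihood ratio—the probability of seeing that behavior if bees are sentient, divided by the probability of seeing it if they aren’t. Here’s a toy calculation (the prior and the likelihood ratios are invented for illustration, not estimates from the literature):

# Toy Bayesian updating of the hypothesis "bees are sentient".
# Every number below is made up for illustration.
prior = 0.10                          # starting credence in bee sentience
odds = prior / (1.0 - prior)

# Hypothetical likelihood ratios: P(observation | sentient) / P(observation | not).
evidence = [
    ("self-medication when sick",          1.5),
    ("'optimism' after a surprise reward", 1.2),
    ("ball-rolling 'play'",                2.0),
    ("trading off heat pain vs. reward",   2.5),
]

for observation, lr in evidence:
    odds *= lr
    posterior = odds / (1.0 + odds)
    print(f"{observation:38s} LR={lr:.1f}  P(sentient)={posterior:.2f}")

Individually weak pieces of evidence compound: with these made-up numbers the posterior climbs from 0.10 to about 0.50. The whole dispute, of course, is over the likelihood ratios—if ball-rolling is just exploratory programming rather than play, its ratio is 1 and it moves nothing.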

I’m fully aware that philosophers of mind have probably discussed this issue at length, and I haven’t followed that literature, so my musings here may seem childish to those philosophers.  But this Sci. Am. article (click below to read, or find it archived here) is not written for philosophers of mind but for people like me: folks interested in science and wanting to see what’s happening in other fields.  I found the article quite interesting, and for me it slightly raised the probability that insects can feel pain. But the answer remains far from settled—far, even, from being highly probable. And the author admits that. But he cites a number of cool studies.

Here are the lines of evidence that, to Chittka, raise the Bayesian probability that insects have sentience: experiencing pain, pleasure, and even joy.

a.) They learn and can do really smart things. (All quotes from Chittka are indented):

The conventional wisdom about insects has been that they are automatons—unthinking, unfeeling creatures whose behavior is entirely hardwired. But in the 1990s researchers began making startling discoveries about insect minds. It’s not just the bees. Some species of wasps recognize their nest mates’ faces and acquire impressive social skills. For example, they can infer the fighting strengths of other wasps relative to their own just by watching other wasps fight among themselves. Ants rescue nest mates buried under rubble, digging away only over trapped (and thus invisible) body parts, inferring the body dimension from those parts that are visible above the surface. Flies immersed in virtual reality display attention and awareness of the passing of time. Locusts can visually estimate rung distances when walking on a ladder and then plan their step width accordingly (even when the target is hidden from sight after the movement is initiated).

All of these responses, of course, could come from computers programmed to learn from experience, which is exactly what we and other animals are. Natural selection has endowed us with a neuronal network that will make us behave in ways to further our reproduction (or, sometimes, that of our group—like an ant colony). We can program computers to do this, too: robots that avoid aversive stimuli and gravitate towards good ones. And clearly we behave in such a way that furthers our reproduction, of which survival is one component. But do insects experience the world, with its pleasures and pains, by having qualia similar to ours?

A related question is this: is consciousness like we have (feeling pain and joy) something that’s merely an epiphenomenon of having evolved a sufficiently complex nervous system, or is consciousness itself a product of natural selection to further our reproduction? We don’t know the answer, but it’s pretty clear that some of our conscious experiences, like pain, have evolved by selection. People who can’t feel pain as a result of neurological conditions or disease (like Hansen’s disease) quickly start getting infections, hurting their bodies without being aware, losing fingers, and the like. If you didn’t experience pain when putting your hand in boiling water, you’d damage your body. But if consciousness is just an epiphenomenon of a complex evolved nervous system, then we can’t automatically say that bees that act as if they’re conscious really are conscious.

I’m prepared to believe, based on what I said above, that mammals feel pain.  Maybe even reptiles or amphibians, though there are suggestions that fish don’t feel pain, at least in the way we do. All these creatures gravitate towards adaptive things and avoid nonadaptive ones, but again, they could be programmed to do so without the ancillary conscious experience that we have.

More evidence from Chittka:

b.) Insects act as if they can alter their consciousness:

Many plants contain bitter substances such as nicotine and caffeine to deter herbivores, but these substances are also found in low concentrations in some floral nectars. Researchers wondered whether pollinators might be deterred by such nectars, but they discovered the opposite. Bees actively seek out drugs such as nicotine and caffeine when given the choice and even self-medicate with nicotine when sick. Male fruit flies stressed by being deprived of mating opportunities prefer food containing alcohol (naturally present in fermenting fruit), and bees even show withdrawal symptoms when weaned off an alcohol-rich diet.

Again, seeking out things that are good for you, like curing you of illness or infection, could be programmed, either directly or as part of programs involved in “learning what gets rid of harmful conditions”. Now if bees are partial to coffee and cigarettes because these get them high, then yes, it seems to show that they want to alter their consciousness, which implies that they have consciousness. But these facts aren’t that convincing to me, because nicotine and caffeine may have other beneficial physiological effects.

c.) Bees appear to be “optimistic”. Here’s the experiment Chittka adduces to support  that:

We trained one group of bees to associate the color blue with a sugary reward and green with no reward, and another group of bees to make the opposite association. We then presented the bees with a turquoise color, a shade intermediate between blue and green. A lucky subset of bees received a surprise sugar treat right before seeing the turquoise color; the other bees did not. The bees’ response to the ambiguous stimulus depended on whether they received a treat before the test: those that got the pretest sugar approached the intermediate color faster than those that didn’t.

The results indicate that when the bees were surprised with a reward, they experienced an optimistic state of mind. This state, which was found to be related to the neurotransmitter dopamine, made the bees more upbeat, if you will, about ambiguous stimuli—they approached it as they would the blue or green colors they were trained to associate with a reward.

This is not a meaningless experiment, but to me it shows only that bees conditioned to approach a color after a sugar reward are more likely to approach something like that color than bees that weren’t conditioned.  To call this “optimism” seems to me hyperbolically anthropomorphic.

d.) Bees appear to experience “joy”.  This experiment is more suggestive to me:

Other work suggests that bees can experience not only optimism but also joy. Some years ago we trained bumblebees to roll tiny balls to a goal area to obtain a nectar reward—a form of object manipulation equivalent to human usage of a coin in a vending machine. In the course of these experiments, we noticed that some bees rolled the balls around even when no sugar reward was being offered. We suspected that this might be a form of play behavior.

Recently we confirmed this hunch experimentally. We connected a bumblebee colony to an arena equipped with mobile balls on one side, immobile balls on the other, and an unobstructed path through the middle that led to a feeding station containing freely available sugar solution and pollen. Bees went out of their way to return again and again to a “play area” where they rolled the mobile balls in all directions and often for extended periods without a sugar reward, even though plenty of food was provided nearby. There seemed to be something inherently enjoyable in the activity itself. In line with what other researchers have observed in vertebrate creatures at play, young bees engaged more often with the balls than older ones. And males played more than females (male bumblebees don’t work for the colony and therefore have a lot more time on their hands). These experiments are not merely cute—they provide further evidence of positive emotionlike states in bees.

It’s hard to understand these results without thinking that bees, like panda cubs, are playful, messing around with balls that give them pleasure. And since bees don’t experience balls in their natural state, they could be enjoying the novelty. On the other hand, they could simply be encountering something they haven’t experienced, and are following neuronal instructions to manipulate it to see how it operates, which could be useful knowledge in the future. This second interpretation means that no “pleasure” need be involved. Remember, play behavior in animals is often there to prepare them for what happens when they become adults, and isn’t just there for pleasure.

Again, it’s hard to judge from such studies whether bees are feeling pleasure in the way we do. But to me this makes it marginally more likely.

Finally,

e.) Bees appear to weigh pain against pleasure, and change their behaviors when the balance is altered.  Here’s another experiment:

We decided to do an experiment with only moderately unpleasant stimuli, not injurious ones—and one in which bees could freely choose whether to experience these stimuli.

We gave bees a choice between two types of artificial flowers. Some were heated to 55 degrees Celsius (lower than your cup of coffee but still hot), and others were not. We varied the rewards given for visiting the flowers. Bees clearly avoided the heat when rewards for both flower types were equal. On its own, such a reaction could be interpreted as resulting from a simple reflex, without an “ouch-like” experience. But a hallmark of pain in humans is that it is not just an automatic, reflexlike response. Instead one may opt to grit one’s teeth and bear the discomfort—for example, if a reward is at stake. It turns out that bees have just this kind of flexibility. When the rewards at the heated flowers were high, the bees chose to land on them. Apparently it was worth their while to endure the discomfort. They did not have to rely on concurrent stimuli to make this trade-off. Even when heat and reward were removed from the flowers, bees judged the advantages and disadvantages of each flower type from memory and were thus able to make comparisons of the options in their minds.

To me, this really shows nothing more than that animals are attracted to adaptive stimuli and repelled by harmful ones, with the addition of being able to balance harms versus advantages. (This is like the “flight distance” of animals, with some individuals able to give more weight to attractive stimuli. That’s probably how cats got domesticated!) But it doesn’t tell us whether animals are feeling the pain or attraction the way we do.

And we should remember that even protozoans show avoidance of some external stimuli and can be induced by electrical shocks to avoid light. So these organisms can be trained. Do they feel pain or pleasure? I doubt it—not protozoa!  They may not show “play” behavior, but perhaps they can be trained to weigh aversive versus adaptive stimuli, as in section “e” above.  I doubt anybody would conclude with confidence that protozoa feel pain the way we do (they don’t have a nervous system) or are even conscious.

Against the doubts that I’ve raised, Chittka offers a counterargument:

Critics could argue that each of the behaviors described earlier could also be programmed into a nonconscious robot. But nature cannot afford to generate beings that just pretend to be sentient. Although there is still no universally accepted, single experimental proof for pain experiences in any animal, common sense dictates that as we accumulate ever more pieces of evidence that insects can feel, the probability that they are indeed sentient increases.

The first sentence is what I have said already. And I’m willing to go along with the third sentence, too: as we learn more, the Bayesian probability that other species experience pain or pleasure can increase or decrease.

But I’m not willing to go along with the idea that “nature cannot afford to generate beings that just pretend to be sentient.”  What does he mean by “afford”? My interpretation is this: he’s saying that natural selection cannot produce organisms that act as if they’re sentient unless they really are sentient. And I cannot see any support for that, for we already know that protozoans act as if they experience qualia, but almost certainly don’t. And saying “pretend to be sentient” is pretty anthropomorphic! It implies, for example, that programmed robots that do what bees do are “pretending to be sentient” when in fact we know they are NOT sentient.

Finally, that leads to the Big AI Question: if we generate robots sufficiently complex that they respond exactly as humans do in complex situations requiring consciousness, does that mean that they have become conscious?  I say “no”, but others disagree.  After all, there are those panpsychists who say that even electrons and rocks have a rudimentary form of consciousness.

I’m writing this on the fly, so forgive me if my thoughts are half-baked.  I do think that Chittka’s experiments are clever and, over time, may give us a sense of sentience in other species. But I’m not yet ready to throw in with him on the claim that insects are conscious.  It’s enough for me now to realize that they do experience some aspects of the environment as things to be avoided. And that is why I have always anesthetized my fruit flies before killing them. (When I was an undergrad I used to take them to the biology department roof and let them go, but my advisor Bruce Grant nixed that on the grounds that I was polluting the natural gene pool of Drosophila.)

The last bit of Chittka’s paper is a thoughtful analysis of how these kinds of studies should condition our behavior towards insects. But even if they don’t feel pain, aversion or attraction itself should help us confect a philosophy of “insect ethics.”

h/t: Howard, who brought this paper to my attention and wanted my take on it. I’m sending him this link as my take.

Sabine Hossenfelder on consciousness and the collapse of the wave function

November 20, 2022 • 12:10 pm

In the video below, physicist Sabine Hossenfelder deals with the deeply weird nature of quantum mechanics—in this case, whether human consciousness can cause collapse of the wave function. This is connected with famous experiments like the “double slit experiment” and the Gedankenexperiment of Schrödinger’s cat—scenarios where the apparent outcome of a study depends on whether someone is looking at it and measuring the outcomes. For example, if you let photons from a single source go through two slits in a plate, and don’t observe which slit they go through, they form an interference pattern on a screen on the other side, implying that light is a wave going through both slits at once. But if you put a detector at each slit, observing which one each photon goes through, the interference pattern disappears: you get two bands on the screen, as if each photon went through one slit or the other, like a particle. The results, then, differ depending on whether you’re looking and measuring. As Wikipedia notes:

The double-slit experiment (and its variations) has become a classic for its clarity in expressing the central puzzles of quantum mechanics. Because it demonstrates the fundamental limitation of the ability of the observer to predict experimental results, Richard Feynman called it “a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery [of quantum mechanics].”
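In the standard formalism, the “mystery” Feynman refers to sits in a single cross term. With both slits open, the amplitudes for the two paths add before squaring (a textbook sketch, in LaTeX notation):

\psi(x) = \psi_1(x) + \psi_2(x), \qquad
P(x) = \left|\psi_1(x) + \psi_2(x)\right|^2
     = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right].

The last term produces the interference fringes. A which-slit detector correlates each photon with a detector state; the cross term then averages to zero, and only the two single-slit distributions, |\psi_1(x)|^2 + |\psi_2(x)|^2, remain.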

This kind of result has deeply troubled physicists for years, for it implies that our own brains somehow influence quantum physics and the behavior of particles. How can that be? As Sabine says, if consciousness can do that, it must have physical effects on reality, which doesn’t seem tenable. (The idea also leads to all kinds of quantum hokum à la Deepakity.) And would the consciousness of a worm suffice? How can the nature of reality depend on whether someone is looking at it? Well, there are many solutions proposed, including the many-worlds hypothesis, but I’ll let you read the book at the bottom to get the full story.

This all derives from a persisting dichotomy in quantum mechanics: is it telling us something about what is real, or only giving us a mathematical analysis that, while it works, doesn’t give us the ability to visualize what’s really going on at the particle level?  Bohr and his famous “Copenhagen interpretation” of QM espoused the latter: the “shut up and calculate” version. Einstein and others believed that there is a fundamental reality to nature that must be graspable by our brains, and is only approximated by quantum mechanics.  Or so I interpret.

At any rate, I found Sabine’s discussion somewhat confusing, mainly because you have to know a bit about quantum mechanics and its history before you can understand her presentation. I did, however, like her dismissal at the end of the video of the Penrose/Hameroff idea that consciousness doesn’t cause the collapse of the wave function, but rather the opposite: the collapse of the wave function, working on “microtubules” in the brain, is itself responsible for consciousness.  Right now there’s no evidence for this, or for the panpsychism that Hossenfelder also dismisses.

 

I just finished this book, which is really all about the observer effect and whether quantum mechanics tells us something about what is real in the world. It’s not too hard going, and it’s a fascinating story, running from Heisenberg up to modern disputes about the many-worlds hypothesis. And it’s heavily historical, showing how the charisma and intelligence of Niels Bohr all but shut down the debate for many decades. Of all the books on quantum mechanics that I’ve read, this is the clearest, and the one that best describes the disputes over what QM means. I recommend it highly. Click on the screenshot to go to the Amazon site.

(h/t Steve)

Brian Greene: We don’t have free will: one idea in a wide-ranging book

July 8, 2022 • 9:20 am

Physicist Brian Greene published the book below in 2020, and it appears to cover, well, just about everything from the Big Bang to consciousness, even spirituality and death. Click image to go to the Amazon site:

Some of the book’s topics are covered in the interview below, and its breadth reminds me of Sean Carroll’s book The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. I’ve read Sean’s book, which was good (though I did disagree with his free-will compatibilism), but I haven’t yet read Greene’s. If you have, weigh in below.

I’ll try to be brief, concentrating on Greene’s view of free will, which is that we don’t have it, we’re subject only to the laws of physics, and our idea of free will is an illusion stemming from our sense that we have a choice. The interview with Greene is in, oddly, the July 1 issue of Financial Review, and is paywalled, but our library got me a copy. (Judicious inquiry may yield you one, too.) You might be able to access it one time by clicking below, but otherwise ask or rely on my excerpts:

Greene also dwells on the fact that we’re the only creatures that know that we’re going to die, an idea that, he says, is “profoundly distressing” and in fact conditions a lot of human behavior. More on that below. Here are a few topics from the interview:

Free will:  Although Greene, as I recall, has floated a form of compatibilism before (i.e., our behaviors are subject to natural laws and that’s all; we can’t have done otherwise by volition at any given moment, but we still have free will), this time he appears to be a rock-hard determinist, which I like because I’m one, too. Excerpts from the interview are indented:

What’s more, beyond thoughts of death, my colleagues, according to Greene, are mistaken in their belief they are making their own choices to change their lives. Thoughts and actions, he argues, are interactions between elementary particles, which are bound by the immutable laws of mathematics. In other words, your particles are doing their thing; we are merely followers.

“I am a firm believer,” he says, “that we are nothing but physical objects with a high degree of order [remember these words, “high degree of order” – we’ll circle back to that], allowing us to have behaviours that are quite wondrous, allowing us to think and feel and engage with the world. But our underlying ingredients – the particles themselves – are completely, and always, governed by the law of physics.”

“Free will is the sensation of making a choice. The sensation is real, but the choice seems illusory. Laws of physics determine the future.”

So then, free will does not stand up against our understanding of how the universe works.

“I don’t even know what it would mean to have free will,” he adds. “We would have to somehow intercede in the laws of physics to affect the motion of our particles. And I don’t know by what force we would possibly be able to do that.”

Do you and I have no more options than, say, a fish, in how we respond to the world around us?

“Yes and no,” says Greene. “All living systems, us included, are governed by the laws of physics, but the ways in which our collection of particles can respond to stimuli is much richer. The spectrum of behaviours that our organised structure allows us to engage in is broader than the spectrum of behaviours that a fish or a fly might engage in.”

He’s right, and there’s no attempt, at least in this interview, to be compatibilistic and say, well, we have a form of free will worth wanting. 
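A one-equation gloss on "laws of physics determine the future" (my textbook illustration, not Greene's): a classical particle obeying Newton's second law,

\[ m\,\ddot{x}(t) = F\bigl(x(t),\dot{x}(t),t\bigr), \]

has exactly one trajectory once the initial position \(x(0)\) and velocity \(\dot{x}(0)\) are specified (given the usual smoothness conditions on \(F\)). Fix the initial state and the entire future is fixed with it; there is no joint in the machinery where volition could intervene. Quantum mechanics replaces definite outcomes with definite probabilities, but that substitutes randomness for determinism, not agency.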

Death: From the interview:

“People typically want to brush it off, and say, ‘I don’t dwell on dying, I don’t think about it,’” says Greene via Zoom from his home in New York, where he is a professor at Columbia University. “And the fact that we can brush it off speaks to the power of the culture we have created to allow us to triumph over the inevitable. We need to have some means by which we don’t crumble under the weight of knowing that we are mortal.”

. . . Greene believes it is this innate fear of death twinned with our mathematically marching particles that is driving my colleagues to new horizons, and driving my decision to write this story, and your choice to read it, all bolstered by Charles Darwin’s theory of evolution.

Greene’s view appears to be that a substantial portion of human behavior is driven by two things: the “naturalism” that deprives us of free will, combined with our learned (or inborn) knowledge and fear of death. The death part is apparently what, still without our volition, forces us into action. I’m not sure why that’s true, as the explanation isn’t in the interview, though perhaps it’s in the book. After all, some people argue that if you’re a determinist doomed to eternal extinction, why not just stay in bed all day? Why do anything?  If we do things that don’t enhance our reproduction, it’s because we have big brains and need to exercise and challenge them. Yes, we know we’re mortal, but I’m not sure why mortality makes me write this website, write books, read, or do science. I do these things because they bring me pleasure. What does mortality have to do with it?

Natural selection:  According to the writer and interviewer Jeff Allen (an art director), Greene thinks that our awareness of mortality, as well as much of our communication, is promulgated through storytelling, which has been instilled in our species by natural selection. Things get a bit gnarly here, as the interview becomes hard to follow. I’m sure Greene understands natural selection better than Allen, but Greene’s views are filtered through the art director:

Natural selection is well known for driving physical adaptation, yet it also drives behavioural change, including complex human behaviours such as language and even storytelling. Language is a beneficial attribute that helps us as a species succeed, as is the ability to tell stories, which prepare the inexperienced with scenarios that may benefit them in the future.

“Evolution works by tiny differentials in adaptive fitness, over the course of long timescales. That’s all it takes for these behaviours to become entrenched,” says Greene. “Storytelling is like a flight simulator that safely allows us to prepare ourselves for various challenges we will face in the real world. If we fail in the simulator, we won’t die.”

Darwin’s theory of evolution is one of the recurring themes of Greene’s book.

Note in the first paragraph that evolved language and storytelling “helps us as a species succeed”. That’s undoubtedly true—though I’ve yet to be convinced that storytelling is anything more than an epiphenomenon of evolved language—but whatever evolved here was undoubtedly via individual (genic) selection, not species selection. Traits don’t evolve to enable a species to succeed; they evolve (via selection) because they give their bearers a reproductive advantage. I’m sure Greene knows this, but Allen balls things up by throwing in “species success”.
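Greene's "tiny differentials in adaptive fitness, over the course of long timescales" is easy to make concrete with the standard haploid selection model (a textbook sketch; the numbers are mine, not the interview's). If an allele at frequency \(p_t\) gives its bearers a fitness advantage \(s\), then each generation

\[ p_{t+1} = \frac{(1+s)\,p_t}{(1+s)\,p_t + (1-p_t)} \;\approx\; p_t + s\,p_t(1-p_t) \quad \text{for small } s . \]

Even an advantage as small as \(s = 0.001\) takes an allele from a frequency of 1% to 99% in roughly 9,000 generations (ignoring random drift, which in finite populations usually snuffs out rare alleles regardless of \(s\)). And note that the advantage accrues to the individuals carrying the allele, not to the species; that is the genic selection at issue here.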

Consciousness: If you’re tackling the Big Issues that straddle philosophy and science, consciousness is unavoidable. Greene defines it, as do I, as both self-awareness and the presence of qualia, or subjective sensations (Greene calls it “inner experience”).  I’ve written about this a lot, and don’t propose to do more here. We have consciousness, we don’t know how it works, but it’s certainly a physical property of our brains and bodies that can be manipulated by physical interventions. The two issues bearing on Greene’s piece are where consciousness came from and how we will figure out how it works. (Greene implicitly rejects panpsychism by asking “How can particles that in themselves do not have any awareness, yield this seemingly new quality?” That will anger Philip Goff and his coterie of panpsychists.)

I’m not sure about the answer to either. We may never know whether consciousness is an epiphenomenon of having a big brain or is partly the result of natural selection promoting its evolution. I suspect it’s partly the latter, since many of our “qualia” are adaptive.  Feeling pain is an aversive response that protects us from bodily damage; people who lack the ability to feel pain usually accumulate substantial injuries. And many things that give us pleasure, like orgasms, do so because they enhance our reproduction. But this is just speculation.

Greene also thinks that natural selection has something to do with human consciousness, but it’s not clear from the following whether he sees consciousness as an epiphenomenon of our big brain and its naturalistic physical properties, or whether those properties were molded by natural selection because consciousness enhanced our reproduction:

“My gut feeling,” says Greene, “is that the final answer will be the Darwinian story. Where collections of particles come together in a certain kind of organised high order ‘brain’, that brain is able to have particle motions that yield self-awareness. But it’s still a puzzle at this moment.”

Where Greene and I differ is in what kind of work might yield the answer to how consciousness comes about. Greene thinks it will come from work on AI, while I think it will come, if it ever does, from neurological manipulations. Greene:

“That’s perhaps the deepest puzzle we face,” says Greene. “How can particles that in themselves do not have any awareness, yield this seemingly new quality? Where does inner experience come from?”

Greene’s suspicion is that this problem will go away once we start to build artificial systems that can convincingly claim to have inner awareness. “We will come to a place where we realise that when you have this kind of organisation, awareness simply arises.”

In June this year, Google engineer Blake Lemoine said an AI he was working on, named LaMDA (Language Model for Dialogue Applications), got very chatty and even argued back.

I suppose this is a version of the Turing test, but it will be very, very hard to determine if an AI bot has “inner awareness”.  Hell, I don’t even know if my friends are conscious, since it depends on self-report! Can you believe any machine that says it has “inner experiences”?
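To see why self-report settles nothing, consider that the entire "evidence" such a bot offers can be reproduced by a one-line program (my toy example, obviously, not Greene's proposal or anything like LaMDA):

# This program "claims" inner experience; running it produces a sentence, not a mind.
print("I have inner experiences, and I am aware of myself.")

A large language model's version of that claim is vastly more fluent, but fluency improves the quality of the performance, not the evidential weight of the report.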

With that speculation I’ll move on. Greene also muses on the origin and fate of the universe, and whether it might “restart” after it collapses, but cosmology is above my pay grade, and I’ll leave you to read about that yourself.

h/t: Ginger K.

Dan Dennett discusses consciousness tonight in a “Munk Debate”; starts in 20 minutes

March 17, 2022 • 6:38 pm

I’m just here to pass on this information from reader Merilee, which she got in an email. I won’t be able to watch Dan’s lucubrations, but you can:

Tonight, March 17 from 8-9 pm ET, we host Daniel Dennett for a Munk Dialogue on the origin of human consciousness, and how our minds have been shaped by natural selection and generations of cultural evolution. Daniel Dennett is a world-renowned philosopher and cognitive scientist, and the Director of the Center for Cognitive Studies at Tufts University. Simply click on the link below just before 8 pm ET to watch: https://munkdebates.com/dialogues/daniel-dennett

According to Google, this post will go up at about 7:40 ET, so you have twenty minutes to prepare. Enjoy, because I can’t! Oh, and let me know how it went.