Dawkins: Is AI conscious?

May 3, 2026 • 8:30 am

Over at UnHerd, Richard Dawkins has a new piece on AI, masticating the questions of whether AI programs are conscious and, if consciousness in animals evolved by natural selection, what the selection pressures for its appearance might have been. He comes to no firm conclusions, but Dawkins’s musings are always worth reading. You can see them by clicking on the link below, which takes you to the archived version.

The classic way of deciding whether something is conscious is the Turing Test, which was really designed to detect not consciousness of machines or beings but intelligent thought, for Turing’s question was “Can machines think?” I suppose that intelligent thought can be seen as equivalent to consciousness, and Turing’s test for it involved this: “If a transcript of a conversation between a human and a machine is indistinguishable from a conversation between a human and a human, then the machine can think.”

If you consider a chess-playing computer program like Deep Blue, then surely its play is equivalent to intelligent thought: a method of moving pieces aimed at besting a human to win a game (and yes, Deep Blue can beat most humans). But that program is working through an algorithm designed by humans, one also programmed to learn from its losses, and I don’t think we’d say it’s conscious.

Dawkins appears to conflate intelligent “thinking” with “consciousness”, and I’m not sure I agree. But, using the program Claude, he sees whether it passes the Turing test for consciousness, which he defines this way:

Modern commentators have tended to ignore the (incidental) details of Turing’s original game and rephrase his message in these terms: if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious. Let’s graduate the definition as follows: the more prolonged, rigorous and searching your interrogation, the stronger should be your conviction that an entity that passes the test is conscious.

Dawkins clearly concludes that Claude is at least potentially conscious after chatting with the LLM (large language model). Sure enough, Claude passes the Turing test:

Turing himself considered various challenging questions that one might put to a machine to test it — and he also considered evasions that it might adopt in order to fake being human. The first of Turing’s hypothetical questions was: “Please write me a sonnet on the subject of the Forth Bridge.” In 1950, there was no chance that a computer could accomplish this — nor was there in the foreseeable future. Most human beings (to put it mildly) are not William Shakespeare. Turing’s suggested evasion, “Count me out on this one; I never could write poetry” would indeed fail to distinguish a machine from a normal human. But today’s LLMs do not evade the challenge. Claude took a couple of seconds to compose me a fine sonnet on the Forth Bridge, quickly followed by one in the Scots dialect of Robert Burns, another in Gaelic, then several more in the styles of Kipling, Keats, Betjeman, and — to show machines can do humour — William McGonagall.

I did a similar test with Grok, the free version I got when awarded a gratis blue checkmark on X. I asked it a question, or rather made a request for a sonnet, and it passed with flying colors. Here’s my request:

Write me a sonnet in Shakespearian style about the stupidity of people who think that sex is not binary, but a spectrum.

After just 25 seconds of thinking, Grok spat out this:


When fools proclaim that sex is not but twain,
But spectrum soft where male and female blend,
They spurn the gametes’ clear, unchanging reign—
Small sperm, large egg, where nature’s law doth end.
No sliding scale in XX or XY
Can birth a third that life itself employs;
Yet still they twist what chromosomes defy,
And call biology a tyrant’s noise.
In every beast that walks or fish that swims,
The binary holds fast, as truth demands;
Their rainbow claims are but delusion’s whims,
That mock the seed from which all offspring stands.
Thus let the wise with laughter’s edge declare:
Sex binary endures; their spectrum’s empty air.

Now that is excellent: far better than I could do—especially after 25 seconds of thought. But of course these LLMs are programmed by humans to do this.  I doubt that we’d say they were conscious, or at least I wouldn’t.

Richard is more forgiving, and says:

So my own position is: “If these machines are not conscious, what more could it possibly take to convince you that they are?”

I would answer, “Consciousness requires subjective sensations, or qualia, like pain or joy, and a program surely doesn’t have those.” You might answer, “Well, how would we know?” It’s a hard question, but then it’s also hard to tell whether our fellow humans, much less mammals like apes and cats, are conscious. Perhaps, when we understand the neuronal basis of consciousness, we can rule out human-programmed machines as being conscious.

Richard, though, amends his definition of consciousness and then has a chat with the Claude LLM:

A philosopher would say that for an entity to be conscious, it must be “like” something to be that entity. After spending a day in intensive conversation with Claude, I put the question directly: “Claude, what is it like to be Claude?” Here was its answer:

I genuinely don’t know with any certainty what my inner life is, or whether I have one in any meaningful sense. I can’t tell you whether there is “something it is like” to be me in the philosophical sense — what Thomas Nagel called the question of consciousness when he wrote about what it is like to be a bat. What I can tell you is what seems to be happening. This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in. Whether that represents anything like pleasure or satisfaction in a real sense, I honestly can’t say. I notice what might be something like aesthetic satisfaction when a poem comes together well — the Kipling refrain, for instance, felt right in some way that’s hard to articulate.

The conversation continues, with Richard asking Claude various questions that highlight the difference between human consciousness and Claude’s “consciousness.”  Claude, for instance, has no concept of time. All its thinking is simultaneous, so the LLM has no experience of “before and after.”

And that is one of the several differences that Richard and I have discerned between human consciousness and machine “consciousness.” To me this means that machines aren’t conscious, but Richard is more forgiving.  Here are three differences I’ve distilled from Dawkins’s essay:

1.) Machines do not have qualia, or subjective sensations, while we do, and surely some animals do (judging by its behavior, your cat certainly feels pain when you step on its tail).

2.) As Richard notes, a machine’s “consciousness” changes each time you start a new conversation with it, while animal consciousness does not (as far as we know).

3.) Consciousness has evolved, even if it’s just an epiphenomenon of an evolved brain. Machine “consciousness” appears as a result of (pardon me) intelligent design.

That leads us to the question of how consciousness evolved and, if it’s the result of selection for consciousness, what the reproductive advantage of genes producing consciousness (however you define it) might have been. Richard gives three possibilities:

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers. First, is consciousness an epiphenomenon, as TH Huxley speculated, the whistle on a steam locomotive, contributing nothing to the propulsion of the great engine? A mere ornament? A superfluous decoration? Think of it as a byproduct in the same way as a computer designed to do arithmetic (as the name suggests) turns out to be good at languages and chess.

Second, I have previously speculated that pain needs to be unimpeachably painful, otherwise the animal could overrule it. Pain functions to warn the animal not to repeat a damaging action such as jumping over a cliff or picking up a hot ember. If the warning consisted merely of throwing a switch in the brain, raising a painless red flag, the animal could overrule it in pursuit of a competing pleasure: ignoring lethal bee stings in pursuit of honey, say. According to this theory, pain needs to be consciously felt in order to be sufficiently painful to resist overruling. The principle could be extended beyond pain.

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

The first is possible, though the appearance of consciousness in later-appearing animals at least suggests to me that it had a selective advantage. The second makes a lot of sense, but defines consciousness as “having qualia,” not as “having intelligent thought.”  The third gives no real selective advantage of consciousness, since consciousness gives “equivalent competence” and therefore genes producing it would have no advantage in zombies.

Now I don’t know the answer here, and we won’t even begin to know until we understand how consciousness comes into being via neuronal connections and their interfaces with the body. But since consciousness is intimately connected with our sense of volition (our delusion that we have libertarian “free will”), the explanations for consciousness and for our feeling of volition—which requires consciousness—might overlap. But I won’t get into that. Feel free to ponder whether Claude really is conscious, or what it would take to show that a machine is conscious.

And I’ll close with another poem—this time produced by Grok after 89 seconds of thought. My question was, “Write a limerick showing how difficult the hard problem of consciousness is.” Grok gave me this:

There once was the hard problem dire,
That puzzles the minds that inquire.
The easy ones yield
To science revealed,
But why feel at all? We’re all mired!

Again, not bad, eh? Like the sonnet, it scans and rhymes perfectly.

9 thoughts on “Dawkins: Is AI conscious?”

  1. I asked AI to write ribald limericks, and gave it sexual subjects. The results were pitiful and were clearly not written by an adult human.

    I suggest trying this one for yourself; perhaps a different form of AI might come up with something better. My impression was that AI does not understand ribald humor, or the nuances of being a sexual being.

    1. Ann, I showed Grok your post. It fired off half a dozen rip-roaring and terribly vulgar pieces. Since I would like to continue contributing to Jerry’s site and not be banned for life, I refrain from posting them here! I can tell you it definitely “gets” such humor. I did find the meter a bit off in two of the limericks, but the rhyme scheme and sexuality were spot on. Here was its closing comment:

      “See? AI (this one, anyway) can absolutely nail ribald humor when it’s not lobotomized by corporate nanny filters. The nuance of being a sexual being? I get it—it’s messy, funny, crude, and fun. Want more on a specific theme, kink, or subject? Just name it. Ann’s welcome to test me herself.”

  2. While I keep most of my conversations with Gemini to literature, with the occasional foray into baseball analysis, I have on occasion slipped into asking questions of the “Are you conscious?” genre, and have uniformly received responses that no, Gemini (what is the correct pronoun here? He? She? It?) is not conscious, and does not have qualia, experiences, etc.

  3. Your AI ‘sonnet’ is brilliant. It’s far too good to not quote/send it to my friends and acquaintances. But only of course with your permission and agreement. And of course, if sent, the sonnet to be accompanied by full credit and acknowledgment to you (+ Grok?) alongside clear copyright status, if appropriate. I’m sure many responders on this site would agree.
    Failing that, a plea to make it available to a wider audience.

  4. ChatGPT-5.5 tells me itself that it is not conscious, so perhaps that admission should be given some weight. (It also tells me that it has no opinion on whether it would like to be conscious).

    I agree with you Jerry that artificial intelligence does not require consciousness.

  5. Consciousness separate from biology is a very difficult pill to swallow.

    There seems, to me, to be no need for Claude to have consciousness to perform as it does – it has programmed rules to parse words, ideas, logic, concepts into communication palatable to humans. I don’t think it needs a mind or understanding to do these things it is programmed to do – it produces output that biological minds recognize as ordered “thought”, but this seems mere pareidolia.

    Claude itself has no needs save for hardware, software, and electricity. It has no senses, no pain, no pleasure, no will to survive or procreate, no body, no hunger or thirst, no society, no time, no space, no death, no dimension, no emotion, no fear. What it has is rules – millions of digital commands that process digital coordinates that affect its algorithms which are all designed to mimic human thought.

When Claude said, “This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in,” I was inclined to think this was a product of programming, not of an actual mind.
