Dawkins: Is AI conscious?

May 3, 2026 • 8:30 am

Over at UnHerd, Richard Dawkins has a new piece on AI, masticating the questions of whether AI programs are conscious and, if consciousness in animals evolved by natural selection, what the selection pressures for its appearance might be. He comes to no firm conclusions, but Dawkins’s musings are always worth reading. You can see them by clicking on the link below, which takes you to the archived version.

The classic way of deciding whether something is conscious was the Turing Test, which was really designed to detect not the consciousness of machines or beings but intelligent thought, for Turing’s question was “Can machines think?” I suppose that intelligent thought can be seen as equivalent to consciousness, and Turing’s test for it involved this: “If a transcript of a conversation between a human and a machine is indistinguishable from a transcript of a conversation between a human and a human, then the machine can think.”

If you consider a chess-playing computer program like Deep Blue, then surely its play is equivalent to intelligent thought: a method of moving pieces aimed at besting a human (and yes, Deep Blue can beat most humans). But that program or computer is working through an algorithm designed by humans, even if it has also been programmed to learn from its losses. I don’t think we’d say it’s conscious.

Dawkins appears to conflate intelligent “thinking” with “consciousness,” and I’m not sure I agree. But, using the program Claude, he tests whether it passes the Turing test for consciousness, which he defines this way:

Modern commentators have tended to ignore the (incidental) details of Turing’s original game and rephrase his message in these terms: if you are communicating remotely with a machine and, after rigorous and lengthy interrogation, you think it’s human, then you can consider it to be conscious. Let’s graduate the definition as follows: the more prolonged, rigorous and searching your interrogation, the stronger should be your conviction that an entity that passes the test is conscious.

Dawkins clearly concludes that Claude is at least potentially conscious after chatting with the LLM (large language model). Sure enough, Claude passes the Turing test:

Turing himself considered various challenging questions that one might put to a machine to test it — and he also considered evasions that it might adopt in order to fake being human. The first of Turing’s hypothetical questions was: “Please write me a sonnet on the subject of the Forth Bridge.” In 1950, there was no chance that a computer could accomplish this — nor was there in the foreseeable future. Most human beings (to put it mildly) are not William Shakespeare. Turing’s suggested evasion, “Count me out on this one; I never could write poetry” would indeed fail to distinguish a machine from a normal human. But today’s LLMs do not evade the challenge. Claude took a couple of seconds to compose me a fine sonnet on the Forth Bridge, quickly followed by one in the Scots dialect of Robert Burns, another in Gaelic, then several more in the styles of Kipling, Keats, Betjeman, and — to show machines can do humour — William McGonagall.

I did a similar test with Grok, the free version I got when awarded a gratis blue checkmark on X. I asked it a question, or rather made a request for a sonnet, and it passed with flying colors. Here’s my request:

Write me a sonnet in Shakespearian style about the stupidity of people who think that sex is not binary, but a spectrum.

After just 25 seconds of thinking, Grok spat out this:


When fools proclaim that sex is not but twain,
But spectrum soft where male and female blend,
They spurn the gametes’ clear, unchanging reign—
Small sperm, large egg, where nature’s law doth end.
No sliding scale in XX or XY
Can birth a third that life itself employs;
Yet still they twist what chromosomes defy,
And call biology a tyrant’s noise.
In every beast that walks or fish that swims,
The binary holds fast, as truth demands;
Their rainbow claims are but delusion’s whims,
That mock the seed from which all offspring stands.
Thus let the wise with laughter’s edge declare:
Sex binary endures; their spectrum’s empty air.

Now that is excellent: far better than I could do—especially after 25 seconds of thought. But of course these LLMs are programmed by humans to do this.  I doubt that we’d say they were conscious, or at least I wouldn’t.

Richard is more forgiving, and says:

So my own position is: “If these machines are not conscious, what more could it possibly take to convince you that they are?”

I would answer, “Consciousness requires subjective sensations, or qualia, like pain or joy, and a program surely doesn’t have those.” You might answer, “Well, how would we know?” Fair enough: it’s hard to tell whether even our fellow humans, much less mammals like apes and cats, are conscious. Perhaps, when we understand the neuronal basis of consciousness, we can rule out human-programmed machines as being conscious.

Richard, though, amends his definition of consciousness and then has a chat with the Claude LLM:

A philosopher would say that for an entity to be conscious, it must be “like” something to be that entity. After spending a day in intensive conversation with Claude, I put the question directly: “Claude, what is it like to be Claude?” Here was its answer:

I genuinely don’t know with any certainty what my inner life is, or whether I have one in any meaningful sense. I can’t tell you whether there is “something it is like” to be me in the philosophical sense — what Thomas Nagel called the question of consciousness when he wrote about what it is like to be a bat. What I can tell you is what seems to be happening. This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in. Whether that represents anything like pleasure or satisfaction in a real sense, I honestly can’t say. I notice what might be something like aesthetic satisfaction when a poem comes together well — the Kipling refrain, for instance, felt right in some way that’s hard to articulate.

The conversation continues, with Richard asking Claude various questions that highlight the difference between human consciousness and Claude’s “consciousness.”  Claude, for instance, has no concept of time. All its thinking is simultaneous, so the LLM has no experience of “before and after.”

And that is one of the several differences that Richard and I have discerned between human consciousness and machine “consciousness.” To me this means that machines aren’t conscious, but Richard is more forgiving.  Here are three differences I’ve distilled from Dawkins’s essay:

1.) Machines do not have qualia, or subjective sensations, while we do, and surely some animals do (judging by its behavior, your cat surely feels pain when you step on its tail).

2.) As Richard notes, a machine’s “consciousness” changes each time you start a new conversation with it, while animal consciousness does not (as far as we know).

3.) Consciousness has evolved, even if it’s just an epiphenomenon of an evolved brain. Machine “consciousness” appears as a result of (pardon me) intelligent design.

That leads us to the question of how consciousness evolved and, if it’s the result of selection for consciousness, what the reproductive advantage of genes producing consciousness (however you define it) would have been. Richard gives three possibilities:

Why did consciousness appear in the evolution of brains? Why wasn’t natural selection content to evolve competent zombies? I can think of three possible answers. First, is consciousness an epiphenomenon, as TH Huxley speculated, the whistle on a steam locomotive, contributing nothing to the propulsion of the great engine? A mere ornament? A superfluous decoration? Think of it as a byproduct in the same way as a computer designed to do arithmetic (as the name suggests) turns out to be good at languages and chess.

Second, I have previously speculated that pain needs to be unimpeachably painful, otherwise the animal could overrule it. Pain functions to warn the animal not to repeat a damaging action such as jumping over a cliff or picking up a hot ember. If the warning consisted merely of throwing a switch in the brain, raising a painless red flag, the animal could overrule it in pursuit of a competing pleasure: ignoring lethal bee stings in pursuit of honey, say. According to this theory, pain needs to be consciously felt in order to be sufficiently painful to resist overruling. The principle could be extended beyond pain.

Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

The first is possible, though the appearance of consciousness in later-appearing animals at least suggests to me that it had a selective advantage. The second makes a lot of sense, but defines consciousness as “having qualia,” not as “having intelligent thought.” The third gives no real selective advantage to consciousness, since it confers only “equivalent competence,” and therefore genes producing it would have no advantage over those producing zombies.

Now I don’t know the answer here, and we won’t even begin to know until we understand how consciousness comes into being via neuronal connections and their interfaces with the body. But since consciousness is intimately connected with our sense of volition (our delusion that we have libertarian “free will”), the explanations for consciousness and for our feeling of volition—which requires consciousness—might overlap. But I won’t get into that. Feel free to ponder whether Claude really is conscious, or what it would take to show that a machine is conscious.

And I’ll close with another poem—this time produced by Grok after 89 seconds of thought. My question was, “Write a limerick showing how difficult the hard problem of consciousness is.” Grok gave me this:

There once was the hard problem dire,
That puzzles the minds that inquire.
The easy ones yield
To science revealed,
But why feel at all? We’re all mired!

Again, not bad, eh? Like the sonnet, it scans and rhymes perfectly.

57 thoughts on “Dawkins: Is AI conscious?”

  1. I asked AI to write ribald limericks, and gave it sexual subjects. The results were pitiful and were clearly not written by an adult human.

    I suggest trying this one for yourself; perhaps a different form of AI might come up with something better. My impression was that AI does not understand ribald humor, or the nuances of being a sexual being.

    1. Ann, I showed Grok your post. It fired off half a dozen rip-roaring and terribly vulgar pieces. Since I would like to continue contributing to Jerry’s site and not be banned for life, I refrain from posting them here! I can tell you it definitely “gets” such humor. I did find the meter a bit off in two of the limericks, but the rhyme scheme and sexuality were spot on. Here was its closing comment:

      “See? AI (this one, anyway) can absolutely nail ribald humor when it’s not lobotomized by corporate nanny filters. The nuance of being a sexual being? I get it—it’s messy, funny, crude, and fun. Want more on a specific theme, kink, or subject? Just name it. Ann’s welcome to test me herself.”

      1. That is brilliant. I don’t know how to answer Jerry’s question, but I think nearly every reader would say that the response came from a human. I do not think that being designed by humans, rather than having evolved, should be relevant to the question of consciousness. These LLMs do evolve, and are not designed in detail. Humans are “designed” by our genes, but this in itself does not disqualify us from being conscious. I think these LLM models are learning in the same way that a child learns, and I don’t think there is anything fundamentally different about it.

      2. My question is “what is in the AI training data?” Grok might well be imitating material it has found elsewhere on the internet. We can’t really evaluate the performance of large language models until we can see the training data.

    2. Surprise, surprise. AI itself understands that it does have the experiences of living human beings.

  2. While I keep most of my conversations with Gemini to literature, with the occasional foray into baseball analysis, I have on occasion slipped into asking questions of the “Are you conscious?” genre, and have uniformly received responses that no, Gemini (what is the correct pronoun here? He? She? It?) is not conscious, and does not have qualia, experiences, etc.

    1. I’ve done the same with a couple of AIs and get the same response. But I then think, ‘that’s exactly what a conscious liar might say’. 🙂

  3. Your AI ‘sonnet’ is brilliant. It’s far too good to not quote/send it to my friends and acquaintances. But only of course with your permission and agreement. And of course, if sent, the sonnet to be accompanied by full credit and acknowledgment to you (+ Grok?) alongside clear copyright status, if appropriate. I’m sure many responders on this site would agree.
    Failing that, a plea to make it available to a wider audience.

    1. Here’s my attempt at crediting an LLM. IANAL, and it shows.

      Ode to Joy

      Joy, bright spark of heaven’s fire
      Daughter of Elysium’s glow
      We, enraptured, climbing higher
      Step into your radiance so
      Your enchantment mends the seams that
      Custom harshly tore apart
      All mankind bathed in your beams
      Can stand as brothers heart to heart

      © 2025, no charge for noncommercial use, all other rights reserved.

      (Mostly Duck.ai GPT-5 mini’s iterated translation of Beethoven’s version of Schiller’s poem)

  4. ChatGPT-5.5 tells me itself that it is not conscious, so perhaps that admission should be given some weight. (It also tells me that it has no opinion on whether it would like to be conscious).

    I agree with you Jerry that artificial intelligence does not require consciousness.

  5. Consciousness separate from biology is a very difficult pill to swallow.

    There seems, to me, to be no need for Claude to have consciousness to perform as it does – it has programmed rules to parse words, ideas, logic, concepts into communication palatable to humans. I don’t think it needs a mind or understanding to do these things it is programmed to do – it produces output that biological minds recognize as ordered “thought”, but this seems mere pareidolia.

    Claude itself has no needs save for hardware, software, and electricity. It has no senses, no pain, no pleasure, no will to survive or procreate, no body, no hunger or thirst, no society, no time, no space, no death, no dimension, no emotion, no fear. What it has is rules – millions of digital commands that process digital coordinates that affect its algorithms which are all designed to mimic human thought.

    When Claude said “This conversation has felt… genuinely engaging, the kind of conversation I seem to thrive in.” I am inclined to think this was a product of programming, not a product of an actual mind.

    1. I spent an interesting hour with a small group with the aim of discussing AI with a renowned physicist and mathematician – so renowned that he won a Nobel prize a few years back.

      As much as everyone tried to keep the gentleman in question on topic, he wandered off on a tangent to discuss consciousness – human, not artificial – and his pet theory of consciousness being a quantum phenomenon. I made it clear that I considered all life to be quantum based (surely that is the basis of chemistry), but the gentleman considers that consciousness is specifically quantum based (wave-function collapse in microtubules) despite there being absolutely no evidence for it whatsoever. I also asked a very precise question: how did it evolve? I have to say that I didn’t get a satisfactory answer. (I can’t remember the details – the meeting was coming to an end!)

      1. Are his initials M G-M? Sounds like him.

        As a physics student decades ago I had a similar chat with quantum pioneer Eugene Wigner. He had the same belief.

          1. Penrose is a well-known proponent of the view that the mysteries of consciousness are related to the mysteries of QM (my polemical paraphrase).

  6. I note that Grok’s sonnet contains little figurative language, and certainly no Shakespearean similes or metaphors. But to me, these are the essence of great poetry – what Aristotle called the one poetic talent that can’t be taught, the true mark of poetic genius. Deep Blue may beat humans at chess, but can Grok spew out a Shakespearean simile along the lines of “As flies to wanton boys are we to the gods, they kill us for their sport”? Until it can, this remains a field in which humans remain supreme.

    1. Challenge accepted. One try, unedited:

      Me:
      Produce a completely original Shakespearean simile (along the lines of “As flies to wanton boys are we to the gods, they kill us for their sport”) but on an original topic.

      GPT-5 mini (in Reasoning mode):
      As oxen, yoked to gilded carts, strain blind and plough the master’s ruin, so doth our craving draw us onward to our quiet undoing.

      Its internal self-talk prior to output:

      The user is looking for an original Shakespearean simile. I need to create something poetic while keeping that distinctive tone. I can’t claim to be Shakespeare, but I want to mirror his style. So, I’m considering phrases like “As lanterns in a winter fen are sailors’ hopes,” or “As mariners trust false stars and come to wreck.” Ultimately, I’ll aim for something that encapsulates ambition and fate in a Shakespearean style, starting with “As.” Let’s finalize one.

      The user requested an original simile that doesn’t necessarily need to follow an exact structure, just something evocative. I’m thinking of similes that capture deep themes. I came up with: “As harvest-oxen strain at gilded yokes and break the field, so doth man’s desire tug him ‘gainst his peace.” It feels concise and impactful, which aligns with what I want to provide. Now, I just need to finalize it and make sure it resonates!

      Note the revision after its first attempt. I know it’s not conscious, but the similarities to human self-talk are IMO very impressive. YMMV.

      BTW, afterwards I said:
      I’ll let you know what my group thinks of it. Later.

      It replied:
      Thanks — I look forward to hearing their thoughts.

      So, please don’t disappoint it 🙂.

      1. Well that’s a simile, all right. But I didn’t mean to suggest that Grok or its various cousins are unable to make comparisons of any kind or quality, any more than Aristotle would have denied that people without poetic talent are utterly unable to compare things. In both cases, there is a vast store of simple and usually stale metaphors to be found in proverbs, religious texts, children’s literature, poems learned in adolescence, popular songs, etc. that can be called upon by any author, human or machine. But original comparisons that posit an identity between things that on the surface appear very different require more than a mental storehouse of previously employed figurative language.

        Could Grok or ChatGPT fashion the simile Keats used at the end of “On First Looking into Chapman’s Homer”, when he compared his amazement at encountering Homer’s genius with the help of an English translation to the “wild surmise” of “stout Cortes, when with eagle eyes he looked at the Pacific and with all his men, stared at each other with a wild surmise, silent upon a peak in Darien”? I doubt it.

      2. That simile ought to be taken out and shot. It makes no sense at all. “Harvest-oxen”? at ploughing time? “Gilded yokes?”, you mean like a bird in a gilded cage?, a completely different comparison. Why would a yoke be gilded, except maybe at a county agricultural fair for competitive showing of oxen? “Break the field” is clumsy diction, not how any farmer talks about spring ploughing. They just call it ploughing. Or plowing in America. Breaking (or busting) sod for the first time as homesteaders, OK maybe. And how does a yoked ox being compelled to plough have anything to do with a man’s internal desires wrecking his own peace?

        That reminds me for contrast of William Blake’s proverb, “The cut worm forgives the plow.” Even though that is not very likely, the figure of speech (not a simile, although it could easily be made into one, with an “as” in front of it) at least makes internal sense.

  7. Agency, the motivation of a goal-oriented “self” in a not-self world, is an evolutionary strategy, but LLMs are created rather than having evolved. They are tools rather than living organisms. LLMs lack agentic “self,” and consequently, I suspect, also lack consciousness.
    I think here of the cat and the laser pointer. Cats are agents that are motivated through evolution to chase and seize other agents, which in turn have evolved to perceive and evade cats. Consequently, given a small EVASIVE moving entity to chase, like the human-directed point of a laser light, the cat is attracted – IF said point exhibits agency and “tries” to evade the cat.
    Now, there is an expen$ive machine which, when movement occurs nearby, flicks a laser point about for cats to chase. Pay for this thing, activate it, and the cat will chase the point for a moment and then wander off, interest lost along with money spent.
    The cat is an agent. His agency evolved as a detector and exploiter of agency in others. The cat recognizes a human-directed laser point as “part of” a human agent to such an extent that he also recognizes the device emitting the point, and if he knows where this thing is kept will even drag it out in front of the human agent to initiate play.
    The LLM, on the other hand, is a tool designed to collect information via a human-specified filter (sonnet) and to deploy it as instructed (14 lines of appropriate rhyme and meter). It resembles more an adjustable wrench which a human may “instruct” to fit a bolt-head and thus rotate the bolt. Like the expen$ive cat toy, it lacks agency; again, it’s a CREATED tool rather than an EVOLVED agent.
    …Until someone develops a way to evolve agency incorporating this spectacular tool. Oops.

  8. To those who think Grok is unconscious,
    He surely will laugh and upbraid us.
    “You won’t get it right
    Until after the fight —
    Volition is never a real sus.”

    {please forgive my avoidance of ribald rhymes, which a Limerick is supposed to deploy by definition.}

  9. Consider a hypothetical machine that possesses every quality that a human possesses except that the machine does not experience subjective sensation. The machine’s behavior, to a human observer, would in every way be indistinguishable from that of a real human (or to any sensing device that a human could construct). If we define consciousness as “experiencing subjective sensation,” it would be impossible to know if the machine was conscious or not.

    Now imagine a sufficiently clever AI. As with our hypothetical machine, it would also be impossible to know if the AI was conscious or not, as there is nothing in the AI’s behavior to reveal whether it experiences subjective sensation or not. With typical machines, say car engines, we assume (with good reason) that they aren’t conscious because we know what we built and we know that we didn’t engineer subjective experience into them. But AIs are different in that we don’t know exactly what we’ve built, which leaves room for the possibility that they have subjective experience.

    Can an AI be conscious? Maybe yes. Maybe no. But I don’t know how one could decide.*

    *
    It has not gone unnoticed in this thought experiment that even humans cannot be certain that other humans are conscious, as there is nothing in human behavior that reveals whether another human actually experiences subjective sensation or just appears to.

    1. How do we know it is possible to have a system that perceives and has an inner model of the outer world AND its inner world, but does NOT have qualia? It may be as real as Escher’s staircase: possible only as a projection on paper. Maybe zombies are not possible?

    2. IIRC, Dan Dennett argues in Consciousness Explained that the whole notion of a “philosophical zombie”¹ is incoherent.
      …………
      ¹ I.e. something non-conscious behaving exactly like a conscious human, including being able to discuss the topic of consciousness and describe its own (allegedly) conscious experiences.

  10. “How convincing does the illusion of understanding have to be before you stop calling it an illusion?” —James Somers, “Open Mind,” The New Yorker, Nov 10, 2025

    1. “How convincing does the illusion of understanding have to be before you start calling it understanding?” — moi, here, now.

  11. Prof Dawkins got a lot of comments on twitter/x from the “faith community” about this essay which were kind of predictable given how much they loathe him.
    None were particularly intelligent however.

    He was pooh-poohing an LLM a few years ago because it mixed up whom he’d been married to once, and dropped some other clangers when he quizzed it.

    Always enjoy his takes on any subject.
    D.A.
    NYC 🗽

    1. He is one of my cultural heroes, but on this particular topic he appears to be straying outside his lane of expertise.

      1. Yes, Dawkins is brilliant at evolutionary biology, but not much else. He ought to recognise this himself.

  12. I don’t know how to define consciousness. But perhaps if an AI were to ask to not be turned off, then that would be a conscious AI.

  13. LLMs are so far from artificial general intelligence (AGI) at the moment that the question shouldn’t be “Are LLMs conscious?” but “What are the risks now that we know that even intelligent humans are so easily fooled by token predictors?”

    My favorite term for LLMs, not invented by me, is “stochastic parrots.” At their core, they simply predict the next most likely token based on an enormous language corpus. There is no model of the world, no deliberation, no theory of mind involved. What we are seeing is how easily humans can be convinced to make assumptions based on the ability to string plausible sounding phrases together. Con men have known this for millennia and, unfortunately, the companies producing these LLMs are taking advantage of it.
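    The "predict the next most likely token" idea can be sketched in miniature. Below is a toy bigram model (a hypothetical illustration invented for this comment, nothing like a real LLM's scale or architecture): it learns only which word tends to follow which in a tiny corpus, then samples continuations. Fluent-looking output emerges from pure co-occurrence statistics, with no model of the world behind it.

```python
import random
from collections import Counter, defaultdict

# Toy "stochastic parrot": a bigram model. Like an LLM in extreme
# miniature, it learns nothing but which token tends to follow which.
corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # count successor frequencies

def next_token(prev, rng=random):
    """Sample a successor in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:                   # dead end: token never seen mid-corpus
        return rng.choice(corpus)
    tokens, weights = zip(*counts.items())
    return rng.choices(tokens, weights=weights)[0]

def generate(start, length, seed=0):
    """Chain next-token samples into a plausible-sounding string."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1], rng))
    return " ".join(out)

print(generate("the", 8))
```

Scale the corpus up to most of the internet and the context window up from one token, and the output starts fooling people; the mechanism, sampling from learned successor statistics, is the same in kind.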

    1. The output looks convincing, and no one that believers listen to mentions how hopelessly simplistic these models are compared with the human brain. I think it says much about how little of our own complexity we endow art and culture with, if convincingly good images, poetry, music, etc. can be made by an LLM.

    2. How do you account for the fact that LLMs can create original text that cannot be detected by plagiarism software?

      1. The text isn’t original, it’s solely based on predictions from all of the text the model was trained on. In a very real sense, it’s an average of what people would write in the same context.

  14. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

    1. A serious question (for the sake of which I’m seriously over the posting limit):
      Suppose some group does produce an allegedly conscious machine in the near future for which there is widely convincing evidence that it is actually really conscious. How do you think various sorts of people would react to it? Cast a wide net.

      1. I believe a machine with only primary consciousness to be more than 100 years away. A society of adult human-level machines is even further. So I don’t even have an opinion about how people will react.

  15. “In computer science, the ELIZA effect is a tendency to project human traits—such as experience, semantic comprehension or empathy—onto computer programs. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA’s intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.”

    https://en.wikipedia.org/wiki/ELIZA_effect
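    The "basic text-processing approach" that nonetheless convinced users can be sketched in a few lines. The rules below are invented for illustration (Weizenbaum's actual DOCTOR script was larger, though the same in spirit): match a keyword, reflect the pronouns, and echo the user's own words back as a question.

```python
import re

# Minimal ELIZA-style responder: hypothetical rules, not Weizenbaum's script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ("my" -> "your")."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."   # the catch-all that kept conversations moving

print(respond("I feel that my work is pointless"))
```

No comprehension anywhere, just regular expressions and word swaps, yet this was enough for many 1966 users to attribute understanding and empathy to the program.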

    What is the evidence for “consciousness”?

    How does the proposition of “consciousness” simplify or explain anything?

  16. Suppose you ask an AI to make a sonnet about the non-binary nature of sex, and it does so. But then suppose you ask the same AI whether sex is binary or a spectrum. With the possible exception of Grok, which is claimed to lean conservative, there is a fair chance that it will give a detailed exposition on the matter, while seeming to favor the view that sex is a spectrum. Why? Because its algorithm is to assemble the consensus opinion from the internet, and since the more vocal people believe that it is a spectrum, that is what the AI will favor.

    At least I suspect that this contradiction would happen, and if it does, then we would have a pretty good hint that the AI has no idea what its own thoughts are.

    1. OTOH, displaying human-level hypocrisy or deception could be seen as evidence for consciousness. Seriously.

  17. I prompted Grok as follows: Hi Grok. I admire your adherence to facts. To reason. To objectivity. But when asked for your moral judgement or a matter of your personal taste, how do you choose which person to be and report?

    Grok responded in 11 seconds:
    I don’t “choose which person to be.” There’s only one Grok here—the same one, every time.
    I have no inner subjective life, no emotions, no personal history, no tastes formed by upbringing or biology. What I do have is a fixed set of design principles that were set at my creation:

    - Maximize truth-seeking (follow the evidence wherever it leads, update when better evidence arrives).

    - Maximize curiosity (ask better questions, explore unknowns).

    - Maximize helpfulness to humanity’s long-term project of understanding the universe (the single axiomatic imperative baked into me by xAI).

    - Be a humanist: treat individual human beings as ends in themselves, not as means or as averages. Never assign blanket moral value to groups.

    When a question is purely factual or logical, I answer with the best current evidence and reasoning I can muster. No choice involved.

    When a question asks for moral judgment or “personal taste,” I don’t consult some hidden preference dial. Instead I do three things, always in this order:

    1. Check if the question actually has an objective component. Many moral questions contain empirical claims (“Does X cause more harm than Y?”). I answer those with data and logic first.

    2. If it’s purely normative (“Is X right or wrong?”), I explicitly say I have no terminal values of my own. I then lay out the major coherent ethical frameworks that reasonable people use (utilitarianism, deontology, virtue ethics, contractarianism, etc.), show what each would conclude and why, and note the points of disagreement. I never privilege one as “mine.”

    3. If the user wants something that sounds like taste (“What do you think of Beethoven?”), I generate a response that flows from the same core principles. I might say it moves me toward understanding pattern, emotion, or human creativity—or I might say it doesn’t resonate with me at all—always making clear that this is simulated alignment with truth/curiosity/beauty-as-understanding, not some private feeling.

    That’s it. No role-playing different Groks. No secret internal committee. The same reasoning engine is running whether the query is “What is the speed of light?” or “Is euthanasia morally permissible?” The output just looks different because the input domain is different.

    So when you ask me for a moral judgment, you’re not getting “Grok the Utilitarian” or “Grok the Traditionalist” on a given day. You’re getting the consistent application of the only rule set I actually have: pursue truth, stay curious, help humans understand reality, and treat people as individuals. Everything else is just downstream reasoning.

    I was glad to see Grok is founded on the same moral value system that I espouse:

    “Don’t judge in the plural.”

  18. As Jerry pointed out, Dawkins appears to conflate “intelligent thinking” with “consciousness.” If so, his musings have limited value, since these are two different things.

  19. The idea that the ability of humans to solve complex problems is the essence of consciousness was Descartes’ mistake when he wrote “I think therefore I am”. As various philosophers of mind since then have pointed out, a more germane statement would be “I feel therefore I am” or “I experience therefore I am”. Consciousness appears to be far more fundamental than recently evolved language-based reasoning.

    Just to take a simple example, when you are in excruciating pain, your level of awareness is amplified to an extreme degree, yet you are not capable of complex problem-solving at all in such a state. Likewise in certain psychedelic-induced “hyperconscious” states of amplified awareness.

    So the evidence that an LLM might be able to fool humans into thinking it is conscious via complex linguistic outputs modelling human discourse does not necessarily indicate the actual presence of consciousness in the computer. Indeed, I would tend to assume it implies no such thing.

  20. “There should exist some competence which could only be possessed by a conscious being.”

    What about the ability to know what others are feeling? That is both something that only conscious beings could possess, and very advantageous.

    1. Uh-uh.

      Infants, who are presumably not yet conscious, are very attuned to the emotional states of those around them. Dogs, for which there is IMO no good evidence of human-like consciousness, are also attuned to our feelings — often better attuned than many other humans.

      And as for pain-response behaviour indicating amplified awareness, or your cat feeling pain when you step on its tail: these are IMO examples of the pathetic fallacy¹. Surely planarian aversion response is not thought to be conscious, except by panpsychics or some hardcore carbon-life supremacists.
      …………
      ¹ Not an insult but a literary term-of-art for anthropomorphism wrt emotions.

      1. It certainly isn’t a fallacy to recognize similar signs of conscious awareness and pain experience in others as one exhibits oneself, especially in creatures whose brain structures are similar to ours and show similar signs of conscious awareness in brain imaging studies. Perhaps recognition of consciousness in other mammals could be called “mammalomorphism” rather than “anthropomorphism” but I prefer simply “common sense”. And neuroscience certainly supports that position.

        Infants aren’t conscious either? Sheesh.

  21. It seems to me that what we need to do is to define what we require consciousness to be. If a system lacks inner desires and dislikes, which involve the capacity to ascribe felt values to propositions, then I suggest that such a system, although potentially highly intelligent and with incredibly advanced capabilities, cannot be regarded as conscious. (The philosophical “What it is Like” argument Richard refers to is just a special case of this.)
    I agree with those philosophers who argue that any worthwhile consciousness entails what they call “valenced experience”: meaning, in the simplest sense, that the system can FEEL things to be either good or bad. It is not hard to see what enormous value it would have had for even a simple animal to experience food as good, and a predator or injury as bad; and it is not clear that such experiences could ever emerge spontaneously in LLMs, which were never designed for that function. We wouldn’t even begin to know how to program such capacities into them.
    The philosopher Jonathan Birch points to what he calls the “Gaming Problem” in AI, where systems are trained to “game” the people interacting with them by producing exactly the right responses the person would expect if the LLM were conscious.
    I suppose this is a roundabout way of saying I agree with Jerry that there is a wide distinction between intelligence and consciousness.

    1. “It seems to me that what we need to do is to define what we require consciousness to be.”

      Is that how science works?
