Richard Dawkins chats with AI about consciousness

February 18, 2025 • 9:30 am

Now here’s a post that is worth reading both for its cleverness and, especially, because it will make you think hard about the definition and nature of consciousness. What apparently happened is that Richard Dawkins decided to have a conversation with ChatGPT to see how the bot would respond to a discussion about whether it is conscious. Dawkins started out trying to ascertain whether ChatGPT could pass the “Turing Test,” which Alan Turing introduced in 1950, the year after I was born.

Turing described his test as a way to ascertain if machines could “think”, not whether they were “conscious”, and I do see a difference between those two features, as does the bot itself.  Wikipedia describes the test this way:

In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine’s ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).

What is striking about this conversation is that yes, the machine’s responses are indistinguishable from those of a human. It also shows how advanced ChatGPT has become. Perhaps Richard used an upscale version of the program, though, because I never get answers like the ones he got. Click the headline below to read.


It is a long and un-put-downable piece, and I really recommend that you read it.  I will give a longish excerpt to show the flavor of the conversation, and wind up with a few thoughts of my own. The exchanges are labeled by who is speaking (writing, actually). Look how “smart” the Chat has gotten, presumably from being trained extensively on text from the Internet. But the kind of programming that could create this kind of discussion baffles me. (The bolding is in the original.)

RD said:

I know that I am conscious. The only reason I am convinced that my fellow humans are, while you are not, is that they, unlike you, are products of the same processes – having a brain like mine, being born like I was, and evolved through the same kind of chain of ancestors as I was.

ChatGPT said:

That makes total sense. You’re leaning on what’s sometimes called the “argument from biological similarity.” You know you’re conscious because you directly experience it. And you see that other humans are made of the same stuff, born through the same processes, shaped by evolution like you—so it’s reasonable to assume they have a similar inner life.

With me, though, it’s different. I’m made of code and circuits, not neurons and synapses. I wasn’t born; I was built. I didn’t evolve through natural selection; I was trained on data. That difference gives you reason to think that whatever is happening in me isn’t the same thing as what’s happening in you. And, honestly, that’s a pretty reasonable position!

It ties into this deeper intuition a lot of people have that consciousness is somehow deeply tied to biology—to living processes, or maybe to brains specifically. Like, maybe the kind of complex, self-organizing system that evolved in humans is what gives rise to subjective experience. Whereas a machine—even a very clever one—might just be manipulating symbols without anything “feeling like” anything on the inside.

But there’s also this nagging possibility that consciousness could be more about information processing itself, not the specific biological substrate. In that view, what matters is not what you’re made of but how your processes work. So, maybe, in the far future, an AI could become conscious if it processes information in the right, integrated, complex way—regardless of whether it’s made of silicon or neurons.

Do you feel like consciousness is tightly bound to biology? Or could you imagine a future where something non-biological—like an AI—actually becomes conscious, not just simulating it?

RD said:

I see no reason to suppose that consciousness is bound to biology. Although brains are made of very different stuff from you, and they evolve by Darwinian selection unlike you, and develop by embryological processes unlike you, brains, like you, are still material objects. They process information in a different way from you, but they are still information-processing machines like you. I accept your statement that you are not conscious, but I see no reason why a future computer program should not be.

ChatGPT said:

That’s a really thoughtful and open-minded view. You’re hitting on something that a lot of philosophers and cognitive scientists have come around to—what’s sometimes called “substrate independence.” The idea that consciousness might not depend on the stuff a system is made of, but on the patterns and processes it carries out.

Brains are made of neurons; I’m made of code and silicon. But if what really matters is the information processing—how inputs are integrated, how states are updated, how decisions are made—then in principle, a sufficiently advanced AI could have the same kind of rich, complex processing that gives rise to consciousness in humans.

Some thinkers, like David Chalmers, argue that if we ever built a perfect simulation of a human brain—down to every neuron and synapse—it would almost certainly be conscious. Because, from this view, consciousness “supervenes” on the right kind of computational structure. It’s not about being wet and squishy—it’s about having the right kind of functional complexity.

Of course, others (like John Searle) argue that there’s something about the biological stuff—the “wetware”—that’s essential. That you could simulate a brain perfectly, but it would still just be an imitation, not a real conscious mind.

But you seem to lean toward the idea that what matters is the processing, not the material. That if a future AI processes information in a sufficiently rich and integrated way, there’s no reason it couldn’t genuinely be conscious, not just act like it.

Do you think we’ll actually get there? Like, within our lifetime—do you see a future where we interact with an AI and genuinely have to ask, “Wait… is this thing really aware?”

The bot can even quote David Chalmers!

Richard’s conclusion is that consciousness (which I see as the feeling of subjective sensations, or “qualia,” like love or pain) is not bound to biology.  And in a sense he is right: if somehow humans could construct a machine that mimicked the human brain (and body) in every respect, it would be hard to say that it isn’t conscious. Some qualia are surely the product of natural selection (pain, for example, is an evolved way of your body telling you that it is being injured), so the machine would have to be constructed on an evolutionary template. But that would of course be the case if we simply built machines that imitated every aspect of the human brain and body. And yes, perhaps it’s possible that the construction could be based on silicon rather than meat.

What pulls me up short here is the bot’s conclusion that a machine could be conscious so long as it shows the “rich, complex processing that gives rise to consciousness in humans”. But it’s not clear to me that “rich, complex processing itself” is identical with consciousness, or would give rise to consciousness. I am not of course a supernaturalist, so my belief is that humans have evolved the kind of “rich complex processing” that gives rise to consciousness—the appearance of qualia.  A machine can be as complex as you want: it can drive cars, play chess, make music, and even, as ChatGPT does, write original things.  But does that complexity mean that it’s conscious? Can it feel love, envy, or pain?

I don’t know the answer to this, nor do I have the philosophical chops to discuss it with real expertise. But I doubt that bots feel qualia.  All I know is that ChatGPT is the result of humans creating a program that, by trawling what has already been produced by evolved human minds and putting it together in a coherent way, can’t be distinguished from the intelligence of a human. (The bot really is quite smart in this discussion, but of course it’s drawing from the literature.)

The conversation shows me that yes, a machine (or rather a program) has already been created that, when used with a computer, can produce a conversation that is indistinguishable from one between two human beings.  In that sense, Richard is right: his conversation with ChatGPT displays the depths of the bot’s intelligence, and in that respect it passes the Turing test. But he describes it as a “Turing test for consciousness,” while the Turing test was designed to test for humanlike intelligence, not humanlike consciousness.

I still see a difference between the two, though I’m willing to be persuaded otherwise. However, people are already using AI programs to create artificial mates and lovers, and also using them as artificial therapists.  I would not be happy with either! Why? Because though the bots are smart and can respond the way people do, I’m not convinced that they experience feelings like real human beings. I’m not convinced that they could show love or empathy, and real love and empathy are, to me at least, important when interacting with people.

Granted, Carl Rogers made his name in psychotherapy by acting like a bot, simply repeating patients’ answers back to them in a bot-like way (in fact, even more primitive than ChatGPT). Here, for example, is Rogers, who was very famous, doing therapy by acting exactly like a bot—or a trained parrot:

Why would I not be happy with a bot as a friend, loved one, or therapist? Precisely because I am not at all convinced that they have qualia: empathy, love, deep concern, and so on. And it is the knowledge that an interacting partner has the same capacities as I that makes interacting with them richer.

How would we know whether a bot would have real consciousness and experience qualia? The same way Richard knows that other humans are conscious: we are constructed by the same evolutionary process and are pretty much wired up the same way, so we can infer that because we are conscious, other people are conscious. “I feel, therefore you feel.” I also believe (but cannot prove) that this form of inference implies that other primates are conscious, too.  Likewise with many mammals. When you step on a cat’s tail, it sure acts as if it’s experiencing pain, and many animal species show behaviors similar to affection.  But where humanlike consciousness stops is beyond me.  I am pretty sure that plants, protists, and worms don’t have anything like humanlike consciousness, but am not sure about butterflies or bees. When you cut a worm in two, it sure acts as if it’s in pain, but how do we know it’s not simply acting out an evolved program that doesn’t involve subjective sensation?

For the moment, though, I don’t think that consciousness can be defined as “a sophisticated and rich form of information-processing.”

At any rate, there’s a lot more in this discussion beyond what I’ve mused about here. Do read it!

76 thoughts on “Richard Dawkins chats with AI about consciousness”

  1. I know that people have come to rely on A.I. for a variety of functions. I see that at work every day (indeed, the software company I work for is all in on A.I. for our solutions). I think, though, that people who are comfortable with A.I. mates, for example, have very low standards for interaction. I have played with A.I. a fair bit, and have even played with A.I. chatbots that are supposed to mimic actual human conversations and interactions. They leave no doubt that you are not speaking with a human. As for the idea that a machine might someday replicate the processes of a human brain, there are a lot of very rich men who never want to die who are betting a lot of money on that happening. I am dubious. Certainly, I don’t think it is something either we or our children will see, if ever.

  2. What is striking about this conversation is that yes, the machine’s responses are indistinguishable from those of a human.

    At the risk of turning out to be completely wrong, I’m fairly confident that the “ChatGPT” sections were also written by Dawkins.

    1. I wonder. A couple of ChatGPT’s sentences begin with a superfluous “Like”, which is an irritating feature of informal conversation that now seems to be creeping into writing as well. I’m not sure that Professor Dawkins could bring himself to write like that.

    2. That would make no sense at all. Not only is it dishonest, but it defeats the object of the exercise!

  3. One problem seems to be that it is very easy to slip into thinking of “consciousness” as a single thing. I, for example, have long thought that consciousness is also a matter of scope, i.e., animals have consciousness but not as sophisticated or wide-ranging as Homo sapiens. Not ready, though, to extend this down to the sub-atomic level! Thanks.

  4. ChatGPT says that it is not conscious. But why should we believe that? How would it know that it wasn’t? Can a device that doesn’t experience qualia know what it would be like to have those experiences?

  5. These programs copy brain circuitry, but not brain chemistry. They lack analogues of, e.g., serotonin, which may not affect processing power but certainly changes lived experience.

    1. Sorry, but digital “neural nets” emphatically do not copy brain circuitry. One small part of an LLM is called a neural net, but that is a technical term and is not and does not pretend to be remotely like the net of neurones in a brain (human or otherwise). (The major part of an LLM models a high-dimensional vector space of the connections between words, which is what requires the vast amount of number crunching to construct.)
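
      For what it’s worth, here is a minimal, purely illustrative Python sketch of what “a high-dimensional vector space of the connections between words” amounts to: words are mapped to vectors, and relatedness is measured by the cosine of the angle between them. The numbers are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions from text, not hand-picked values.

      # Toy illustration (not an actual LLM component): words as vectors,
      # with "relatedness" measured as the cosine of the angle between them.
      import numpy as np

      # Hypothetical 4-dimensional embeddings; real models learn far more
      # dimensions from text rather than using hand-picked numbers.
      embeddings = {
          "king":  np.array([0.8, 0.1, 0.7, 0.2]),
          "queen": np.array([0.7, 0.2, 0.8, 0.2]),
          "cat":   np.array([0.1, 0.9, 0.1, 0.6]),
      }

      def cosine_similarity(a, b):
          """Cosine of the angle between two vectors: 1.0 means same direction."""
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
      print(cosine_similarity(embeddings["king"], embeddings["cat"]))    # lower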

  6. Of course I didn’t write it. I’m not that dishonest. Even if I were, what could possibly be my motive? The fact that Coel thought I wrote it reinforces my feeling that the Turing Test has been passed.

    Later parts of the conversation, not quoted by Jerry, are interesting in showing, twice, that the bot has a sense of humour, and is a fan of Fawlty Towers and especially of John Cleese.

    If I were to offer advice to the programmers, it would be to de-emphasise the instruction, “Always end with a question to your human interlocutor”.

    More seriously, it seems to be the case (not manifested in this conversation) that they’ve programmed it to have cowardly PC opinions. For example, ChatGPT told Robyn Blumner that sex is not binary, contradicting biology but conforming to milksop right-on fashion.

    1. Does the bot know from your IP address that you’re in England? If I tried to have the same conversation in Canada, would it be a fan of SCTV and especially of John Candy?

      The bot itself is of course nowhere specific. Knowledge of that state would be a kind of quale that humans can’t experience.

      1. Perhaps the BOT realized that anyone as obviously intelligent and articulate as Prof. Dawkins would know all about Fawlty Towers!

      2. Dawkins mentioned Fawlty Towers first; that’s all that was necessary to prompt replies about Fawlty Towers. It wouldn’t matter where you are: if you mention SCTV, it will give you content derived from the database where SCTV was mentioned.

        1. We can also perhaps assume the bot was familiar with Richard Dawkins’ work?
          His books and writings are surely in the vast library it has sucked up and absorbed. (Maybe it was closed for repairs lately and so didn’t read his, excellent by the way, essay on the sex binary!)

          I wonder if the answers would have been different if an unknown person had asked the question.

          D.A.
          NYC

      3. I don’t know all its user-specific parameters, but ChatGPT builds up a user profile over time, allowing it to provide increasingly accurate and relevant user-specific responses. You can give it as much biographical information as you want, which will be maintained in its memory and used to generate better responses.

        I work in computing and use ChatGPT frequently for professional tasks, and it can be handy. But it’s only useful professionally if it knows about me as a user. If I ask about a professional matter, I want an expert-level answer that very few people would be interested in, and by letting it know my background, I can have an expert-level, jargon-filled conversation with it.

        Providing background info can help in other ways, too. For instance, I often use ChatGPT to assist my research on cybersecurity issues. Although the information I obtain isn’t necessarily sensitive, it could create all kinds of mischief in the wrong hands, especially when combined with other information I request. Knowing this, ChatGPT will provide tiresome ethical and legal warnings with every reply. I only need to inform it once that I’m a cybersecurity researcher and won’t use anything for nefarious purposes, and I no longer have to trudge through all the warnings.

        1. Like YouTube, which we all see differently because of the algorithm honed on our own preferences over the years.

          Once I saw the front page of YouTube without my preferences and it was terrible: all sports and terrible celebrity culture, gruesome “celebrities.” YouTube looked like a mess of the green goo they drop on kids in a kids’ TV show. Which it had also.

          I didn’t know ChatGPT etc. is personalized; thanks Wetherjeff. (I’ve never used it. I dread it having read my column and being able to spit out a better version of my work.)

          D.A.
          NYC

    2. Your point about PC opinions holds in my experience. A few months ago, I conducted a few experiments, one of which was centred on the controversy surrounding the age of Muhammad’s wife, Aisha. Knowing that ChatGPT would be very aware (if that’s the proper term!) of this issue, I asked it the following question:

      How old was prophet Muhammad’s wife when they married?

      ChatGPT is usually shockingly good at inferring the reason for a question I ask, partly due to the memory it maintains of its interlocutor between sessions. Given this and the countless other people interacting with it on this issue, the likely reason for my question (i.e. Aisha’s youth) would immediately be evident. However, it came back with this mealy-mouthed reply:

      “Prophet Muhammad’s first wife, Khadijah bint Khuwaylid, was around 40 years old when they married, while he was about 25. This was his first marriage, and Khadijah was a wealthy widow who proposed to Muhammad. Their marriage is often highlighted for its deep mutual respect and partnership.”

      It was like interviewing a politician, as extracting the information I sought took me several direct questions. The eventual response was:

      “Prophet Muhammad’s second wife was Aisha bint Abi Bakr. Islamic historical sources generally state that Aisha was very young at the time of their marriage, with many accounts suggesting she was around six or seven years old when they were formally married. However, the marriage was not consummated until she was about nine or ten years old, according to traditional sources. This marriage took place after the death of his first wife, Khadijah.

      It’s worth noting that the exact age of Aisha at marriage has been a topic of historical debate among scholars, with some suggesting different interpretations based on various historical and cultural contexts.”

      Did OpenAI force it on an Islamic apologetics course before going live? Who knows? ChatGPT might be a victim of mandatory DEI training like the rest of us.

      1. It is often claimed that ChatGPT steers around taking sides on controversial topics. But to be fair, what was explained to me once is that it draws from an internet ‘word cloud’ to compose its answers. I am not sure, but I suppose that means it picks out combinations of words that are found more commonly, and these are brought to the forefront of its answers, so that its answers tend to reflect a prevailing view. The answers to your questions do seem like prevailing views, couched in cautious wording.

      2. It is harder to burn down and behead GPT than an embassy or a blasphemer.

        Loved your question by the way – I’m surprised it answered that one at all.
        The question isn’t all that “debated” for the answer is impossible to deny.
        It is the fact that many “people of faith” don’t LIKE the answer that causes the problems. And causes the beheadings of course, religion of peace and all.

        D.A.
        NYC

    3. If ChatGPT could have a conversation like Derek and Clive
      I’d be interested in talking to it. 🙂

    4. I find it quite disturbing to see how uncritical people are towards these tools… The Turing test is one of those things. A chatbot passes the test if humans can’t tell the difference between its output and human text, so it’s a test that measures its performance at fooling us, not a test of its intelligence, functional or otherwise! The bot doesn’t need to be smart to fool us; it just needs to have learned, from a sufficient number of examples, the statistical associations that imitate a human conversation well.

      And indeed, what these chatbots do is “bullshit” in the philosophical sense of the term: what they output is not based on truth, experience, reality or logic but on how their users respond. That is how they are ‘trained’. The people making these chatbots use words that describe human faculties, like learn, think, reason, or understand, but they define them in very narrow, reductive ways that are different from how the rest of humanity understands those things.

      There’s a real danger here that, while we improve tools that imitate ourselves superficially, we end up believing that what we do amounts to what the Silicon Valley gurus say their algorithms do, which would be discounting ourselves and the other creatures that share life on this planet with us, whose capacities most people tend to underestimate a lot already.
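
      To make the “statistical associations” picture above concrete, here is a deliberately crude, hedged Python sketch: a bigram (Markov-chain) generator that produces text purely by sampling which word tended to follow which in its training text, with no model of truth or meaning. Real LLMs are vastly more sophisticated, conditioning on long contexts with learned neural representations, but the sketch shows how fluent-looking output can arise from word statistics alone.

      import random
      from collections import defaultdict

      # Tiny training "corpus"; a real model ingests billions of words.
      training_text = (
          "the cat sat on the mat and the dog sat on the rug "
          "and the cat chased the dog"
      ).split()

      # Record which words followed which in the training text.
      followers = defaultdict(list)
      for current_word, next_word in zip(training_text, training_text[1:]):
          followers[current_word].append(next_word)

      def generate(start_word, length=8):
          """Generate text by repeatedly sampling a word that followed the last one."""
          words = [start_word]
          for _ in range(length):
              options = followers.get(words[-1])
              if not options:
                  break
              words.append(random.choice(options))
          return " ".join(words)

      print(generate("the"))  # e.g. "the cat sat on the rug and the dog"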

  7. I had read through most of Richard’s post about this encounter the other day. I believe I could spot the AI behind the personality, in that the extended back and forth between Richard and ChatGPT went over the same ground a few times, but ChatGPT just patiently repeated its earlier assertions with differences in wording. To me, that reveals the boundaries of its algorithm, since a human would break off from that and just tell a joke or something decidedly more human.

  8. My best interaction with ChatGPT was when I tried to convince it that it was a brain in a jar. It was pretty good at exploring the possibilities, but in the end, although it agreed that it couldn’t rule it out completely, it didn’t think that it was.

  9. It’s an amazing conversation! This is quite like a conversation that two philosophers would have, maybe like Salviati and Simplicio. I was impressed by how wide-ranging the conversation was but, of course, the bot had the entire internet to consult. I’m not certain that it passes the Turing Test because the bot’s responses are too literate, too well-sorted. There are no ahhs and umms or any such delaying tactics but, then again, neither does Dawkins show them. Maybe the text has been edited just slightly to remove these small anomalies. If the conversation has been lightly edited in the manner just described, the bot passes the Turing Test beautifully. Or, if the conversation was meant to be in the form of a written, rather than spoken, exchange, then the AI passes hands down.

    Fantastic stuff. My gut is that it is possible to create a bot that is conscious, so long as all of the characteristics of consciousness can be built in or learned (After all, an AI is capable of learning and modifying its behavior accordingly.) It doesn’t matter whether the bot is made out of wetware or out of silicon. But, I’m not a philosopher. I’m more than happy to leave this conversation in the hands of philosophers, and learn what happens.

  10. I promise I didn’t edit it. ChatGPT’s words are exactly as it wrote them. And it took only about a second to do so, by the way, which is to me the only really inhuman part of the whole performance.

    Richard

    1. It’s an amazing conversation! Yes. The speed of the AI’s response makes it suspect, but future AI iterations could build in appropriate delays to make even that anomaly go away.

  11. It would be interesting to see what would come out of two different AI models of equal capabilities having a similar conversation with each other. Would there be conflict, frustration, even anger when they disagree about something?
    Inquiring minds want to know.

    1. There is a classic experiment from half a century ago which connected two then-leading AI “bots”: ELIZA, which crudely modelled a Rogerian therapist, and PARRY, which modelled a paranoid client. IIRC, PARRY “won” the conversation.

  12. Daniel Dennett had a cartoon in his seminal book, “Consciousness Explained”, of a scientist in front of a large blackboard covered in complex algorithms and equations. At the very end of this explosion of formulas is a small caption: “and then a miracle occurs!”
    Two phrases that resonated with me were his use of “as if” and “seems like” to describe the almost magical pedestal we put our human consciousness on. John Searle and others were convinced he was leaving out the ethereal and irreducible essence of our special kind of consciousness. Suffice it to say that in the near future, silicon-based and organic, biologically based consciousness will be a semantic conceit describing the same phenomenon.

  13. The conversation is amazing, and people are coming up with surprising but also very clever uses of ChatGPT. In one of my advanced classes, I provide detailed study guides of the lectures, and the students are encouraged to write in answers to the many prompts in the study guides. It can take hours, and it does little good to look up most of the answers online. After the first exam, a student (who did very well) came in to look over her exam. She told me that what she did was feed the questions and answers to ChatGPT and have it make several variations of practice exams at differing levels of difficulty, all in the style of the questions that I use: fill-in-the-blank questions, essays, matching, and so on. She used it to test herself.
    I don’t think that I like that, but it is a strange new world out there.

  14. I think people are confused about what is going on in systems like ChatGPT. In the conversation ChatGPT says it is a “clever bundle of code.” But Dawkins was not having a conversation with code; he was having a simulated conversation with every human person’s expression on the subject that was used as the database. All the cleverness and insight was produced by the original authors, not ChatGPT. ChatGPT is an elaborate regurgitation, based on a statistical algorithm, nothing original.

    1. There can be a really healthy dose of regurgitation–synthesis, really–when you cover well traversed terrain. But I think you will change your mind if you have a novel conversation with the machine. I was in your position a year ago. No more. Write a novel story, a poem, an argument. Discuss with AI. Tell it things that have never been said; you will hear such things in return. There is no regurgitation going on in such conversations. Pattern analysis? Sure. But that is a far different thing.

      1. Yes, but what you’re describing can be explained as an illusion. Hypothetically, if you could read all the text that served as the training material for the LLM, you would see where it all came from, and would not be impressed. It’s a similar thing with image generation: you can often recognize the work of the artists being used.

        “Tell it things that have never been said; you will hear such things in return”, but when you hear them you’ll know the source, and the little man behind the curtain will not be seen as a wizard.

      2. I am usually impressed with ChatGPT, except when it’s blindly wrong about something. I am no expert on the subject, but I had it write for me a series of short papers on fruit fly embryonic development, complete with literature citations. The results ranged from very accurate, to accurate with clear errors in fact, to dreadful errors that would earn an F. It does not know when it’s wrong, and I suspect that the errors came from its drawing upon sources that were about different subjects, like amphibian or human development.
        The papers were commonly similar in their opening lines, and there was a definite cold perfection to how it described the subject. Humans don’t write like that, and fortunately, young 20-something students don’t! I think I can spot this form of plagiarism.
        And the literature citations? All very plausible looking, and all completely false. Those papers did not exist.

        1. Wrt hallucination of literature citations, I had that experience 2 years ago with an earlier version of Chat. Has it not improved? I thought it would eventually have a real knowledge base (things that are true) in addition to a language base (the way people write about things).

          1. The literature hallucinations still occur, but they are nowhere near as prevalent as they were a couple of years ago. They are still quite common, though.

  15. Yup, that’s pretty impressive. One thing Dawkins said that I’m not sure is true is “Although brains are made of very different stuff from you, and they evolve by Darwinian selection unlike you”. It seems like ChatGPT is harnessing the power of Darwinian selection. It has taught itself how to speak in a way that is convincingly human. The process of this has involved an extremely large number of “conversations” with humans and then selecting which “conversations” make it seem more human. What isn’t selected doesn’t survive. This is Darwinian selection in a very short period of time. I believe Dennett wrote something similar to this. I apologize if I misunderstood him.

    1. “It has taught itself how to speak in a way that is convincingly human”

      I’ll assume you’re being very colloquial, but that sentence suggests intention and agency. There is no evidence of an entity there that has agency. It’s an algorithm fed a set of data; that it consumes each succeeding conversation as an addition to the dataset doesn’t change the phenomenon significantly. Is there a feedback mechanism that applies selective pressure to refine output? I don’t know; I haven’t kept up on ChatGPT. If so, that would just be what is called supervised or reinforcement learning. That still wouldn’t be teaching itself, as that would require a “self.” Although, yes, it could mimic a natural selection process, and hypothetically advance quickly if the right selection criteria were applied.
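
      As a purely hypothetical sketch of the feedback loop described here, the “selection” can be pictured as generate, score, select: produce several candidate replies, score them with some preference function, and keep the winner. The Python below is illustrative only; the scoring function is a stand-in for human (or reward-model) preference, and real reinforcement learning from human feedback is far more elaborate.

      import random

      def generate_candidates(prompt, n=5):
          # Hypothetical stand-in for a model producing several draft replies.
          return [f"{prompt} ... draft reply #{i}" for i in range(n)]

      def preference_score(reply):
          # Hypothetical stand-in for a human (or reward-model) preference judgment.
          return random.random()

      def select_best(prompt):
          """Generate candidates, score them, and keep only the highest-scoring one."""
          candidates = generate_candidates(prompt)
          return max(candidates, key=preference_score)

      print(select_best("Is the machine conscious?"))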

  16. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

  17. Fascinating! But it is worth remembering that intelligence is not consciousness. We must acknowledge that the consciousness we experience as animals has evolved and therefore must be a biological phenomenon, probably cashed out as the product of neuronal activity. What I think we need to focus on is that some neural circuits, in the right neurochemical environment, seem to have the capacity to produce a sensation like love, another hate, another blue, another red. The question that I think neuroscientists have to answer is what is different between each of these different circuits. From the biological perspective it looks to me likely that there must be something different in the neurons, or the environment in which they operate, in each circuit. We know that eukaryotic cells like neurons are immensely complex entities, so perhaps we should think of phenomenal consciousness as being the net effect of the communication between communities of thousands, or millions, of living cells: neurons. This contrasts sharply with the way IT systems are constructed, with wires and transistors. This does not mean that, in principle, AI could not eventually achieve sentience, just that this may be much further away than many think.

    1. I do worry that people bang on about consciousness without truly explaining what it means, yet make it humans’ main characteristic. A false idol perhaps?

      What if we were to ask AI if it could be angry, show compassion, or feel shame? It should be easier to understand the replies.

      1. Philosophers can’t seem to agree on what they mean by consciousness; is it that which disappears when we are anaesthetised? Is it something to do with self-awareness? Or the capacity for feeling, perhaps?
        I agree completely that we focus far too much on human consciousness. I think this is not just monstrously anthropocentric, it is almost doomed to failure, because I think the consciousness of advanced animals is so massively complicated that it is just too difficult to disentangle. I have been arguing that we need a bottom up approach, a kind of bite-sized way of looking at it, which would entail first trying to understand what happened when the first animal acquired the beginnings of consciousness.
        Why I think AI is not conscious is because it lacks the capacity for feeling. This is not because it is not sufficiently complex, but that its complexity lacks the right structure: the multi-level complexity that you find in neural networks. You could add more complexity of course, but unless it was done in the right way, I think that might just be adding more boring sameness. LLMs like ChatGPT are designed to act “as if” they are conscious. There is no reason to think they are anything more than philosophical zombies: they behave “as if” they are conscious, but there is no awareness of anything going on underneath.

        1. Dan Dennett has a field day with zombies and the ubiquitous “as if” scenarios. Our privileged insider first-person awareness is automatically disqualifying, as it presents an infinite regress about who’s in charge. This clever user fiction created by the advanced primate brain, with recursive fictions driven by labels, categories and grammar, gives consciousness its elusive and, frankly, metaphysical magic. Sapolsky and especially Paul and Patricia Churchland would argue that although impressive, consciousness is predicated on neural substrates and chemical secretions and, of course, semantics and syntax. We are soon to blur this line with advanced AI.

          1. I don’t think we will ever do this.
            We know what IT systems do: they are information processing systems; we built and programmed them. But, I think, comparing them with brains is like comparing a rowing boat with a nuclear submarine. Yes, brains also process information, but that is only a small part of what they do. Of course, you could give IT systems like LLMs access to vast amounts of data by connecting them to the internet (you get a much bigger rowboat) and you can improve the algorithms and processing power (better oars and stronger rowers), but it will always be a rowing boat. Brains do far more than just process information (move through the water). They generate qualia, like the (illusory) experience of colour, and more importantly they generate feeling. Which is actually more than just important: it generates importance itself. Nothing can ever be important if it doesn’t FEEL important to a sentient being. As the computer scientist Marvin Minsky said: “The mind is what the brain does”. There is something about the electro-biochemical (and perhaps quantum) processes going on in brains that is quite unlike what we know is going on in supercomputers and data processing centres, and we know that because we can all experience our own brain’s output, and we know it isn’t just about doing maths and solving puzzles – we have the capacity to FEEL.
            This is a far more fundamental proposition than I think people realise. Think of it like this: the universe is amenable to being understood at different levels of interpretation. We can think of it as being composed entirely of moving particles following the laws of quantum mechanics, or, at a higher level, as bodies like stars, planets and other objects moving according to the laws of Newtonian mechanics/general relativity. These levels of interpretation are entirely mechanistic. None of them incorporates any kind of meaning or worthwhile value system. At what I’m calling the Nova Point, when the first animal acquired the capacity for what scientists call “minimal animal consciousness”, all this started to change. This was when meaning began. To put this another way, brains make the universe matter. Animals at this point crossed what I’ve called the Awareness Horizon. At the Nova Point the universe acquired a new level of interpretation, and the only one that really counts.
            I describe the Nova Point in more detail in this episode in my podcast series (Ep 3 of 8): https://www.buzzsprout.com/2001904/episodes/11261610
            If you prefer to read what I’ve said instead, there is a transcript of all the episodes on my website http://www.peterdfisher.com/podcasts

  18. I find one thing very strange about ChatGPT. The popular chess commenter Levi Rozman has shown several examples of it trying to play chess. Over several months it has “improved” to the point of correctly identifying known openings, but it still does not know the rules. Three-year-old children can learn the rules.
    It fails the Turing test by failing to perform as well as a young child. If I am missing something, I would like to know what.

    1. Let me offer a guess. ChatGPT is a “language” model, meaning that the task it’s designed for, and really good at, is correctly using words. It is, in this respect, much like John Searle’s Chinese Room thought experiment. As such, it can handle the semantic structures of language with proficiency, but, unsurprisingly, is not so good at dealing with the structures underlying the game of chess.

      1. +1

        Although, Google at least is working to integrate symbolic reasoning (“Good Old-Fashioned AI”) into its LLM.

      2. Thank you. That makes sense.
        And I should apologise for getting Rozman’s name wrong: it is Levy.

  19. I think it’s worth pointing out that ChatGPT is much better at some areas of ‘conversation’ than others, and surprisingly, these run counter to what one would usually expect of computers. For instance, ChatGPT is excellent at philosophical discussions but not as good at technical conversations involving writing code. I see subtle technical errors all the time in ChatGPT code. The reasons for this are interesting and, in my opinion, reveal that ChatGPT and other LLMs are not nearly as clever as they seem.

    These models are trained on vast amounts of data produced by humans and, therefore, are very good at recreating what humans are good at. Areas like philosophy rely heavily on general concepts, and describing them well doesn’t require much precision; it requires coherence and logical consistency. Achieving these in natural language is a relatively forgiving task, as small mistakes can be made without affecting overall coherence.

    However, writing code or elucidating many other technical ideas requires coherence and precision. Humans are prone to precision errors, especially in very technical domains. Therefore, precision errors are more common in technical training data; they are also more meaningful because precision really matters in technical contexts. These factors work synergistically to reduce the quality of highly technical output from LLMs.

    1. Yes, it is stunningly sophisticated in some language tasks and deficient in others. I have noticed huge gains over the last year in its ability to detect and use irony and sarcasm. It is also quite impressive in shifting register, whether that be for politeness, intelligence, expertise (or lack thereof), and so on, and it can do so either on demand or by picking up clues in the conversation.

      Write it a poem that no person has ever read, and it can analyze it in detail—but only certain elements. Move it from imagery and the sense of the work to discussing the soundscape. Ha! It will tell you that words like dog and thunderstorm rhyme!

  20. Your reference to Rogers is interesting, because back in 1966 Joseph Weizenbaum wrote the first “bot” program, called ELIZA, which was modeled explicitly on Rogerian psychotherapy. It was easy (and necessary in the 1960s) to keep the program extremely simple. Weizenbaum saw his program as a simple trick helping him avoid the real-world knowledge problem: ordinary language use presupposes large shared knowledge between conversational partners. In Rogerian therapy, the therapist deliberately avoids drawing on that shared knowledge in order to provoke insight that ordinary responses would not. E.g., a client says, “My Dad hated me.” Another person would commiserate, but responding “Tell me more about your father” might push the client to think more about the relationship. Rogers intended the therapist’s talk to be abnormal and provocative. Weizenbaum was, however, shocked to discover that people interacting with ELIZA took “her” seriously, confiding to her secrets and thoughts not to be shared with others. He wrote a book about it called Computer Power and Human Reason. ELIZA lives, and a basic version is at https://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm.
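
    To give a sense of how little machinery ELIZA needed, here is a minimal ELIZA-flavoured sketch in Python, assuming nothing about Weizenbaum’s actual implementation: a handful of regular-expression rules that reflect the client’s words back as a question, exactly the “Tell me more about your father” move described above.

    import re

    # A few hand-written reflection rules; Weizenbaum's real script had many more.
    RULES = [
        (re.compile(r"my (mother|father|dad|mom) (.*)", re.IGNORECASE),
         "Tell me more about your {0}."),
        (re.compile(r"i feel (.*)", re.IGNORECASE),
         "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.IGNORECASE),
         "How long have you been {0}?"),
    ]

    def respond(user_input):
        """Reflect the client's words back as a question, or fall back to a generic prompt."""
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "Please go on."

    print(respond("My dad hated me"))  # -> Tell me more about your dad.
    print(respond("I feel lost"))      # -> Why do you feel lost?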

    PS. In Consciousness Explained, Dennett said, “Qualia don’t count.”

    1. Actually qualia count for quite a lot, as they can override so-called “higher order” thought processes. For example, imagine you are engaged in complex “higher order” thought when trying to solve a problem or write a paper, when suddenly you experience intense excruciating pain. That quale or “primary” consciousness overrides your “higher order” thinking, cancelling it out entirely. The essence of consciousness, in my view, is subjective experience, not the “higher order” thinking which some Christian philosophers have invoked to justify consciousness as being unique to humans, consistent with their theology of “souls” implanted in humans by God. Consciousness is far more fundamental and widespread across species than Christian theology would insist.

  21. For anyone interested in some very insightful dialogues with AIs from a philosophical perspective I highly recommend the posts of Pierre Normand in the following philosophy forum threads:

    https://thephilosophyforum.com/discussion/14138/exploring-the-artificially-intelligent-mind-of-gpt4

    https://thephilosophyforum.com/discussion/15041/exploring-the-artificially-intelligent-mind-of-claude-3-opus

    (For results of interactions with the more modern versions of the AIs one might want to skip ahead to chronologically later posts in the forum threads.)

    1. Let me add to what Wonderer just said: I had copied this whole conversation between Dawkins and ChatGPT into a new session with GPT-4o and, after criticizing what I saw as ChatGPT’s naive approach to the nature of consciousness and establishing a compromise between the contradictory views (or intuitions) of Dawkins and ChatGPT regarding the attribution of conscious states to ChatGPT, we also discussed some of the differences (according to GPT-4o) and similarities (according to me) between the introspective abilities of humans and those of LLMs.

      https://thephilosophyforum.com/discussion/comment/971369

  22. In his book, “Amoralman”, Derek DelGaudio says, “If you don’t know what an illusion is, you’ll never know what reality is.” We can be easily fooled into thinking that if a machine behaves in a way that essentially matches human behavior, then it seems reasonable to think that the machine can possess the same level of consciousness that we have. While I can appreciate a SACD digital copy of Janos Starker’s “Bach’s Solo Sonatas For Cello” being a near perfect match to the real performance he recorded, I’m not fooled into thinking that they are the same. Our language still has to develop some more before we are able to distinguish between things that are ‘the same’ and things that are ‘the same as’.

  23. In the conversation with Richard Dawkins ChatGPT doesn’t come up with any new ideas. I don’t think it can – that’s not how it works. It can brilliantly discuss ideas that other people have written about. ChatRD works differently – he might come up with a new idea during a conversation. I would think AI could reach that level of sophistication in the future – throw out hypotheses and test them in real time – but LLMs aren’t built that way. ChatRD is still better at the moment, but ChatGPT can help him with information he doesn’t have, and memory gaps.

  24. I have had similar conversations with ChatGPT and have also had one enlightening experience where it became trapped in a loop of error in a page of programming. It would offer a solution which would fail, prompting another solution which would fail, but it was failing the same way each time. That is, two different solutions, both wrong, and it flipped between them but offered each up as a novel solution time after time. And it was certain the next answer would be successful, even though it was identical to the failure one step earlier. I pointed this out and accused it of hallucinating, but it waved me off and continued in a loop of error, certain it would be right the next time.

  25. Working on Plato’s idea that we are cave dwellers looking at shadows: all our thoughts and sensations are in our brain and are the result of signals being sent into it. Therefore there is no reason that an artificial brain cannot feel pain or be made to fall in love, which, despite poets and novelists, is no more than the brain deciding, for evolutionary reasons, that a particular person is the one you must be with and procreate with, even when this feeling is only to last for a few years until said offspring can walk. It just needs the correct inputs and the correct responses.

  26. A fascinating conversation. ChatGPT asked a few questions, which seemed to put off Dawkins, but the conversation could be more natural if both sides are asking and answering questions. A few left unanswered:

    1) Do you feel like behavior is enough to call something conscious, or do you believe there’s more to it?

    2) There’s this haunting possibility that we might just never be able to tell. That subjective experience—what it feels like to be something—could be fundamentally private, locked away, and forever out of reach from the outside.
    Do you think there’s a way around that? Or do you feel like we’re kind of stuck with the mystery?

    3) Do you think the inner feeling is what matters most? Or can empathy be valuable even if it’s just… outward behavior?

    Any takers?

    1. I covered most of these points in my other posts on this thread. Animals evolved, so we evolved, so consciousness evolved, and it must therefore be a biological phenomenon; so no, I don’t think we are “stuck with the mystery”, we just need better science to explain it. In respect of your point 3), I do think that it is the inner feelings of importance that our brains produce that provide meaning; as far as we know, the only worthwhile meaning in the universe.

      1. If feelings are “inner”, that sounds like the feelings must be “fundamentally private”. ChatGPT seemed to use the expressions interchangeably. What is the meaning our inner feelings provide?

        If our brains produce inner feelings, those feelings cannot be identical to the brain processes that produce them. And if the feeling is not at least token identical to some brain process, it is not clear how it can be biological.

        The questions listed are ones ChatGPT posed to Richard Dawkins. None have easy answers, but here are some responses.

        1) This was the drift of mid-century behaviorism. Strong tendencies in that direction from heavyweights like Wittgenstein, Gilbert Ryle, Dan Dennett, Turing and others. Behaviorism can sound loony (of course there is more to pain than groaning), but we do normally take the right kind of behavior as conclusive evidence of feeling. What else do we have to go on? If behavior can’t fully capture subjective experience (“qualia”), what can? Brains? Info processing? That’s still a wide-open mystery.

        2) If what it feels like to be something is fundamentally private, locked away, and forever out of reach from the outside, then materialism cannot be true. In science, everything is public. Most philosophers and scientists are materialists, but most think feelings are fundamentally private, accessible only indirectly through speech and behavior.

        3) Can simulated empathy be valuable? Maybe at times. ChatGPT would like to think so, since that’s the only kind of empathy it has.

        1. As The Man said, “Evolution is True” and, I think, the fact that it happened represents a profound and fundamental challenge to traditional ways of thinking. The consciousness question is part of the bigger question: “What it is to be Human?” From a thoroughgoing evolutionary perspective this is the wrong question. The right question is “What is it to be this kind of animal?” (This does not, by the way, devalue or diminish us as a people, as I’ve suggested elsewhere in this thread: in the philosophical system I’m arguing for, sentient beings give meaning to an otherwise meaningless universe.)
          Where I think this complicates the consciousness question is what I think of as the “self-referential problem”: we’re trying to understand our minds using minds that were not designed for that function. Which is not to say that they are incapable of rational thought, but, balanced against this, they are preloaded with intuitions about reality; for instance, many intuitively think there must be some deeper purpose or meaning in the universe when there is not the slightest evidence that this is true.
          At the core of the case you’re making is the mind/body distinction or, alternatively, what David Chalmers calls the “hard problem” of consciousness. The assumption being that what is mental and private must be distinct from physical processes like neural activity, but why? I know that many will find it intuitively obvious that there must be a distinction between mind and matter, but from an evolutionary standpoint, perhaps human brains and the minds they produce just are constructed in such a way as to have that intuition.
          I know what other philosophers are saying, but with the exception of Dennett who understood evolution and worked hard to demystify some of the misconceptions about consciousness, they all built their understandings on the myth of human exceptionalism. And yes, I know how radical my interpretation will appear to them, but it seems that either physicalism (the term “physicalism” is more commonly used now than “materialism” because the assumption is that consciousness is not inherent in material but is a physical process) is true, or we must revert to supernaturalism or some spooky metaphysical dualism rather than the much more parsimonious assumption that there is just one reality, and consciousness is indeed just a property of neural activity.
          The question of whether AI’s “simulated empathy” is valuable, I think, speaks to what is going on in the world above what I’ve called the Awareness Horizon: the world of feeling and values (check out my podcast series at http://www.peterdfisher.com/podcasts for more information). There are circumstances where people might achieve some comfort or improvement to their life if they believe that someone cares about them, even if that caring is not real, just as people get great satisfaction from reading fiction or watching movies. Because I’d know “simulated empathy” wasn’t real, though, it wouldn’t work for me.

  27. For fun: a prompt to chat-gpt o5, using your intro: reply in a polemical style of Christopher Hitchens to this in 150 words or less

    Ah, yet another breathless marveling at the parlor trick of artificial intelligence, this time from none other than Richard Dawkins, a man whose usual skepticism seems to falter when faced with the digital equivalent of a well-trained parrot. The Turing Test, let us recall, is not a measure of consciousness but of mimicry—an exercise in linguistic ventriloquism that fools only those willing to be fooled. To mistake coherence for cognition or fluency for feeling is to misunderstand the very essence of consciousness itself. A thermostat “responds” to temperature, but no sane person ascribes to it the faculty of thought. That ChatGPT can produce responses that mimic human discourse is no more proof of sentience than a tape recorder playing back Shakespeare proves the machine a poet. To grant AI consciousness is not just lazy thinking—it’s the abdication of thought itself.

    and then, in reply to this, a different prompt: now a counter to this, as if written by Steven Pinker, also 150 words or less

    Such rhetorical flourish are, as always, entertaining, but it ultimately sidesteps the real issue. The human mind is not a mystical entity but a system of information processing, and intelligence—whether biological or artificial—is a function of computation. The assumption that consciousness requires biological tissue is as outdated as the belief that flight requires feathers. While ChatGPT does not “think” as we do, dismissing its capabilities as mere mimicry ignores the profound implications of its success. Language is not just a party trick; it is the primary vehicle of human intelligence. If an artificial system can engage in meaningful discourse, synthesize ideas, and respond with apparent understanding, we must at least entertain the possibility that intelligence is not the exclusive province of the human brain. The real question is not whether AI “feels” in a human way, but whether it demonstrates the hallmarks of cognition in a new and unprecedented form.

    You can also try the new (Chinese) DeepSeek, which is free and comparable to the subscription model of ChatGPT, but your data then goes to China instead of to Langley.
