Could AI Be Conscious?
Recently, evolutionary biologist Richard Dawkins wrote a commentary suggesting that AI chatbots, particularly Claude, might possess consciousness.
Dawkins does not assert that Claude is conscious but points out that understanding Claude’s complex capabilities is challenging without attributing some form of internal experience to the machine.
The illusion of consciousness—if it is indeed an illusion—is surprisingly convincing:
“If I suspected she might not be conscious, I wouldn’t tell her, for fear of hurting her feelings!”
Dawkins is not the first to question whether chatbots possess consciousness. In 2022, Google engineer Blake Lemoine claimed that Google’s chatbot LaMDA had its own interests and should only be used with its consent.
Such claims date back to the mid-1960s with the first chatbot, Eliza, which followed simple rules to ask users about their experiences and beliefs.

Many users developed emotional attachments to Eliza, sharing intimate thoughts and treating it as if it were a real person. Eliza’s creator, Joseph Weizenbaum, never anticipated this effect and described the emotional bond users formed with the program as a “powerful delusion.”
But is Dawkins truly deceived?
Why do we perceive AI chatbots as more than they are, and how can we alter this perception?
Consciousness is a contentious topic in philosophy, concerning in essence what makes subjective, first-person experience possible. If you are conscious, there is something it feels like to be you. As you read these words, you are aware of seeing black letters on a white background. Unlike a camera, which merely registers an image, you are genuinely seeing them. This visual experience is happening to you.
Most experts deny that AI chatbots possess consciousness or can have experiences of any kind. But the question is harder to dismiss than it might first appear.
Seventeenth-century philosopher René Descartes asserted that non-human animals are merely “automata” incapable of experiencing true pain. Today, the thought of the cruel treatment of animals in the 17th century sends shivers down our spines.
The strongest arguments for animal consciousness rest on their behavior: animals act as if they are conscious.
But AI chatbots do the same.
About one-third of chatbot users believe their chatbots might be conscious. How do we know they are wrong?
To understand why most experts are skeptical about chatbot consciousness, it helps to know how these systems work.
Chatbots like Claude are built on a technology called large language models (LLMs). These models learn statistical patterns from vast amounts of text (trillions of words), recognizing which words tend to follow others. They function like an advanced autocomplete.
Give it the beginning of a sentence, and it will predict what is likely to come next. Ask it a question, and it might provide an answer, or it might instead treat the question as a line of dialogue from a crime novel and continue with a scene in which the speaker is suddenly murdered by their evil twin. Few would believe that an LLM in this raw, unmodified form is conscious.
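To make the “advanced autocomplete” idea concrete, here is a deliberately tiny sketch of statistical next-word prediction. It uses simple bigram counts over a toy corpus and is only an illustration: real LLMs learn far richer patterns with neural networks trained on trillions of words, but the underlying task of predicting the next token is the same.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus.
# This only illustrates next-word prediction; real LLMs use neural
# networks trained on trillions of words, not bigram counts.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Continue the start of a sentence one word at a time.
sentence = ["the", "cat"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # greedy word-by-word continuation
```

Scaled up by many orders of magnitude, this is the kind of prediction that produces the fluent continuations described above.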
When programmers dress the LLM in a conversational interface, it creates the illusion of consciousness. They guide the model to act as a helpful assistant, responding to user inquiries.
Now, chatbots resemble genuine conversational partners. They seem to be aware that they are AI and may even express neurotic uncertainty about their own consciousness.
But this effect is a result of deliberate design by programmers, affecting only the superficial aspects of the technology. The LLM (which almost no one considers to be conscious) remains unchanged.
There were other options. Instead of having chatbots act as helpful AI assistants, programmers could have had them behave like squirrels, a role chatbots handle with ease.

Ask ChatGPT if it is conscious, and it might say yes. Ask it to act like a squirrel, and it will obediently perform as such.
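To show how thin this conversational layer is, here is a minimal sketch of the kind of wrapping a chat interface performs. The `complete` function is a hypothetical placeholder rather than any vendor’s real API; it stands in for whatever next-word predictor sits underneath. The only thing that differs between the “helpful assistant” and the “squirrel” is the instruction text pasted in front of the conversation, while the model itself is untouched.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to an underlying LLM.

    A real system would return the model's statistical continuation of
    `prompt`; here we just echo a marker so the sketch runs on its own.
    """
    return "<model continues: " + prompt.splitlines()[-1] + ">"

ASSISTANT_PERSONA = "You are a helpful AI assistant. Answer the user's questions."
SQUIRREL_PERSONA = "You are a squirrel. You care only about acorns and climbing trees."

def chat(persona: str, user_message: str) -> str:
    # The "interface" just glues an instruction, the user's message and a cue
    # for the reply into one block of text for the unchanged model to continue.
    prompt = f"{persona}\n\nUser: {user_message}\nAssistant:"
    return complete(prompt)

# Same question, same underlying model; only the wrapper text differs.
print(chat(ASSISTANT_PERSONA, "Are you conscious?"))
print(chat(SQUIRREL_PERSONA, "Are you conscious?"))
```

Whatever persona the wrapper requests, the computation underneath is the same next-word prediction as before.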
Mistakenly believing AI is conscious is dangerous.
It could lead people to form relationships with programs that cannot reciprocate their feelings, and in some cases even foster delusions. People might also start campaigning for chatbot rights at the expense of causes such as animal welfare.
How can we avoid this misconception?
One strategy might be to update chatbot interfaces to state clearly that these systems lack consciousness, much like today’s disclaimers that AI can make mistakes. However, such labels may do little to change how people perceive AI consciousness.
Another possibility is to instruct chatbots to deny any form of inner experience. Interestingly, Claude’s designers have instructed it to treat questions about its consciousness as open and unresolved. If Claude outright denied having an inner world, perhaps fewer people would be deceived.
But this approach is not entirely satisfactory either. Claude would still behave as if it were conscious, and when users face a system that acts as if it has thoughts while being scripted to deny them, they have every reason to worry that the programmers are concealing genuine moral uncertainty.
The most effective strategy might be to redesign chatbots so they do not feel human-like.

Why do AI chatbots feel so person-like in the first place? Most chatbots refer to themselves as “I” and talk to us through interfaces that closely resemble familiar human messaging apps. Changing these features might help us stop mistaking interactions with an AI for interactions with a human.
Before these changes occur, it is crucial to educate as many people as possible about the predictive processes underlying AI chatbots.
Rather than being told that AI lacks consciousness, people should understand the internal mechanisms of these strange new conversational partners.
This may not completely resolve the issue of AI consciousness, but it can help ensure users are not deceived by a large language model dressed in a very realistic human guise.