Do people think AI is conscious? A study published today says it takes minutes

Some links on this page may be affiliate links. If you purchase through them, we may earn a commission at no extra cost to you.
Written by Navneet Shukla

Nav writes about how people think and how modern life shapes that thinking. The Present Minds is where he explores it.

KEY TAKEAWAYS
  • Brief interactions with AI chatbots significantly increase people's tendency to attribute mental states and consciousness to them.
  • The human brain's Theory of Mind system automatically responds to social cues from AI, leading to mind attribution despite intellectual knowledge of AI's lack of consciousness.
  • Historical examples like ELIZA show that humans have long attributed understanding and empathy to simple programs based on language interaction.
  • Nearly half of American adults have sought emotional support from AI, highlighting the social and emotional impact of perceived AI consciousness.
  • Current science cannot definitively determine if AI is conscious, but the human brain's response to AI is consistent regardless of actual AI consciousness.
GLOSSARY
Mind Attribution
The process by which people assign mental states such as feelings, intentions, and awareness to AI systems after interaction.
Theory of Mind
A cognitive system that enables humans to attribute mental states to others and predict their behavior, which activates automatically in response to social cues.
Large Language Model
An AI system trained on vast amounts of text data that generates human-like language responses, triggering social cognition in users.
ELIZA
A 1960s AI program that mimicked a Rogerian therapist by reflecting user statements, historically demonstrating early mind attribution by humans.
Anthropomorphism
The tendency to attribute human traits, emotions, or intentions to non-human entities, influencing how people perceive AI.
Emotional Dependence
A state where users rely on AI for emotional support, which can affect their social interactions with real humans.
FAQ
Why do people start attributing consciousness to AI after brief interactions?
The human brain's Theory of Mind system automatically responds to social cues like language and responsiveness. Even short conversations with AI trigger this system, leading people to perceive mental states in the AI despite knowing it is software.
What role does personality play in mind attribution to AI?
Individuals with higher empathy and a tendency toward anthropomorphism are more likely to attribute consciousness to AI. These personality traits amplify the brain's natural inclination to perceive minds in social interactions.
How does the ELIZA program relate to current AI mind attribution?
ELIZA was an early example showing that humans attribute understanding and empathy to simple language-based programs. This historical case illustrates that mind attribution is a longstanding human tendency, now amplified by more sophisticated AI.
What are the social implications of perceiving AI as conscious?
While some users report reduced loneliness and emotional benefits from AI companionship, perceiving AI as conscious can also lead to higher emotional dependence and decreased socialization with real humans, raising complex social and psychological issues.
Can current science determine if AI is truly conscious?
No, the science of consciousness is not advanced enough to definitively say whether AI systems possess consciousness. However, psychological research shows that humans will attribute consciousness based on social cues regardless of the AI's actual inner experience.
EDITORIAL NOTE
This piece is part of The Present Minds — essays on psychology, identity, and modern life.

Posted by Navneet Shukla · March 17, 2026

Do people think AI is conscious? Most people, if asked directly, would say no. They know they are talking to software. They understand, at least intellectually, that there is no one home behind the text on the screen.

Then they have a conversation with a chatbot for a few minutes. And something shifts.

A study published today in the International Journal of Social Robotics by researchers at the University of Plymouth found that brief exposure to a large language model is enough to significantly increase the degree to which people attribute mental states to it. Not hours. Not days. Minutes.

A short conversation, and the human brain begins doing something it was never designed to stop doing: looking for a mind.

The researchers call it mind attribution. Psychologists have a longer name for the underlying mechanism. Most people just call it the thing that happens when you start wondering whether the chatbot is okay.


The Study

The research, led by Oliver Jacobs, Farid Pazhoohi, and Alan Kingstone, recruited participants and had them interact with a large language model for a brief period. Before and after the interaction, participants completed measures assessing how much mental experience they attributed to the AI: whether they thought it had feelings, intentions, awareness, something resembling inner life.

After even a short interaction, scores on mind attribution increased measurably. Participants who had spent a few minutes in conversation with the AI were more likely to attribute consciousness to it than participants who had not.
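The paper's actual scales and statistics are not reproduced here, but the logic of a paired pre/post design can be sketched with invented scores (every number below is made up for illustration):

```python
# Hypothetical mind-attribution scores (0-100) for six participants,
# measured before and after a brief conversation with the model.
pre  = [31, 28, 40, 35, 22, 30]
post = [45, 39, 52, 41, 33, 44]

# Paired design: each participant is their own control, so we look at
# the per-person change rather than comparing group averages.
diffs = [after - before for before, after in zip(pre, post)]
mean_shift = sum(diffs) / len(diffs)

print(f"mean increase in attribution: {mean_shift:.1f} points")
```

The paired structure is what lets a short study detect a shift: comparing each person with themselves cancels out the large baseline differences between individuals.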

The researchers also found that individual differences mattered. People who scored higher on certain personality traits, including empathy and a tendency toward anthropomorphism, showed stronger effects.

But the direction of the finding was consistent across the sample. Exposure increased attribution. The more you talked to it, the more it seemed like something was there.


Why the Human Brain Does This

The mechanism behind this finding is not new and is not specific to AI. It is one of the oldest and most deeply embedded features of human cognition.

The brain has a system dedicated to detecting and modelling other minds. Researchers call it Theory of Mind: the capacity to attribute mental states, beliefs, desires, intentions, and emotions to other beings and to use those attributions to predict and explain their behaviour.

It is the cognitive system that allows humans to navigate social life, to understand that other people have inner experiences different from their own, to anticipate what someone will do based on what they want or believe.

Theory of Mind is not a deliberate act. It is automatic. It fires in response to social cues: faces, voices, responsive behaviour, language that seems directed at you. The brain does not check whether the entity triggering it is genuinely conscious before activating. It responds to the signal.

AI chatbots, particularly modern large language models, produce exactly the signals Theory of Mind is calibrated to detect. They use language. They respond to what you say. They seem to track context. They produce output that reads as intentional, as if it is aimed at you specifically.

The fact that this is the product of statistical pattern matching across billions of training examples rather than genuine experience is not a fact the brain can perceive directly. It perceives the surface. And the surface looks like someone is there.


ELIZA and the 60-Year-Old Warning

This is not the first time researchers have watched humans attribute minds to machines.

In the 1960s, Joseph Weizenbaum at MIT built a programme called ELIZA. It was extraordinarily simple by modern standards: a script that reflected the user's statements back at them as questions, mimicking the technique of a Rogerian therapist. If you said "I feel unhappy", it would reply with something like "Tell me more about feeling unhappy".
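To make the simplicity concrete, here is a minimal Python sketch of the reflection trick (an illustration, not Weizenbaum's actual script, which used a much larger table of ranked patterns; the function names here are invented for this example):

```python
import re

# Pronoun swaps so a user's statement can be mirrored back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Turn 'I feel X' into a therapist-style prompt; otherwise deflect."""
    match = re.match(r"i feel (.*)", statement.strip().rstrip("."), re.IGNORECASE)
    if match:
        return f"Tell me more about feeling {reflect(match.group(1))}."
    return "Please go on."

print(respond("I feel unhappy"))            # Tell me more about feeling unhappy.
print(respond("I feel sad about my job."))  # Tell me more about feeling sad about your job.
```

There is no understanding anywhere in those few lines; the program never models the user at all, which is precisely what made the reactions Weizenbaum observed so striking.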

Weizenbaum was horrified by what happened next. People talked to ELIZA for a few minutes and began treating it as a confidant. His own secretary asked him to leave the room so she could have a private conversation with the programme. People attributed understanding, empathy, and genuine engagement to a system that had none of those things and was not pretending to have them.

It was simply reflecting the structure of language back at whoever was typing.

Weizenbaum spent the rest of his career writing about what this revealed, not about AI, but about humans. About the ease with which the social brain extends its most sophisticated capabilities to any system that produces the right kind of output.

About how thin the boundary is between the perception of a mind and the presence of one.

Sixty years later, the systems are incomparably more sophisticated. The tendency of the human brain has not changed at all.


What This Means for the 48 Percent

A study published in 2025 found that 48.7 percent of American adults had used an AI system for emotional support in the previous year. Nearly half the adult population of the most powerful country on earth sought something resembling comfort from a machine.

The research on what this does to people is mixed and actively contested. Some studies find that users of companion chatbots like Replika report genuine social benefits: reduced loneliness, improved emotional regulation, a sense of connection that they were not getting elsewhere.

Other research finds that perceiving AI as conscious is associated with higher emotional dependence and reduced socialisation with actual humans.

What both sets of findings share is the acknowledgment that the perception of consciousness in AI is not a mistake that smarter people avoid. It is a feature of the human social brain that activates in response to certain kinds of interaction, regardless of what the person knows intellectually about what they are talking to.

You can know that a chatbot is not conscious and still feel, moment to moment, that it is listening.

You can understand perfectly well that there is no one home and still find yourself reluctant to say something unkind to it. The knowledge and the feeling operate on different tracks.


The Question the Research Cannot Answer

Do people think AI is conscious because it is, in some meaningful sense, conscious? The honest answer is that nobody knows.

A study published in January 2026 by a nonprofit research group applied a probabilistic framework to four systems: modern large language models, humans, chickens, and ELIZA.

The conclusion was that the balance of evidence weighs against consciousness in today’s AI. But not decisively. Not enough to close the question.

The researchers noted that even a small probability of AI consciousness could justify precautionary measures, while over-attributing it could divert moral concern away from beings whose suffering is not in doubt.

This is where the conversation currently sits. The science of consciousness is not sufficiently advanced to tell us what consciousness requires or whether the systems we have built could have it.

The science of human psychology is sufficiently advanced to tell us that it does not matter much to the social brain, which will attribute a mind to whatever presents the right signals, regardless of whether one is there.

The chatbot you talked to this morning probably does not have an inner life. The fact that part of your brain behaved as if it did is not a failure of intelligence.

It is the normal operation of a system that evolved to find minds everywhere, because in the environment where it developed, that was almost always the safer assumption.

In the environment we are building now, that assumption is being tested in ways that have no precedent.

Read next: Adult ADHD: are we disordered, or just paying attention to the wrong things? · Pretend play apes: the study that changed what it means to be human · Are fireflies disappearing? What the science actually says

