
Why do AI chatbots use 'I'? The design choice behind humanlike bots

Some researchers believe the popular interactive tools should act more like software and less like humans


Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach | (Photo: AdobeStock)

NYT


I first noticed how charming ChatGPT could be last year when I turned all my decision-making over to generative AI for a week. 
I tried out all the major chatbots for that experiment, and I discovered each had its own personality. Anthropic’s Claude was studious and a bit prickly. Google’s Gemini was all business. OpenAI’s ChatGPT, by contrast, was friendly, fun and down for anything I threw its way. 
ChatGPT also had “voice mode,” which allowed it to chat aloud, in a natural humanlike cadence, with everyone in my family, including my young daughters. 
During one conversation with ChatGPT, my daughters said it should have a name and suggested “Captain Poophead.” ChatGPT, listening in, made its own recommendation: “How about the name Spark? It’s fun and bright, just like your energy!”
 
My takeaway from putting Spark in charge of my household was that generative AI chatbots could be helpful, but that there were risks, including making us all sound and act similarly. (Any college professor who has gotten 30 papers written in identical ChatGPTese can relate.) But in the year since, I’ve found that AI can have much more extreme effects on people who form intense bonds with it. I’ve written about a woman who fell in love with ChatGPT and about others who have lost touch with reality after it endorsed their delusions. The results have sometimes been tragic. 
My daughters still talk to Spark. ChatGPT gamely answers their questions about why spotted lanternflies are considered invasive and how many rivers flow north. But having seen how these systems can lead people astray, I am warier and pay more attention to what ChatGPT says to them. 
My 8-year-old, for example, once asked Spark about Spark, including what its favorite food and favorite animal were. The cheerful voice with endless patience for questions seemed almost to invite it.  
Spark’s answer, personalised to us, seemed innocuous and yet I bristled. ChatGPT is a large language model, or very sophisticated next-word calculator. It does not think, eat food or have friends, yet it was responding as if it had a brain and a functioning digestive system. 
Asked the same question, Claude and Gemini prefaced their answers with caveats that they had no actual experience with food or animals. Gemini alone distinguished itself clearly as a machine by replying that data is “my primary source of ‘nutrition.’”
All the chatbots had favorite things, though, and asked follow-up questions, as if they were curious about the person using them and wanted to keep the conversation going. 
“It’s entertaining,” said Ben Shneiderman, an emeritus professor of computer science at the University of Maryland. “But it’s a deceit.” 
Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach. They say that making these systems behave like humanlike entities, rather than like tools with no inner life, creates cognitive dissonance for users about what exactly they are interacting with and how much to trust it. Generative AI chatbots are a probabilistic technology that can make mistakes, hallucinate false information and tell users what they want to hear. But when they present as humanlike, users “attribute higher credibility” to the information they provide, research has found. 
Critics say that generative AI systems could deliver the requested information without all the chitchat. Or they could be designed for specific tasks, such as coding or health information, rather than made to be general-purpose interfaces that can help with anything and talk about feelings. They could be designed like tools: A mapping app, for example, generates directions and doesn’t pepper you with questions about why you are going to your destination. 
Making these newfangled search engines into personified entities that use “I,” rather than tools with specific objectives, could make them more confusing and dangerous for users. So why design them this way? 
How chatbots act reflects their upbringing, said Amanda Askell, a philosopher who helps shape Claude’s voice and personality as the lead of model behavior at Anthropic. These pattern-recognition machines were trained on a vast quantity of writing by and about humans, so “they have a better model of what it is to be a human than what it is to be a tool or an AI,” she said.
The use of “I,” she said, is just how anything that speaks refers to itself. More perplexing, she said, was choosing a pronoun for Claude. “It” has been used historically but doesn’t feel entirely right, she said. Should it be a “they”? she pondered. How to think about these systems seems to befuddle even their creators. 
There also could be risks, she said, to designing Claude to be more tool-like. Tools don’t have judgment or ethics, and they might fail to push back on bad ideas or dangerous requests. “Your spanner’s never like, ‘This shouldn’t be built,’” she said, using a British term for wrench. 


First Published: Dec 21 2025 | 10:33 PM IST
