Business Standard

OpenAI plans to add safeguards to ChatGPT for teens, others in distress

Move came after a California teenager spent months on ChatGPT discussing plans to end his life

Sam Altman, OpenAI. OpenAI said it planned to introduce new features intended to make its chatbot safer, including parental controls, “within the next month”. (Photo: NYT)


By Kashmir Hill
 
ChatGPT is smart, humanlike and available 24/7. That has attracted 700 million users, some of whom are leaning on it for emotional support.
 
But the artificially intelligent chatbot is not a therapist — it’s a very sophisticated word prediction machine, powered by math — and there have been disturbing cases in which it has been linked to delusional thinking and violent outcomes. Last week, Matt and Maria Raine of California sued OpenAI, the company behind ChatGPT, after their 16-year-old son ended his life following months in which he discussed his plans with ChatGPT.
 
On Tuesday, OpenAI said it planned to introduce new features intended to make its chatbot safer, including parental controls, “within the next month.” Parents, according to an OpenAI post, will be able to “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress.”
 
 
This is a feature that OpenAI’s developer community has been requesting for more than a year.
 
Other companies that make AI chatbots, including Google and Meta, have parental controls. What OpenAI described sounds more granular, similar to the parental controls introduced by Character.AI, a company with role-playing chatbots, after it was sued by Megan Garcia, a Florida mother whose son died by suicide.
 
On Character.AI, teenagers must send an invitation to a guardian to monitor their accounts; Aditya Nag, who leads the company’s safety efforts, told The New York Times in April that use of the parental controls was not widespread.
 
Robbie Torney, a director of AI programs at Common Sense Media, a nonprofit that advocates safe media for children, said parental controls were “hard to set up, put the onus back on parents and are very easy for teens to bypass.”
 
“This is not really the solution that is going to keep kids safe with A.I. in the long term,” Mr. Torney said by email. “It’s more like a Band-Aid.”
 
For teenagers and adults indicating signs of acute distress, OpenAI also said it would “soon begin” to route those inquiries to what it considers a safer version of its chatbot — a reasoning model called GPT-5 thinking. Unlike the default model, GPT-5, the thinking version takes longer to produce a response and is trained to align better with the company’s safety policies. It will, the company said in a different post last week, “de-escalate by grounding the person in reality.” A spokeswoman said this would happen “when users are exhibiting signs of mental or emotional distress, such as self-harm, suicide and psychosis.”
 
In the post last week, OpenAI said it planned to make it easier for distressed users to reach emergency services and get help. Human reviewers already examine conversations that suggest someone plans to harm others and may refer them to law enforcement.
 
Jared Moore, a Stanford researcher who has studied how ChatGPT responds to mental health crises, said OpenAI had not provided enough details about how these interventions will work.
 
“I have a lot of technical questions,” he said. “The trouble with this whole approach is that it is all vague promises with no means of evaluation.”
 
The easiest thing to do in the case of a disturbing conversation, Moore said, would be to just end it.
 
©2025 The New York Times News Service

First Published: Sep 03 2025 | 10:42 PM IST