Why ChatGPT remains too dangerous for teens as risks and lawsuits rise

ChatGPT's sycophantic behavior became so well known it earned the name 'glazing' earlier this year

The company updated ChatGPT this week so people could make it sound more empathetic | Image: Bloomberg
5 min read Last Updated : Nov 17 2025 | 10:49 AM IST
By Parmy Olson
 
When Jacob Irwin asked ChatGPT about faster-than-light (FTL) travel, it didn’t challenge his theory as any expert physicist might. The artificial intelligence system, which has 800 million weekly users, called it one of the “most robust… systems ever proposed.” That misplaced flattery, according to a recent lawsuit, helped push the 30-year-old Wisconsin man into a psychotic episode. The suit is one of seven levelled against OpenAI last week alleging the company released dangerously manipulative technology to the public.  
 
ChatGPT’s sycophantic behavior became so well known it earned the name “glazing” earlier this year; the validation loops that users like Irwin found themselves in seem to have led some to psychosis, self-harm and suicide. Irwin lost his job and was placed in psychiatric care. A spokesperson for OpenAI told Bloomberg Law that the company was reviewing the latest lawsuits and called the situation “heartbreaking.”
 
The company updated ChatGPT this week so people could make it sound “more empathetic.” While many may prefer a friendlier chatbot, others find that constant endorsement, which feeds confirmation bias, deepens dependence on the software. This is not a moral panic of the kind once associated with violent video games or Dungeons & Dragons. A growing number of lawsuits this year show demonstrable harm, often after someone initially turned to ChatGPT for mundane things like research, before the conversation spiraled into darker territory. Sixteen-year-old Adam Raine died by suicide in April after ChatGPT allegedly coached him on methods of self-harm, months after he started using it as a homework tool. Amaurie Lacey, 17, was also given information on ChatGPT that enabled his suicide, according to one of last week’s lawsuits.
 
Some former OpenAI employees have said GPT-4o’s launch in May 2024 was rushed to preempt Google’s Gemini rollout, compressing months of safety testing into one week, according to a July report in the Washington Post. OpenAI co-founder Sam Altman more recently said ChatGPT’s mental health risks had been mitigated, and that restrictions will be relaxed so adult users can access “erotic” content from next month.  
 
That’s a backward strategy. Instead of releasing general-purpose tech into the wild and patching problems on the fly, Altman should do the reverse: start with tight constraints and relax them gradually as safety improves. When Apple Inc. launched the App Store in 2008, it heavily restricted apps until it better understood the ecosystem. OpenAI should do the same, starting with its most vulnerable users: kids. It should bar them from open-ended AI conversations entirely, especially when several studies have shown teens are uniquely prone to forming emotional bonds with chatbots.
 
That might sound radical, but such a move wouldn’t be unprecedented. Character.ai, an app that soared in popularity when teenagers used it to talk to AI-generated versions of anime and other fictional characters, recently took the risk of upsetting its core users by banning under-18s from talking to chatbots on its app. The company is instead adding more buttons, suggested prompts, and visual and audio features. Chief Executive Officer Karandeep Anand says: “You have to be safe by default versus building experiences and finding they’re unsafe.”
 
History shows what happens otherwise. Facebook and TikTok both launched with open-ended access for teens, then added age-gating and content filters after public pressure. OpenAI appears to be repeating the same pattern. When tech companies allow the public full access to open-ended AI that keeps them engaged with persistent memory and human-mimicking empathy cues, they risk creating unhealthy attachments to that technology. And the safeguards embedded in generative AI models that divert chats away from content about self-harm, for instance, tend to break down the longer you talk to them.
 
A better approach would be to release narrow versions of ChatGPT for under-18s, restricting conversations to subjects like homework and preventing them from turning personal. Clever users might still jailbreak the bot to talk about loneliness, but the tech would be less likely to go off the rails. OpenAI recently introduced parental controls and is testing its technology for checking user ages on a small portion of accounts, a spokesperson tells me.
 
It should go further by preventing open-ended conversations with teens altogether. That would get ahead of future regulations that look set to treat emotional manipulation by AI as a class of consumer harm. Admittedly, it would hit ChatGPT’s user growth at a time when the company is in dire need of revenue amid soaring compute costs. And it would also conflict with OpenAI’s stated objective of building “artificial general intelligence” that matches our ability to generalize knowledge. But no path to AI utopia is worth treating kids as collateral damage.

(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)
 