Business Standard
Google's Gemini AI reportedly unsafe for teens despite guardrails

Common Sense Media flags Gemini's kid-focused versions as "High Risk," warning that added filters aren't enough to protect children from unsafe or inappropriate content

Aashish Kumar Shrivastava New Delhi

Common Sense Media recently released a study based on risk assessments of Google’s Gemini AI products, concluding that the company’s kid-focused tiers still pose significant risks for younger users. In the risk assessment shared with TechCrunch, the nonprofit group said Gemini’s “Under 13” and “Teen Experience” options appear to be the adult model with extra filters layered on, rather than systems designed from the ground up for children.
 
While the assessment acknowledged that Gemini correctly identifies itself to children as a computer — a feature linked to lower risk of delusional attachments — it flagged the platform for other failures, including the potential to provide inappropriate content and unsuitable mental-health advice to minors.
 

What are the findings?

Common Sense’s core critique is architectural: children’s AI experiences should be purpose-built, not retrofitted from adult offerings, the group says. The tests reportedly showed Gemini could still produce content about sex, drugs, alcohol, and “unsafe” mental-health suggestions that younger users may not be prepared to handle. The group therefore rated both the Under-13 and Teen tiers “High Risk” overall, arguing that age-specific guidance, tone, and safeguards need to be embedded into the system rather than applied as filters after the fact.
 
Robbie Torney, Senior Director of AI programs at Common Sense Media, told TechCrunch that a one-size-fits-all approach “stumbles on the details” and urged AI makers to design with developmental stages in mind. 

How did Google respond?

The assessment arrives as Google defends Gemini’s safety work. According to TechCrunch, Google said it operates specific policies and red-teaming processes for users under 18, consults outside experts, and added safeguards after Common Sense identified problem responses.
 
Google also said some items referenced in the nonprofit’s report weren’t available to under-18 accounts and disputed aspects of the testing methodology, though it acknowledged some responses “weren’t working as intended” and said it has implemented additional protections.

How did the competition fare?

Common Sense has been evaluating AI services across the industry; TechCrunch notes its earlier reviews rated Meta AI and Character.AI as “unacceptable,” Perplexity as “high risk,” ChatGPT as “moderate,” and Anthropic’s Claude (targeted at adults) as “minimal” risk.
 
The organisation’s comparative work is intended to give parents and schools a sense of where current models stand on safety and developmental appropriateness, while pushing providers to prioritise child-specific design and independent testing.


First Published: Sep 08 2025 | 5:18 PM IST
