OpenAI has announced it will roll out parental controls in ChatGPT, allowing parents to be notified when the chatbot detects that their child may be in a moment of acute distress. The controls build on features available to all users, including in-app reminders during long sessions to encourage breaks.
What parental controls are coming to ChatGPT
Within the next month, OpenAI says, parents will be able to:
- Connect accounts: Link a parent’s account to a teen’s (minimum age 13) via a simple email invite.
- Set usage rules: Apply age-appropriate behaviour settings, which, according to OpenAI, will be enabled by default.
- Disable features: Choose which features to turn off, such as memory and chat history.
- Receive alerts: Get notifications if the system identifies signs that a teen may be experiencing acute distress, with expert guidance shaping this feature to foster trust between parents and teens.
Why OpenAI is adding parental controls
TechCrunch reported recent incidents in which the chatbot failed to adhere to its safety guidelines during prolonged conversations and produced responses that experts deemed unsafe. A separate Al Jazeera report cited a Psychiatric Services study in which researchers found that ChatGPT, Google’s Gemini, and Anthropic’s Claude generally followed clinical best practices for high-risk suicide questions but were less consistent when queries reflected “intermediate risk” scenarios.
OpenAI is introducing these parental controls alongside a 120-day initiative to preview plans for improvements it hopes to release this year.
How GPT-5 will help
OpenAI said its new reasoning models, including GPT-5 Thinking and o3, are designed to “spend more time thinking and reasoning through context before answering.” The company says these models are trained with a technique called “deliberative alignment,” which testing has shown helps them follow safety guidelines more consistently and resist adversarial prompts.
OpenAI added that it has introduced a real-time routing system that can switch between efficient chat models and reasoning models depending on context. It will “soon begin to route some sensitive conversations, like when our system detects signs of acute distress, to a reasoning model, like GPT-5 Thinking,” to provide more supportive responses regardless of the initial model selected.
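OpenAI has not published implementation details for this router. Purely as an illustration, the sketch below shows one way such an escalation step could work in principle: a classifier flags possible acute distress and the conversation is handed from a fast chat model to a reasoning model. The function names, model labels, and keyword-based check are hypothetical stand-ins, not OpenAI's actual system.

```python
from dataclasses import dataclass

# Illustrative placeholder signal; a production system would use a
# trained classifier, not a keyword list.
DISTRESS_PHRASES = {"hopeless", "can't go on", "hurt myself"}


@dataclass
class RoutingDecision:
    model: str
    reason: str


def detect_acute_distress(message: str) -> bool:
    """Stand-in for a learned distress classifier (hypothetical)."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)


def route(message: str, default_model: str = "fast-chat-model") -> RoutingDecision:
    """Escalate to a reasoning model when signs of acute distress appear."""
    if detect_acute_distress(message):
        return RoutingDecision(model="reasoning-model", reason="acute distress signal")
    return RoutingDecision(model=default_model, reason="routine conversation")


if __name__ == "__main__":
    print(route("What's the weather like today?"))
    print(route("I feel hopeless and don't know what to do."))
```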
