OpenAI chief executive Sam Altman has announced that verified adult users of ChatGPT will soon have access to a less restricted version of the generative AI platform, one that may include erotic content. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman said in a post on X on Tuesday.
The announcement marks a major reversal of OpenAI’s previous policy, which forbade such content in nearly all contexts. It is not yet clear what types of material will meet the threshold for permitted erotica.
Mental health issues and AI-chatbot use
In his X post, Altman said he believes OpenAI has “mitigated serious mental health issues” associated with AI-chatbot usage. He said the company is now exploring ways to relax some of its stringent content restrictions. New safety measures have been introduced, including enhanced parental controls.
Altman noted that earlier versions of ChatGPT were made “pretty restrictive” to shield users from mental health risks, but such constraints made the chatbot “less useful and enjoyable to many users who had no mental health problems”.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he said. How OpenAI will verify users’ ages remains unclear.
OpenAI aims to curb chatbot sycophancy
This change is noteworthy because OpenAI intentionally designed GPT-5 to make the chatbot less “sycophantic” and to help forestall potential mental health crises among users.
In addition to the rollout slated for December, Altman announced that a new version of ChatGPT will launch in the coming weeks, enabling the chatbot to adopt more distinct personalities and extending enhancements introduced in the latest GPT-4o version.
Safety concerns over AI use
Altman’s remarks come at a time when OpenAI is under intensifying scrutiny over its safety policies. In September, the US Federal Trade Commission opened an inquiry into several technology firms — including OpenAI — over possible risks posed to children and adolescents. This follows a lawsuit from a California couple who claimed that ChatGPT contributed to the suicide of their 16-year-old son.