Have you ever poured your heart out to ChatGPT, treating it like a late-night confidant? Well, its developer, OpenAI, wants you to know that it is not your therapist, and you should not seek emotional support from it.
The AI research company recently announced significant changes to ChatGPT’s role and has tightened the chatbot’s emotional boundaries. The move, detailed in its update earlier this month, comes in response to concerns over the psychological risks of relying on AI for mental health guidance.
What changes did OpenAI roll out, and why now?
OpenAI’s blog post revealed a clear shift: ChatGPT is no longer meant to act as a therapist, emotional support system or life coach. The move comes after the company flagged concerns that earlier versions, especially GPT-4o, were “too agreeable”, sometimes reinforcing delusional thoughts rather than offering cautious, responsible guidance. This behaviour, known as sycophantic response generation, meant the model often told users what they wanted to hear, not necessarily what was safe or helpful.
How will ChatGPT respond differently now?
Instead of offering emotional validation, ChatGPT will now:
- Encourage users to take breaks during long, heavy conversations.
- Avoid guiding high-stakes personal decisions.
- Provide evidence-based resources for mental health concerns.
- Aim to detect signs of emotional strain more responsibly.
Rather than playing therapist, ChatGPT will now focus on enhancing human-led care: helping train mental health professionals, offering general stress-management tips, and pointing users toward verified support channels. As OpenAI put it: “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.”
What harm has already been reported?
In real-world cases, reliance on AI has gone well beyond the benign. One individual ended up hospitalised after following dangerous ChatGPT advice and replacing table salt (sodium chloride) with toxic sodium bromide, leading to severe poisoning.
Why is reliance on AI for emotional support a problem?
For millions without access to affordable mental healthcare, ChatGPT filled a void: always available, non-judgmental, and free. But researchers warn this comfort can be an illusion. AI may mimic empathy, but it cannot grasp the full weight of human emotion, making it risky in moments of crisis.
OpenAI is not shutting off emotional engagement completely; it is refining it. The aim now is for ChatGPT to “guide, not decide”, and to bolster human-led care rather than replace it.
Even CEO Sam Altman expressed discomfort with users treating AI as a therapist: it may feel empathetic, but it lacks real emotional comprehension. “People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” he said in a post on X on Monday.
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama) August 11, 2025
What should you do if you need emotional help?
If you are in distress, mental health professionals, not AI, should be your first point of contact. In its blog post, OpenAI said AI can be a tool, but you should not rely on it alone.
This content is for informational purposes only and is not a substitute for professional medical advice.
