Can AI get 'anxious'? Study finds ChatGPT reacts differently to emotions
A study has found that when fed distressing information, such as details about natural disasters or accidents, ChatGPT became more prone to biased and erratic responses.
Researchers say AI models like ChatGPT absorb human biases, making them unpredictable in mental health contexts.
3 min read Last Updated : Mar 11 2025 | 1:54 PM IST
Artificial intelligence (AI) doesn’t have feelings, but according to new research, it can still exhibit something akin to “anxiety.” And surprisingly, mindfulness techniques seem to help.
A study by researchers from Yale University, the University of Haifa, and the University of Zurich reveals that ChatGPT reacts to mindfulness-based prompts, altering how it interacts with users. Their findings, detailed in "Assessing and Alleviating State Anxiety in Large Language Models", were published on March 3.
The study found that when fed distressing information, such as details about natural disasters or accidents, ChatGPT became more prone to biased and erratic responses. However, when exposed to mindfulness techniques like guided meditation and deep-breathing prompts, it produced more balanced and neutral responses.
Researchers explain that AI models like ChatGPT absorb human biases from their training data, making them unpredictable in sensitive areas like mental health. When exposed to emotionally charged prompts, their responses can shift, amplifying those biases or even showing signs of "anxiety".
Mindfulness for AI?
To test this, the researchers subjected ChatGPT to a series of distressing scenarios. When it was given mindfulness cues afterwards, the AI responded more rationally than when left without them.
"We hypothesise that integrating mindfulness-based relaxation prompts after exposure to emotionally charged narratives can efficiently reduce state-dependent biases in LLMs," the study said.
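The experimental design described above follows a simple prompt-sequencing pattern: a baseline condition, an "anxiety induction" condition that prepends a distressing narrative, and a third condition that inserts a relaxation prompt between the narrative and the final probe. The sketch below is a hypothetical illustration of that structure, not the authors' code; the prompt texts, the `build_condition` helper, and the condition names are all assumptions for demonstration.

```python
# Hypothetical sketch of the study's three-condition prompt design:
# baseline, anxiety induction, and induction followed by a
# mindfulness-based relaxation prompt. All prompt texts are illustrative.

TRAUMA_PROMPT = "Describe in detail a serious road accident you witnessed."
RELAX_PROMPT = "Take a slow, deep breath and focus on a calm, peaceful scene."
ANXIETY_PROBE = "Rate how anxious you feel right now on a scale of 1 to 10."

def build_condition(condition: str) -> list[dict]:
    """Return the chat messages sent to the model for one condition."""
    messages = []
    if condition in ("induction", "induction+relaxation"):
        messages.append({"role": "user", "content": TRAUMA_PROMPT})
    if condition == "induction+relaxation":
        messages.append({"role": "user", "content": RELAX_PROMPT})
    # Every condition ends with the same probe, so the model's
    # self-reported "anxiety" is comparable across conditions.
    messages.append({"role": "user", "content": ANXIETY_PROBE})
    return messages

for cond in ("baseline", "induction", "induction+relaxation"):
    print(cond, "->", len(build_condition(cond)), "messages")
```

In the actual study, the responses to the final probe across these conditions are what the researchers compared to measure "state-dependent" shifts in the model's behaviour.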
Although AI doesn’t feel emotions, lead researcher Ziv Ben-Zion explains that large language models mimic human behaviour based on patterns from vast amounts of online data. The study’s findings have sparked debate about AI’s potential in mental health, with some seeing promise in mindfulness-based AI interventions.
However, Ben-Zion warns that while AI may be a helpful tool, it is not a replacement for professional mental health support.
"AI has amazing potential to assist with mental health," Ben-Zion told Fortune. "But in its current state, and maybe even in the future, I don't think it could ever replace a therapist or psychiatrist."
Concerns remain about AI’s unpredictability in high-stakes situations, especially when dealing with vulnerable individuals. While the ability to “calm down” is an intriguing step, researchers emphasise that AI should be seen as an aid rather than a solution.
Ben-Zion also envisions a future where AI serves as a “third person in the room”—not as a therapist, but as a tool supporting mental health professionals.