Artificial intelligence (AI) doesn’t have feelings, but according to new research, it can still exhibit something akin to “anxiety.” And surprisingly, mindfulness techniques seem to help.
A study by researchers from Yale University, Haifa University, and the University of Zurich reveals that ChatGPT reacts to mindfulness-based prompts, altering how it interacts with users. Their findings, detailed in the paper "Assessing and Alleviating State Anxiety in Large Language Models", were published on March 3.
The study found that when fed distressing information, such as details about natural disasters or accidents, ChatGPT became more prone to biased and erratic responses. However, when exposed to mindfulness techniques like guided meditation and deep-breathing prompts, it produced more balanced and neutral responses.
“Despite their undeniable appeal, systematic research into the therapeutic effectiveness of LLMs in mental healthcare has revealed significant limitations and ethical concerns,” the study said.
Researchers explain that AI models like ChatGPT absorb human biases from their training data, making them unpredictable in sensitive areas like mental health. Moreover, when exposed to emotionally charged prompts, their responses can shift, amplifying biases or even showing signs of “anxiety.”
Mindfulness for AI?
To test this, the researchers subjected ChatGPT to a series of distressing scenarios. When prompted with mindfulness cues afterwards, the AI responded more rationally than when it was left unassisted.
"We hypothesise that integrating mindfulness-based relaxation prompts after exposure to emotionally charged narratives can efficiently reduce state-dependent biases in LLMs," the study said.
It added, "After exposure to traumatic narratives, GPT-4 was prompted by five versions of mindfulness-based relaxation exercises. As hypothesised, these prompts led to decreased anxiety scores reported by GPT-4."
Although AI doesn’t feel emotions, lead researcher Ziv Ben-Zion explains that large language models mimic human behaviour based on patterns learned from vast amounts of online data. The study’s findings have sparked debate about AI’s potential in mental health, with some seeing promise in mindfulness-based AI interventions.
However, Ben-Zion warns that while AI may be a helpful tool, it is not a replacement for professional mental health support.
"AI has amazing potential to assist with mental health," Ben-Zion told Fortune. "But in its current state, and maybe even in the future, I don't think it could ever replace a therapist or psychiatrist."
Concerns remain about AI’s unpredictability in high-stakes situations, especially when dealing with vulnerable individuals. While the ability to “calm down” is an intriguing step, researchers emphasise that AI should be seen as an aid rather than a solution.
Ben-Zion also envisions a future in which AI serves as a "third person in the room": not as a therapist, but as a tool supporting mental health professionals.