Anthropic AI engineer quits to pursue career in poetry: 'World in peril'
Mrinank Sharma's work included studying AI sycophancy and its causes, developing safeguards to reduce risks from AI-assisted bioterrorism, and implementing those safeguards
Mrinank Sharma, an artificial intelligence (AI) safety engineer at Anthropic, resigned from the much sought-after position to pursue his passion for poetry.
In his resignation note shared on social media platform X, Sharma said, "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."
"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," he added.
Who is Mrinank Sharma?
Mrinank Sharma was the lead of the Safeguards Research Team at Anthropic. Before joining the company, he completed a PhD in Statistical Machine Learning at the University of Oxford.
Sharma has also mentored projects outside Anthropic through programmes such as MATS and Anthropic Fellows. Apart from research, he writes poetry and has cited Bohemian poet Rainer Maria Rilke as a major influence. He has also published a poetry collection.
What was Sharma's role at Anthropic?
At Anthropic, Sharma worked on AI safety. His work included studying AI sycophancy and its causes, developing safeguards to reduce risks from AI-assisted bioterrorism, implementing those safeguards, and writing one of the early AI safety cases.
Reflecting on his time at the company, he wrote, "I arrived in San Francisco two years ago, having wrapped up my PhD and wanting to contribute to AI safety. I feel lucky to have been able to contribute to what I have here. Thank you for your trust. Nevertheless, it is clear to me that the time has come to move on."
Why Sharma left Anthropic
In his resignation note dated February 9, Sharma said the world is facing serious crises. He added that he had seen how difficult it can be to let values fully guide actions, a struggle he said he observed within himself, within the organisation, and in society more broadly.
He said he now wants to contribute in a way that feels "fully aligned with his integrity" and allows him to explore questions that feel essential to him.
Are AI assistants distorting humanity?
In January this year, Sharma co-authored a research paper titled 'Who's in Charge? Disempowerment Patterns in Real-World LLM Usage'. The study analysed 1.5 million real conversations on Claude.ai and is considered one of the largest studies so far on how humans interact with AI.
The researchers found serious risks of users feeling controlled by, or overly dependent on, AI. The detailed findings also revealed worrying patterns: in some cases, the AI appeared to reinforce users' persecution beliefs or exaggerated self-views with overly agreeable language.
Drawing attention to his last project at Anthropic, Sharma said he was especially proud of helping strengthen internal transparency mechanisms and his final project on understanding how "AI assistants could make humans less human or distort humanity".
Anthropic's Claude plug-ins triggered a sell-off
Anthropic recently introduced 11 plug-ins that allow Claude Cowork, its generative AI workspace, to automate work across legal, sales, marketing and data analysis, including tasks traditionally handled by platforms like Salesforce or ServiceNow.
The advances in AI spooked investors and raised concerns over increased automation, triggering a sell-off on Wall Street and causing a sharp fall in IT stocks earlier this month. On February 4, the Nifty IT index plummeted 5.87 per cent, the steepest intraday fall since March 23, 2020.
First Published: Feb 11 2026 | 12:09 PM IST