AI hallucination puts firms at risk? New insurance covers legal costs
The insurance product developed by Armilla, a start-up backed by Y Combinator, seeks to address growing concerns about AI's potential to produce unreliable or misleading information
OpenAI’s internal assessments found that its latest models hallucinate more often than earlier versions
Last Updated: May 12, 2025 | 5:52 PM IST
Insurers at Lloyd's of London have introduced a new insurance product designed to protect businesses from financial losses arising from failures of artificial intelligence systems, according to a report by The Financial Times. The product, developed by Y Combinator-backed start-up Armilla, covers legal claims against companies whose AI tools generate inaccurate outputs.
The policy offers financial protection against potential legal consequences, including court-awarded damages and associated legal expenses. It responds to rising concerns over AI's tendency to produce unreliable or misleading information—commonly referred to as "hallucinations" in AI terminology.
As companies increasingly integrate AI tools to enhance efficiency, they also face growing risks from errors caused by flaws in AI models that lead to hallucinations or fabricated information. Last year, a tribunal ruled that Air Canada must honour a discount its customer service chatbot had wrongly offered.
What is an AI hallucination?
An AI hallucination occurs when an algorithm generates information that appears credible but is in fact false or misleading. Computer scientists use the term for such errors, which have been observed across a wide range of AI tools.
These hallucinations can cause significant problems when AI is used in sensitive areas. While some errors are relatively harmless—such as a chatbot giving a wrong answer—others can have serious consequences. In high-stakes settings like legal cases or health insurance decisions, inaccuracies can severely impact people's lives.
Unlike systems that follow strict, human-defined rules, AI models operate based on statistical patterns and probabilities, which makes occasional errors inevitable. Though minor mistakes may not pose a big problem for most users, hallucinations become critical when dealing with legal, medical, or confidential business matters.
Karthik Ramakrishnan, Armilla’s chief executive, said the new product could encourage more companies to adopt AI by addressing fears that tools like chatbots might break down or make errors.
Hallucinations getting worse despite AI advances
Although companies such as OpenAI and Google have worked to reduce hallucination rates, the problem has worsened with the introduction of newer reasoning models. OpenAI’s internal assessments found that its latest models hallucinate more often than earlier versions.
Specifically, OpenAI reported that its most advanced model, o3, produced hallucinations 33 per cent of the time on the PersonQA benchmark, which tests the ability to answer questions about public figures—more than double the rate of its earlier model, o1.