Google removes some AI health summaries after wrong advice sparks alarm
Can an AI search answer put your health at risk? Google has removed 'AI Overviews' from some medical searches after a report by The Guardian flagged misinformation with potentially dangerous consequences.
Google has rolled back its 'AI Overviews' for some health searches after concerns over misleading medical information were flagged. (Photo: AdobeStock)
4 min read Last Updated : Jan 13 2026 | 12:05 PM IST
People often turn to Google for quick answers about their health. But what happens when those answers are wrong? Google has removed AI-generated summaries from some health-related searches after an investigation by The Guardian found that the feature was offering misleading medical information.
In its investigation, The Guardian found that 'AI Overviews', Google's generative AI feature that places a ready-made summary at the top of search results, was providing over-simplified or incorrect medical guidance for certain health queries, sometimes in ways that could falsely reassure patients or even lead to harmful decisions.
One example involved searches such as “what is the normal range for liver blood tests” or “normal range for liver function tests”. The AI summaries reportedly presented neat numerical ranges, without explaining that liver test values depend on factors such as age, sex, ethnicity, medications, and underlying health conditions.
Doctors warn that without this context, people could wrongly assume their results are normal, delaying diagnosis or treatment.
Which health searches did Google remove AI Overviews from?
Following the report, The Guardian observed that AI Overviews no longer appeared for some direct queries, including “What is the normal range for liver blood tests?” and “What is the normal range for liver function tests?”
However, the rollback appeared selective rather than comprehensive. When reporters tested slightly altered phrases such as “LFT reference range” or “LFT test reference range”, AI Overviews initially continued to appear, suggesting the system was still vulnerable to being triggered through minor wording changes.
Why are doctors worried about AI answers in health searches?
Medical professionals stress that clinical interpretation is rarely one-size-fits-all. The Guardian also reported cases where AI Overviews gave dangerous dietary advice, including telling people with pancreatic cancer to avoid high-fat food, the opposite of what is often medically recommended and something that could worsen outcomes.
Health experts argue that even when information is partially correct, missing nuance in medicine can cause real harm, especially when users trust AI summaries as authoritative.
What has Google said in response?
Responding to The Guardian's report, Google declined to comment on specific removals from Search. A spokesperson said, “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
Google added that an internal team of clinicians reviewed the examples shared with the company and concluded that, in many cases, the information was not inaccurate and was supported by high-quality websites.
However, critics say that accuracy alone is not enough when context is critical.
Are AI Overviews still showing medical information?
While some liver-related queries no longer trigger AI Overviews, The Guardian noted that AI-generated summaries are still available for other medical topics, including cancer and mental health.
Google told the publication that these summaries had not been removed because they drew on well-known and reputable sources.
This is not the first time the feature has landed Google in trouble. Soon after its launch in May 2024, AI Overviews went viral for bizarre and incorrect advice, including recommending glue on pizza to stop cheese sliding off and suggesting people eat a small rock each day for vitamins. The feature was briefly pulled before being reintroduced with changes.
What did Google tell Business Standard?
In a statement emailed to Business Standard, a company spokesperson said the tech giant invests heavily in the quality of its AI-generated summaries, particularly on sensitive topics such as health.
“We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information,” the spokesperson said. “Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high-quality websites. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”