Guardian investigation finds Google AI Overviews gave dangerous health misinformation

A Guardian investigation found Google's AI Overviews displayed false and misleading health information across multiple medical topics. AI summaries gave incorrect liver function test ranges sourced from an Indian hospital chain without accounting for nationality, sex, or age. The feature advised pancreatic cancer patients to avoid high-fat foods, which experts said could increase mortality risk. Stanford and MIT researchers called the absence of prominent disclaimers a critical danger. Google removed some AI Overviews for health queries after the investigation, but many remained active.

Incident Details

Severity: Facepalm
Company: Google
Perpetrator: Search Product
Incident Date:
Blast Radius: Potentially millions of Google users served incorrect medical information, including dangerous advice for cancer patients and people with liver disease

What AI Overviews Are

Google's AI Overviews appear at the top of search results - above the links, above the ads, above everything else on the page. When a user types a health question into Google, the AI generates a summary response that looks authoritative and complete. The summary pulls from web results that Google's page ranking algorithm considers high quality, then an AI model synthesizes the information into a paragraph that reads like a confident, definitive answer.

The key design decision: AI Overviews are built on the assumption that highly ranked pages contain accurate information. Google's ranking algorithm feeds web results to the AI model, and the model summarizes them with an authoritative tone. If the source material is wrong, the AI presents wrong information with the same confidence it presents correct information. If the source material is right but context-dependent, the AI strips the context and presents the raw data as universal fact.
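The failure mode is easier to see as code. Below is a minimal, hypothetical sketch of a retrieve-then-summarize pipeline - the function names, the toy corpus, and the scoring are illustrative assumptions, not Google's implementation. It models the structural issue the article describes: the ranker scores relevance, not medical correctness, and the summarizer restates whatever ranked first in the same confident voice.

```python
# Hypothetical sketch - none of these names or scores reflect Google's
# real system. It models the design described above: a ranker that
# scores relevance (not medical correctness) feeding a summarizer
# that restates the top source as a confident answer.

def relevance(page: str, query: str) -> int:
    # Toy relevance: count shared words. Pages with specific numbers
    # often match a query better than carefully hedged pages do.
    return len(set(page.lower().split()) & set(query.lower().split()))

def rank_pages(query: str) -> list[str]:
    # Stand-in for a search ranking algorithm over a toy corpus.
    corpus = [
        "ALT normal range 7-55 U/L (one hospital's adult chart)",
        "Reference ranges differ by lab, sex and age; ask your clinician.",
    ]
    return sorted(corpus, key=lambda page: -relevance(page, query))

def summarize(prompt: str) -> str:
    # Stand-in for an LLM call: in this toy, the top-ranked source is
    # restated as a short, definitive answer. The tone is identical
    # whether that source was right, wrong, or context-bound.
    top_source = prompt.split("Sources:\n", 1)[1].splitlines()[0]
    return f"Answer: {top_source}"

def ai_overview(query: str) -> str:
    context = "\n".join(rank_pages(query)[:2])
    # Nothing between retrieval and generation checks whether the
    # sources' qualifiers ("this lab", "these patients") survive.
    return summarize(f"Question: {query}\nSources:\n{context}")

print(ai_overview("ALT normal range"))
# -> Answer: ALT normal range 7-55 U/L (one hospital's adult chart)
```

Run it and the context-bound lab chart beats the careful hedging, because specific numbers match the query better than qualifications do - which is roughly what the investigation found at scale.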

On January 2, 2026, the Guardian published an investigation by Andrew Gregory showing exactly what happens when that design encounters health queries.

The Liver Function Tests

The Guardian found that searching Google for liver function test reference ranges triggered an AI Overview filled with numbers pulled from an Indian hospital chain's website. The AI presented those values - ALT, AST, ALP, and other liver enzyme ranges - with no accounting for the patient's nationality, sex, ethnicity, or age. All of those factors affect what a normal reference range actually is.

The AI Overview displayed the numbers in bold, listed by test name, formatted to look like a clinical reference chart. A patient whose doctor had just ordered liver function tests - which is exactly the person most likely to Google "liver function test reference range" - would see those numbers and compare them to their own results.
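A sketch makes the missing-context problem concrete. The thresholds below are illustrative stand-ins (loosely echoing the sex-specific ALT upper limits cited in some hepatology guidance, not any real lab's values); the structural point is that a lookup cannot classify a result without demographic inputs, which are precisely what the AI Overview's chart omitted.

```python
# Illustrative only: real reference ranges are set per laboratory and
# per population. The point is that the function CANNOT return a
# verdict without knowing who is asking.
from dataclasses import dataclass

@dataclass
class Patient:
    sex: str        # "male" or "female"
    age_years: int

def alt_upper_limit(p: Patient) -> float:
    """Toy ALT upper limit in U/L, varying by sex and age."""
    if p.age_years < 18:
        raise ValueError("paediatric ranges differ; not modelled here")
    return 33.0 if p.sex == "male" else 25.0  # assumed, guideline-ish

def flag_result(alt_value: float, p: Patient) -> str:
    """The same lab value reads differently depending on the patient."""
    return "elevated" if alt_value > alt_upper_limit(p) else "within range"

print(flag_result(28.0, Patient("male", 45)))    # -> within range
print(flag_result(28.0, Patient("female", 45)))  # -> elevated
```

The same ALT value of 28 is unremarkable for one patient and flagged for another - a distinction that a bolded, universal-looking chart erases.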

Vanessa Hebditch, director of communications and policy at the British Liver Trust, said the AI Overviews presented "masses of numbers, little context" and that it would be "very easy for readers to miss that these numbers might not even be the right ones for their test." After the investigation was published, Google removed the AI Overview for that specific query. But the Guardian found that slight variations of the query - "lft reference range" or "lft test reference range" - still triggered AI Overviews with the same kind of decontextualized medical data.

Hebditch said this was the bigger concern: "it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it's not tackling the bigger issue of AI Overviews for health."

The Pancreatic Cancer Advice

In what experts described as a "really dangerous" finding, the Guardian investigation showed that Google's AI Overview advised people with pancreatic cancer to avoid high-fat foods.

For a healthy person, reducing dietary fat is unremarkable nutritional advice. For a pancreatic cancer patient, it can accelerate death. Pancreatic cancer frequently causes malnutrition and wasting - the body loses the ability to properly digest food, and patients desperately need calorically dense foods, including fats, to maintain body weight and survive treatment. Telling a pancreatic cancer patient to cut fat from their diet is the opposite of what oncology nutrition guidelines recommend.

The AI Overview didn't explain any of that context. It presented the dietary advice as if it applied universally, the same way it might present the advice for a general wellness query. The source pages the AI pulled from may have contained accurate information about fat and cancer in a different context entirely - the AI model flattened the nuance and served the conclusion without the qualifications that made it safe.

Mental Health Misinformation

The investigation extended beyond physical health. The Guardian found that AI Overviews for mental health conditions including psychosis and eating disorders contained information that mental health professionals called dangerous.

Stephen Buckle, head of information at the UK mental health charity Mind, told the Guardian that AI Overviews offered "very dangerous advice" about eating disorders and psychosis, and that the summaries were "incorrect, harmful or could lead people to avoid seeking help."

People searching for information about psychosis or eating disorders are often searching because they or someone they know is in crisis. The first thing they see when they search Google is an AI-generated summary that may tell them something incorrect about a condition where incorrect information can delay treatment or reinforce harmful behaviors. After the investigation, Mind launched a formal inquiry into AI and mental health.

The Disclaimer Problem

A separate Guardian investigation published on February 16, 2026, examined how Google handles health disclaimers on AI Overviews. The findings added another layer to the problem.

The health disclaimer - "This is for informational purposes only" - appears only after users click to expand the summary and then scroll to the very end of the expanded AI Overview. At the moment the medical advice first appears at the top of the search results, no disclaimer is visible. Google did not deny this design choice when asked.

Pat Pataranutaporn, an assistant professor at MIT specializing in AI and human-computer interaction, told the Guardian that "the absence of disclaimers when users are initially served medical information creates several critical dangers."

Sonali Sharma, a researcher at Stanford University's Center for Artificial Intelligence in Medicine and Imaging (AIMI), identified the positioning problem: "The major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user's question at a time where they are trying to access information and get an answer as quickly as possible."

That's the design working as intended: AI Overviews are supposed to be at the top, and they're supposed to look like complete answers. The problem is that for medical queries, a "complete answer" without a visible disclaimer, without context about the limitations of the data, and without prompting the user to consult a doctor is functionally indistinguishable from medical advice.

Google's Response

Google's official response followed a familiar pattern. A spokesperson told the Guardian that the company invests "significantly in the quality of AI Overviews, particularly for topics like health" and that "the vast majority provide accurate information."

After the initial investigation, Google said an internal team of clinicians reviewed the examples and "found that in many instances, the information was not inaccurate and was also supported by high-quality websites." The company removed AI Overviews for some of the specific health queries the Guardian flagged, but left many others active.

When asked why those remaining AI Overviews hadn't been removed, Google said they "linked to well-known and reputable sources and informed people when it was important to seek out expert advice." The company said AI Overviews only appear "for queries where it has high confidence in the quality of the responses."

This response frames the problem as isolated bad results in an otherwise accurate system. The investigation framed it differently: the system's design - summarizing web results without medical context, displaying them at the top of search results, and hiding disclaimers - creates the conditions for wrong answers to do real harm regardless of how often the system gets things right.

The Pattern

The Guardian investigation wasn't the first time AI Overviews delivered bad information. In May 2024, the feature famously told users to put glue on pizza (traced to a joke Reddit comment) and to eat rocks for nutritional value (traced to a satirical article). Those errors were amusing and went viral. Wrong liver function test ranges and dangerous cancer diet advice don't go viral. They just quietly put people at risk.

By early 2026 the feature was unpopular enough that users had discovered a workaround: adding profanity to a search query disabled AI Overviews entirely. That a meaningful number of people would rather swear at Google than receive AI-generated answers says something about how the feature was landing with the users Google claims it helps.

The liver tests, the cancer diet, the mental health misinformation, and the buried disclaimers combine into a consistent picture: an AI system given the most prominent position in the most widely used search engine, answering medical questions with confident authority and incomplete information, while health professionals and researchers pointed out that the system's design was doing exactly what they warned it would do.
