Tech Beetle briefing GB

How Google’s AI Overviews Are Impacting Public Health: Risks and Realities

Essential brief

Key facts

Google’s AI Overviews provide quick, AI-generated summaries above search results but can contain inaccurate medical information.
Inaccurate health advice from AI Overviews has included dangerous recommendations, such as improper dietary guidance for cancer patients.
AI Overviews often cite mixed-quality sources like YouTube, creating misleading medical authority without adequate context.
The feature’s single-answer format discourages users from critically evaluating or researching further, increasing risk of harm.
Ongoing improvements and transparency are essential to ensure AI-generated health information is accurate and safe for public use.

For over two decades, people have relied on Google to answer their medical questions, with the search engine pointing them to links from trusted websites. In May 2024, however, Google introduced AI Overviews, an AI-powered feature that summarizes information directly above traditional search results. This shift represents the most significant update to Google Search in 25 years, aiming to deliver quick, conversational answers to queries, including health-related ones. By mid-2025, AI Overviews had expanded globally, serving 2 billion users monthly across 200 countries and 40 languages. While Google’s CEO Sundar Pichai praised the rapid rollout and performance of AI Overviews, experts have raised serious concerns about the accuracy and safety of the medical information presented in these AI-generated summaries.

AI Overviews use generative AI to synthesize information from multiple web sources, but they do not always verify the correctness of those sources. Early on, users noticed factual errors across a range of topics, including history and health. Google acknowledged occasional inaccuracies but emphasized the scale and complexity of processing billions of queries daily. When it comes to health, however, experts argue that precision and context are critical. Investigations revealed that some AI Overviews provided dangerously incorrect medical advice, such as recommending that pancreatic cancer patients avoid high-fat foods—contrary to medical guidelines—and misrepresenting liver function test results, potentially leading patients to underestimate the severity of their illness.

Despite Google’s initial defense of the feature’s reliability, the company removed some problematic health-related AI Overviews after public scrutiny. Nonetheless, health advocates remain concerned that these removals address only isolated cases rather than the systemic issues inherent in AI-generated health summaries. A significant worry is that AI Overviews often cite sources like YouTube, a platform not designed for medical publishing and one that hosts content from both qualified professionals and untrained creators. This reliance on mixed-quality sources can create a misleading sense of medical authority, as users receive a single confident answer rather than a range of perspectives to evaluate critically.

Experts also note that AI Overviews may not adequately differentiate between high-quality evidence, such as randomized controlled trials, and weaker observational studies. They may omit important caveats, leading to misinterpretation of medical facts. Furthermore, because AI Overviews are generated dynamically, answers can change over time even when the underlying science remains constant, potentially confusing users. The presentation style also discourages further research and the critical thinking essential for understanding complex health information.

Google maintains that AI Overviews link to supporting web content, allowing users to explore topics in depth. However, the single-block summary format tends to reduce users’ motivation to verify information independently. The overarching concern among healthcare professionals and patient advocates is that inaccurate or misleading AI-generated health information could influence patient behaviors and decisions, sometimes with life-threatening consequences. As AI continues to integrate into search engines, ensuring the accuracy, transparency, and contextual integrity of health information remains a vital challenge.

In summary, while Google’s AI Overviews represent a technological advancement in delivering quick answers, their current implementation poses significant risks to public health. The confident authority projected by AI-generated summaries can mask inaccuracies, leading to potentially harmful outcomes. Addressing these challenges requires ongoing refinement of AI systems, better source vetting, and clear communication to users about the limitations of AI-generated health information.