Do I have the flu or Covid? Why do I wake up tired? What is causing the pain in my chest? For more than two decades, entering medical questions into the world’s most popular search engine has yielded a list of links to websites with the answers. Google these health queries today and the answer will likely be written by artificial intelligence.
Sundar Pichai, Google’s chief executive, first outlined the company’s plans to integrate AI into its search engine at its annual conference in Mountain View, Calif., in May 2024. Starting that month, he said, U.S. users would see a new feature, AI Overviews, that would provide summaries of information on top of traditional search results. The change marks the biggest shake-up of Google’s core product in a quarter of a century. By July 2025, the technology had expanded to more than 200 countries in 40 languages, with 2 billion people using AI Overviews every month.
With the rapid rollout of AI Overviews, Google is racing to protect its traditional search business, which generates around $200 billion (£147 billion) a year, before its new AI competitors can derail it. “We are at the forefront of AI and shipping at an incredible pace,” Pichai said last July. AI Overviews in particular had “worked well,” he added.
But AI Overviews carry risks, experts say. They use generative AI to provide snapshots of information on a topic or question, adding conversational answers on top of traditional search results in the blink of an eye. They can cite sources, but do not necessarily recognize when a source is incorrect.
A few weeks after the feature launched in the United States, users encountered untruths across a range of topics. One AI Overview said Andrew Jackson, the seventh American president, graduated from university in 2005. Elizabeth Reid, head of search at Google, responded to criticism in a blog post. She admitted that “in a small number of cases,” AI Overviews had misinterpreted the language of web pages and presented inaccurate information. “At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” she wrote.
But when these questions concern health, accuracy and context are non-negotiable, experts say. Google faces scrutiny over its AI Overviews for medical queries after a Guardian investigation found that people were at risk of harm from false and misleading health information.
The company says AI Overviews are “reliable.” But the Guardian found that some medical summaries contained inaccurate health information and put people at risk. In one case, which experts called “really dangerous,” Google wrongly advised people with pancreatic cancer to avoid foods high in fat. Experts said this was the exact opposite of what should be recommended and could increase patients’ risk of death from the disease.
In another “alarming” example, the company provided false information about crucial liver function tests, which could lead people with severe liver disease to falsely believe they were healthy. The values AI Overviews presented as normal could differ significantly from what was actually normal, experts said. The summaries could lead seriously ill patients to mistakenly think a test result was normal and skip their follow-up appointments.
AI Overviews about women’s cancer tests also provided “completely false” information that experts say could lead people to ignore real symptoms.
Google initially sought to play down the Guardian’s findings. Based on what its own clinicians had been able to assess, the company said, the AI Overviews that alarmed experts were linked to reputable sources and recommended seeking expert advice. “We invest significantly in the quality of AI Overviews, particularly on topics like health, and the vast majority provide accurate information,” a spokesperson said.
Within days, however, the company had removed some of the AI Overviews for health queries reported by the Guardian. “We do not comment on individual removals in search,” a spokesperson said. “In cases where AI Overviews lack context, we work to make broad improvements and also take action under our policies where appropriate.”
While experts welcome the removal of some AI summaries for health queries, many remain concerned. “Our biggest concern with all of this is that it’s about cherry-picking a single search result, and Google can just turn off AI Overviews for that, but it’s not addressing the bigger problem with AI Overviews for health,” says Vanessa Hebditch, director of communications and policy at the British Liver Trust, a liver health charity.
“There are still too many examples of Google AI Overviews giving people inaccurate health information,” adds Sue Farrington, president of the Patient Information Forum, which promotes evidence-based health information for patients, the public and healthcare professionals.
A new study has caused further concern. When researchers analyzed responses to more than 50,000 health-related searches in Germany to determine which sources AI Overviews rely on most, one result immediately stood out: the most cited domain was YouTube.
“It’s important because YouTube is not a medical publisher,” the researchers wrote. “It is a general-purpose video platform. Anyone can upload content to it (e.g. board-certified doctors, hospital chains, but also wellness influencers, life coaches and creators without any medical training).”
In medicine, experts say, it is not just where answers come from, or how accurate they are, but also how they are presented to users. “With AI Overviews, users are no longer confronted with a multitude of sources that they can compare and critically evaluate,” explains Hannah van Kolfschooten, a researcher in AI, health and law at the University of Basel. “Instead, they are presented with a single, confident, AI-generated response that projects medical authority.

“This means that the system does not just reflect health information online, but actively restructures it. When that response relies on sources never designed to meet medical standards, such as YouTube videos, it creates a new form of unregulated medical authority online.”
Google says AI Overviews are designed to surface information supported by the best web results, and include links to web content that supports the information presented in the summary. People can use these links to dig deeper into a topic, the company told the Guardian.
But AI Overviews’ single blocks of text, combining health information from multiple sources, can cause confusion, says Nicole Gross, associate professor of business and society at the National College of Ireland.
“Once the AI summary appears, users are much less likely to investigate further, meaning they are deprived of the opportunity to critically evaluate and compare information, or even use common sense when it comes to health-related questions.”
Experts raised other concerns with the Guardian. Even when AI Overviews provide accurate facts on a specific medical topic, they may not distinguish between strong evidence from randomized trials and weaker evidence from observational studies, they say. Some also omit important caveats about that evidence, they add.
Listing such claims side by side in an AI Overview can also make some appear more established than they actually are. Answers may also change as AI Overviews evolve, even if the science has not. “This means people get a different answer depending on when they search, and that’s not good enough,” says Athena Lamnisos, chief executive of the cancer charity Eve Appeal.
Google told the Guardian that the links included in AI Overviews were dynamic and changed based on the most relevant, useful and timely information for a given search. If AI Overviews misinterpreted web content or missed important context, the company would use those errors to improve its systems and would also take action where appropriate, it said.
The biggest concern is that false and dangerous medical information or advice in AI Overviews “ends up translating into a patient’s daily practices, routines and life, even in tailored forms,” Gross says. “In health care, it can become a matter of life and death.”
