Google removes AI Overviews results that gave 'alarming' medical advice
14 hours ago

- Google has quietly removed AI Overviews for liver test queries after they were found to give incorrect medical advice.
- An investigation found the feature gave oversimplified liver test ranges without critical context like age, sex, or ethnicity.
- Google only removed AI summaries for specific flagged phrases, not health searches as a whole.
Google has quietly removed AI Overviews for certain liver test searches after a report found the feature was giving out wrong medical advice.
AI Overviews are designed to make Google Search faster. When you ask a question, you see a short, confident summary at the top of the page instead of a list of links. This should make things easier, but health-related searches have shown some serious problems.
A recent investigation by The Guardian found that for important health questions, the AI has given information that experts describe as “alarming” and “dangerous.” For example, the AI listed standard ranges for liver function tests without saying that these numbers can change a lot depending on a person’s age, sex, or ethnicity. This kind of mistake could make someone with serious liver disease believe they are healthy and avoid getting the care they need.
Google responded by removing AI Overviews for the specific search terms that were flagged, such as “what is the normal range for liver blood tests.” A spokesperson stated that Google acts according to its policies and makes “broad improvements.”
However, this is not a complete shutdown. If you change the wording of a health question a little, you can still get an AI summary. This shows that the system’s protections are limited, not total.
Vanessa Hebditch of the British Liver Trust noted that small variations in wording, such as searching “lft reference range,” still surface the same incorrect AI summaries, The Guardian reports. In other words, the dangerous information remains easy to find with a slightly different query.
AI Overviews have made high-profile mistakes before; the feature once suggested adding glue to pizza, drawing widespread criticism. But when it comes to health information, accuracy is essential. Medicine relies on context, probabilities, and exceptions, which are precisely the areas where large language models still struggle.

