The doctor’s voice echoed: “These results are… unusual.” Panic flared as you scanned the confusing lab report, filled with numbers that meant nothing. Now, imagine trusting an AI to interpret that same report, only to find out later its advice was dangerously wrong.
Google has quietly pulled back some AI-generated health summaries just as the tech industry seems all-in on AI healthcare integration. This move follows a report in The Guardian that revealed some of the AI-provided information was misleading.
The Guardian reported that Google removed AI-generated search summaries for the queries “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” Now, these searches present short excerpts from traditional search results instead of AI Overviews.
According to The Guardian, the AI Overviews for those liver-related searches had “served up inaccurate health information and put people at risk of harm.” The summaries reportedly gave “masses of numbers, little context and no accounting for nationality, sex, ethnicity or age of patients.”
The Guardian cited experts who warned that these results could be dangerous. Someone with liver disease, for instance, might delay follow-up care if they rely on an AI-generated definition of what’s normal.
A Google spokesperson stated, “We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information. Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high quality websites. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.”
These removals come as more and more people seek health answers from AI, and the industry is responding.
OpenAI, the company behind ChatGPT, stated that about a quarter of its 800 million regular users submit a healthcare-related prompt every week, with over 40 million doing so daily.
Days later, OpenAI launched ChatGPT Health, a health-focused experience that connects with users’ medical records, wellness apps, and wearable devices. Soon after, it announced the acquisition of healthcare startup Torch, which tracks medical data such as lab results, recordings of doctor visits, and medications.
Anthropic is also in the race. It recently announced a set of AI tools that let healthcare providers, insurers, and patients use its Claude chatbot for medical purposes.
Anthropic says its tools can streamline tasks such as prior authorization requests and patient communications for hospitals and insurance companies. Patients can give Claude access to lab results and health records to generate summaries and explanations in plain language.
AI’s Medical Missteps: A Wake-Up Call?
Imagine relying on an AI chatbot to interpret a critical lab result. Anthropic’s pitch to hospitals and insurers is about saved time and resources, but should lives hinge on algorithms?
How accurate are AI health summaries typically?
Google claims the “vast majority” of its AI Overviews provide accurate information, especially for health topics. However, the liver function test incident highlights a critical flaw: even a mostly accurate summary can miss vital context, such as a patient’s age, sex, ethnicity, or pre-existing conditions. If an AI summary is a map, those missing details are crucial landmarks erased from it, leaving you stranded.
What measures are in place to correct AI inaccuracies in health information?
Google says it has an internal team of clinicians who review flagged information, and that it will “work to make broad improvements” when AI Overviews miss context and take action under its policies where appropriate. But let’s be honest: relying on internal review after the fact is reactive, like installing airbags after the crash.
As AI advances in healthcare, even small errors or missing context can have big consequences for patients.
The Rush to Integrate: Are We Ready?
With tens of millions of people already asking ChatGPT health questions every day, OpenAI, Anthropic, and even Google are eager to position themselves as indispensable partners in the healthcare ecosystem. But is this true innovation, or a land grab?
What are the potential risks of using AI for health information?
The risks are significant: inaccurate information, missed context, and a potential delay in seeking proper medical care. There’s also the issue of data privacy. Who has access to your medical records when you share them with an AI chatbot? How is that data being used? What security measures are in place to protect your most sensitive health information?
2026 is shaping up to be the year AI companies want to handle your health. Whether they are truly ready to shoulder that responsibility remains to be seen. And even if they are, are patients ready to trust them?