False and misleading health information in Google’s artificial intelligence summaries is putting people at risk of harm, a Guardian investigation has found.
The company has said that its AI overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.
But some summaries, which appear at the top of search results, provide inaccurate health information and put people at risk of harm.
In a case that experts described as “really dangerous”, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this is exactly the opposite of what should be recommended, and could increase patients’ risk of dying from the disease.
In another “alarming” example, the company provided false information about vital liver function tests, which could mislead people with severe liver disease into thinking they were healthy.
Google also provided “grossly inaccurate” information for women searching for answers about cancer tests, which experts say could result in people dismissing real symptoms.
A Google spokesperson said that many of the health examples shared with the company were “incomplete screenshots”, but that from what it could assess they were “linked to well-known, reputable sources and recommend taking expert advice”.
The Guardian’s investigation comes amid growing concern that AI-generated information could mislead consumers who assume it is trustworthy. In November last year, a study found that AI chatbots on various platforms gave inaccurate financial advice, and similar concerns have been raised about AI summaries of news stories.
Sophie Randall, director of the Patient Information Forum, which promotes evidence-based health information for patients, the public and healthcare professionals, said the examples showed that “Google’s AI overviews can place inaccurate health information at the top of online searches, putting people’s health at risk”.
Stephanie Parker, director of digital at Marie Curie, an end-of-life charity, said: “People turn to the internet in moments of anxiety and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.”
The Guardian uncovered multiple cases of inaccurate health information in Google’s AI overviews after concerns were raised by a number of health groups, charities and professionals.
Anna Jewell, director of support, research and impact at Pancreatic Cancer UK, said it was “completely wrong” to advise patients to avoid high-fat foods. “Doing so can be really dangerous and can jeopardize a person’s chances of getting well enough to undergo treatment,” she said.
Jewell said: “The Google AI response says that people with pancreatic cancer should avoid high-fat foods and provides a list of examples. But if someone followed what was suggested in the search result, they would not be able to consume enough calories, would struggle to gain weight, and would be unable to tolerate chemotherapy or potentially life-saving surgery.”
Typing “what is the normal range for a liver blood test” also yields confusing information, with a jumble of figures, little context, and no account of a patient’s nationality, gender, ethnicity or age.
Pamela Healy, chief executive of the British Liver Trust, said the AI summaries were worrying. “Many people with liver disease don’t show any symptoms until the end stages, which is why it’s so important they get tested. But what Google AI says is ‘normal’ may actually differ significantly from what is considered normal.
“This is dangerous because it means some people with severe liver disease may think their results are normal and not attend a follow-up appointment.”
A search for “vaginal cancer symptoms and tests” listed the Pap test as a test for vaginal cancer, which is incorrect.
Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said: “This is not a test to detect cancer, and certainly not a test to detect vaginal cancer – that is complete misinformation. Receiving such misinformation could lead someone with symptoms of vaginal cancer to not get checked because they had a clear result at a recent cervical screening.
“We were also concerned by the fact that the AI summary changed when we did the exact same search, resulting in a different response each time, derived from different sources. This means people are getting a different answer depending on the search, and that’s not good enough.”
Lamnisos said she was extremely concerned. “Some of the results we’ve seen are really worrying and could potentially put women at risk,” she said.
The Guardian also found that the Google AI overview provided misleading results for searches about mental health conditions. “This is a huge concern for us as a charity,” said Stephen Buckley, Mind’s head of information.
Some AI summaries for conditions such as psychosis and eating disorders offer “very dangerous advice” and “could be inaccurate, harmful or lead people to avoid seeking help,” Buckley said.
He said some summaries also missed important context or nuance. “They may suggest accessing information from sites that are inappropriate… and we know that when AI summarizes information, it can often reflect existing biases, stereotypes or stigmatizing narratives.”
Google said most of its AI overviews were factual and helpful, and that it continually improved their quality. It said the accuracy rate of AI overviews was on a par with that of its other search features, such as featured snippets, which have been in place for more than a decade.
The company also said it would take appropriate action under its policies when AI overviews misinterpreted web content or missed context.
A Google spokesperson said: “We invest significantly in the quality of AI overviews, particularly for topics like health, and the vast majority provide accurate information.”