A leading mental health charity has sounded the alarm about the risks of Google’s AI‑generated summaries used in search results, saying the tool can deliver harmful and misleading advice on critical issues such as mental health. The controversy has touched off a major inquiry and sparked urgent debate over how artificial intelligence should be used to inform people seeking sensitive health information.
Mind Launches Inquiry After Investigation Reveals Risk
Today, the England‑and‑Wales‑based mental health charity Mind announced a year‑long commission to study the impact of AI on mental health information and wellbeing. The initiative follows a major investigation by The Guardian that exposed how Google’s AI Overviews can serve up inaccurate, misleading, or dangerously wrong advice when users search online for health and mental wellbeing topics.
The AI Overviews feature, shown to about 2 billion people per month, offers concise answers above traditional search results. While the intent is to give quick summaries of complex topics, the investigation revealed that these AI‑generated responses can misrepresent medical and mental health facts, undermining users’ ability to make informed decisions and potentially putting their wellbeing at risk.
Dr. Sarah Hughes, chief executive officer of Mind, described the situation as deeply concerning, emphasizing that “dangerously incorrect” mental health advice remains in circulation despite some removals by Google. Mind hopes its inquiry will recommend stronger safeguards to protect people, especially those in emotional distress.
Expert Warns Harmful Summaries Can Mislead Users
Rosie Weatherley, information content manager at Mind, summed up the stakes. She called the AI Overviews “very dangerous” because they can present harmful inaccuracies as uncontroversial facts. Her remarks were part of a broader commentary tied to The Guardian’s coverage.
Weatherley explained that before AI‑summarized results appeared, people could follow links to reputable medical sources and read full articles written by health professionals. This gave searchers deeper context and often led them to better support. But now, she said, AI Overviews often replace that journey with a single definitive‑sounding answer that can end information‑seeking prematurely.
In a short internal test run by Mind’s team, Weatherley and colleagues found examples where the AI Overviews:
- Suggested that starvation was healthy
- Claimed mental health problems are caused solely by chemical imbalances
- Mistakenly confirmed a person’s imagined stalker as real
- Asserted that most benefit claims for mental health conditions are malingering
These responses are not just incomplete or unhelpful; they distort hard‑won evidence and could worsen stress, stigma, and confusion for people already struggling.
Why Misleading AI Health Advice Is a Growing Concern
The root of the problem is how AI Overviews are created. They rely on generative technology that synthesizes snippets of information from the web into a short summary. While traditional search results list separate links to authoritative sites, AI Overviews present one unified answer that can sound confident even when it is wrong.
Medical and patient advocate groups argue that this design encourages users to treat the summary as reliable advice, even when nuance and disclaimers are missing. A recent related analysis highlighted that the positioning of AI Overviews — at the very top of search results — can give them undue authority in users’ eyes. Experts say this is especially risky for vulnerable individuals seeking immediate answers about symptoms or mental health needs.
Some media reports also note that Google’s summaries have sometimes included scam phone numbers or wrong contact information, which could put users at financial or personal risk by leading them to fraudulent sources.
Google Responds but Critics Say It’s Not Enough
Google asserts that most AI Overviews are accurate and helpful, and that where problems have been identified, the company has taken steps to improve the system. In some earlier cases reported by independent outlets, Google removed AI Overviews for certain medical queries after experts flagged serious errors, such as incorrect interpretations of liver test results. Mental health summaries, however, remain in place, and disclaimers about the potential for AI errors are often easy to miss.
A Google spokesperson has said that AI Overviews can and should help users quickly find trustworthy information, adding that the company works with clinical reviewers when evaluating health content. Google also encourages searchers to read full articles from experts and seek professional help when needed.
Despite these assurances, Mind and other charities argue that the present design creates a dangerous illusion of certainty, especially for people seeking help during emotional crises. By trimming away essential context, AI Overviews can give incomplete answers that lead people away from proper care.
Broader Debate on AI Safety and Health Information
The controversy over Google’s AI Overviews is part of a larger global conversation about how artificial intelligence should be regulated, particularly when it touches on health and safety. Unlike typical search algorithms, generative AI can produce text that resembles human‑written advice — something that raises ethical concerns about accuracy, bias, and accountability in automated health guidance.
Some mental health and AI experts argue that AI should never replace the depth and nuance of human‑generated medical content, especially for complex emotional and psychological issues. They recommend clearer warnings, mandatory professional review for medical summaries, and better user education about the limits of AI answers.
At the same time, AI has potential benefits. With careful guardrails and collaboration with mental health professionals, generative technologies could help support access to information and connect people with resources faster than ever before. But for now, charities like Mind believe the risks outweigh the rewards when flawed summaries circulate unfiltered in moments of vulnerability.
As this debate evolves, the world will be watching whether tech giants, policymakers, and health advocates can find a way to harness AI’s power without putting lives at risk. Share your thoughts and tell us how you think AI should handle health information by commenting below. Use hashtag #AISafeHealth to join the conversation.