Are you experiencing a headache, or could it be a sinus infection? Wondering what a stress fracture feels like? Concerned about chest pain? If you search for answers to these questions on Google, you may come across responses generated by artificial intelligence. Google recently introduced a feature called AI Overviews, which uses generative AI, a machine-learning technology trained on vast amounts of internet data, to provide conversational answers to certain search queries within seconds.
Since its launch, users have encountered numerous inaccurate and strange responses from the tool across a range of topics. But experts say the stakes are especially high when it answers health-related questions. While the technology can potentially guide people toward healthier habits or necessary medical care, it also carries the risk of providing inaccurate information. The system has been known to fabricate facts and, if it draws on unreliable sources, may offer advice that contradicts medical guidance or poses a risk to users' health.
In one example of the system producing flawed answers from unreliable sources, users who asked “how many rocks should I eat” were advised to consume at least one rock a day for vitamins and minerals. The advice had been scraped from The Onion, a satirical website.
According to Dr. Karandeep Singh, the chief health AI officer at UC San Diego Health, it is important to be cautious of the information you read and trust, especially when it comes to health-related topics. He emphasizes the significance of considering the source of the information.
Hema Budaraju, a senior director of product management at Google who helps lead work on AI Overviews, said that health-related searches have additional safeguards in place, though she declined to describe them in detail. Searches that are deemed dangerous or explicit, or that indicate someone may be in a vulnerable situation, such as searches related to self-harm, do not generate AI summaries, she said.
Google did not disclose a comprehensive list of websites that support the information in AI Overviews. However, they stated that the tool works in conjunction with the Google Knowledge Graph, an existing system that aggregates billions of facts from numerous sources.
For instance, when asked about the health benefits of chocolate, Google’s answer pulls up information from research on heart health, mental health, and other relevant areas.
Responses to health questions like this one typically rely on credible sources. However, in this particular case, the answer also includes information from Venchi, an Italian chocolate and gelato company.
A similar search for the question “Is chocolate healthy for you?” yielded a response from various sources, including the website of ZOE, a company that offers at-home “gut intelligence tests” and a nutritional app.
Although the new search responses do mention some sources, such as the Mayo Clinic, WebMD, the World Health Organization, and PubMed (a scientific research hub), this list is not exhaustive. The tool can also gather information from sources like Wikipedia, blog posts, Reddit, and e-commerce websites. Additionally, it does not provide users with details about which facts come from which sources.
When conducting a search, users can usually differentiate between a reliable medical website and a candy company. However, if information from multiple sources is combined into one block of text, it can lead to confusion.
Dr. Seema Yasmin, the director of the Stanford Health Communication Initiative, expressed concern about whether people are even paying attention to the source of the information. She questioned whether users have been adequately educated to look beyond a quick answer. Dr. Yasmin’s research on misinformation has made her skeptical about the average user’s willingness to dig deeper.
According to Dr. Dariush Mozaffarian, a cardiologist and professor of medicine at Tufts University, the chocolate answer is mostly accurate and effectively summarizes the research on chocolate’s health benefits. However, it fails to differentiate between strong evidence from randomized trials and weaker evidence from observational studies. Additionally, it does not provide any caveats regarding the evidence presented.
While it is true that chocolate contains antioxidants, the claim that it can prevent memory loss has not been definitively proven and requires further clarification. Presenting these claims together may give the impression that some are more firmly established than they actually are.
Furthermore, the accuracy of the answers provided by the AI can change as the technology itself evolves, even if the underlying scientific evidence remains the same.
According to a Google spokesperson, the company has implemented disclaimers in responses where necessary, cautioning users that the information should not be taken as medical advice.
It is unclear how AI Overviews assesses the strength of evidence, or whether it accounts for contradictory research findings, such as conflicting studies on the health benefits of coffee. Dr. Yasmin and other experts also questioned whether the tool relies on outdated or disproven scientific research.
Dr. Danielle Bitterman, a physician-scientist specializing in artificial intelligence, emphasized the importance of human judgment in evaluating the quality of sources. This critical decision-making process is routinely performed by clinicians who carefully analyze the evidence.
According to experts, if tools like AI Overviews are to fulfill that role, a better understanding is needed of how they navigate different sources and apply a critical lens to generate a summary. This is concerning because the new system prioritizes the AI Overview response over individual links to reputable medical websites like the Mayo Clinic and the Cleveland Clinic, which have traditionally been at the top of search results for health-related queries. A Google spokesperson clarified that AI Overviews will match or summarize the information from the top search results, but it is not intended to replace that content. Instead, it aims to provide users with an overview of the available information.
The Mayo Clinic declined to comment on the new responses. A representative from the Cleveland Clinic said that individuals seeking health information should rely on reliable sources and consult a healthcare provider if they have any symptoms. A representative from Scripps Health, a healthcare system in California mentioned in some AI Overviews summaries, said that citations in Google’s AI-generated responses could be useful in establishing Scripps Health as a trusted source of health information. However, the representative expressed concern about being unable to verify the content produced by AI in the same manner as their own content, which is reviewed by medical professionals.
According to experts, when it comes to medical questions, it’s not just about the accuracy of the answer, but also how it is presented to users. Dr. Richard Gumina, the director of cardiovascular medicine at the Ohio State University Wexner Medical Center, pointed out that the AI response to the question “Am I having a heart attack?” provided a useful summary of symptoms. However, he noted that he had to read through a long list of symptoms before the text advised him to call 911. To further test the tool, Dr. Gumina searched for “Am I having a stroke?” and found that the response was more urgent, immediately instructing users to call 911 in the first line. Based on his experience, he would strongly recommend patients experiencing symptoms of a heart attack or stroke to seek immediate help.
Health experts advise individuals to exercise caution when seeking health information from AI responses. Users should pay attention to the disclaimers provided in the fine print of certain AI Overviews answers, which state that the information is intended for informational purposes only. For medical advice or diagnosis, it is recommended to consult a professional, as generative AI is still in an experimental phase.