Google’s AI search tool, AI Overviews, has been generating misleading and potentially harmful information. The tool recently suggested adding glue to pizza, treating a years-old joke as genuine advice; journalist Katie Notopoulos turned the glue-on-pizza suggestion into a viral stunt.
From brainstorming ideas to speeding up daily tasks, AI tools can enhance the way you work.
Dive into Google AI Essentials, a comprehensive course from #GrowWithGoogle designed to boost productivity ↓ https://t.co/TnMN7UR9qg— Google (@Google) June 10, 2024
Still seeing significantly fewer AI Overviews than what was appearing before, and it feels a bit like the ones that do show have a lot more text/images in the answer and fewer links? (Purely anecdotal, maybe others with data can chime in?)
One of my coworkers did have his first… pic.twitter.com/bB8QTJOu0J
— Lily Ray 😏 (@lilyraynyc) June 10, 2024
Google’s AI took this joke literally and recommended adding an eighth of a cup of Elmer’s glue to pizza sauce. To be clear: glue is not edible, and ingesting it can be toxic. When asked about glue on pizza, other AI tools such as Perplexity AI and ChatGPT correctly advised against it, affirming that glue is not safe for consumption and explaining the meme’s origins.
This mishap is not an isolated incident. Google’s AI also faces difficulties with other queries, including questions about Google’s own products.
"Every time someone like me reports on Google’s AI getting something wrong, we’re training the AI to be wronger." https://t.co/ilT3bHLf0A
— Damon Beres (@dlberes) June 12, 2024
It was also unable to correctly explain how to take a screenshot in Chrome’s Incognito mode, offering erroneous and conflicting advice.
These inaccuracies highlight the current limitations of AI in providing reliable information. Despite advances, AI-generated responses still misinterpret context and repeat misinformation found in their source material.
Google Search misfires with erroneous info
'Devastating' potential impact of Google AI Overview – new research. AI-written summaries were returned for 1/4 of news-related search queries in US mid-May so organic links to publisher articles were pushed far down the page. @pressgazette https://t.co/8cjFSam8lg
— Colleen Murrell 🦘🇪🇺🇮🇪🇬🇧 🇫🇷😎 (@ivorytowerjourn) June 13, 2024
Renée DiResta, technical research manager at Stanford’s Internet Observatory, addressed concerns that AI search tools like AI Overviews could accelerate the spread of erroneous medical advice to unsuspecting users. DiResta noted that the AI search tool does not appear to meet the high standard Google’s search policies have long set for health-related information. Google’s “Your Money or Your Life” policy acknowledges that for queries related to finance and health, search results must be held to an especially high standard of care.
However, instances have surfaced where AI-generated search results returned clearly wrong health information drawn from low-quality sites in the training data. DiResta expressed concern that AI Overviews does not follow this policy rigorously. Google has said it is aware of the problems and is working to make improvements.
Notably, the company mentioned that for topics like news and health, “additional triggering refinements were launched to enhance quality protections.”
While DiResta acknowledges the effort, she points out that it places a significant onus on users. Linking to source URLs gives users a way to review where a claim came from, but it also leans on the trust that Google’s search results have built up over years of delivering high-quality answers. Going forward, DiResta recommends that the “Your Money or Your Life” policy be upheld robustly in the implementation of AI search tools.
Ensuring that ethical guidelines remain foundational to new AI search capabilities is crucial in preventing medical misinformation. As users, we are responsible for verifying unexpected or unusual advice through multiple sources. Always double-check before following AI-generated guidance, and avoid ingesting any material that isn’t food-safe.
Johannah Lopez is a versatile professional who seamlessly navigates two worlds. By day, she excels as a SaaS freelance writer, crafting informative and persuasive content for tech companies. By night, she showcases her vibrant personality and customer service skills as a part-time bartender. Johannah's ability to blend her writing expertise with her social finesse makes her a well-rounded and engaging storyteller in any setting.