Google’s AI recommends glue in pizza recipe


Google’s AI search tool, AI Overviews, has been generating misleading and potentially harmful information. It recently suggested adding glue to pizza, citing a joke that it misinterpreted as sincere advice. Journalist Katie Notopoulos put the suggestion to the test as a prank, actually making a glue pizza.

Google’s AI took the joke literally and recommended adding an eighth of a cup of Elmer’s glue to pizza sauce. To be clear, glue is not edible, and consuming it can be toxic and harmful to health. When asked about putting glue on pizza, other AI tools such as Perplexity AI and ChatGPT correctly advised against it, confirming that glue is not safe for consumption and explaining the meme’s origins.

This mishap is not an isolated incident. Google’s AI also struggles with other queries, including questions about Google’s own products: it was unable to correctly explain how to take a screenshot in Chrome’s Incognito mode, offering erroneous and conflicting advice.


These inaccuracies highlight the current limitations of AI in providing reliable information. Despite advances, AI-generated responses still occasionally misread context and parrot misinformation.

Google Search misfires with erroneous info

Renée DiResta, technical research manager at the Stanford Internet Observatory, addressed concerns that AI search tools like AI Overviews could accelerate the spread of erroneous medical advice to unsuspecting users. DiResta noted that the tool does not appear to adhere to the high standards Google’s search policies have long set for health-related information. Google’s “Your Money or Your Life” policy acknowledges that for queries related to finance and health, search results must be held to a very high standard of care.

However, instances have surfaced where AI-generated search results returned clearly wrong health information drawn from low-quality sites in the training data. DiResta expressed concern that AI Overviews does not appear to follow this policy rigorously. Google has said it is aware of these problems and is working on improvements.

Notably, the company mentioned that for topics like news and health, “additional triggering refinements were launched to enhance quality protections.”

While DiResta acknowledges the effort, she points out that it places a significant onus on users. Pointing to source URLs offers transparency and lets users review sources, but it also trades on the trust search has built over years of delivering high-quality results. Going forward, DiResta recommends that the “Your Money or Your Life” policy be upheld robustly in the implementation of AI search tools.


Ensuring that ethical guidelines remain foundational to new AI search capabilities is crucial in preventing medical misinformation. As users, we are responsible for verifying unexpected or unusual advice through multiple sources. Always double-check before following AI-generated guidance, and avoid ingesting any material that isn’t food-safe.


About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

