Users catch Google AI Overviews inventing fake facts

When users search on Google, they expect facts, not fiction. However, some Google AI Overviews are raising eyebrows because they give bizarre or inaccurate answers to random queries.

The feature, introduced last May, aims to save users time by offering quick generative summaries at the top of search results. However, several examples on social media show the tool inventing definitions and linking to sources that do not actually support its claims.

What are Google AI Overviews?

According to Google, AI Overviews “appear in Google Search results when our systems determine that generative responses can be especially helpful — for example, when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

But many users say the feature is backfiring. On social media, people are sharing examples of AI Overviews creating meanings for random or made-up phrases.

Real-world examples show errors

In one test, we searched for “milk the thunder meaning,” which produced a Google AI Overview citing a source. However, the linked article, “Weird English Phrases and Their Meaning” from EF English Live, did not mention “milk the thunder” at all. Instead, it separately explained “steal someone’s thunder” and “crying over spilt milk.”

When we repeated the same search later, the AI Overview cited entirely different sources.

Another example highlighted by Wired involved a user searching “you can’t lick a badger twice meaning.” The AI Overview claimed it was an idiom meaning “you can’t trick or deceive someone a second time after they’ve been tricked once” — despite no such phrase existing.

While not every random phrase triggers a fake result, enough cases have surfaced that it’s becoming a wider concern.

A pattern of AI hallucinations

During the Super Bowl, travel blogger Nate Hake pointed out several errors in Google’s campaign highlighting small businesses from all 50 states. In Wisconsin’s ad, set in America’s Dairyland, Google’s chatbot Gemini helped a cheesemonger write a product description claiming Gouda accounts for 50 to 60 percent of the world’s cheese consumption.

Hake fact-checked the claim on X, formerly Twitter, writing, “Gemini provides no source, but that is just unequivocally false. Cheddar & mozzarella would like a word.”

Google’s response and public skepticism

Google says an AI hallucination occurs when a model generates an incorrect or misleading result.

Because AI models are trained on massive datasets and generate answers by predicting patterns in that data, their accuracy depends heavily on the quality of the underlying data.

A Google executive later responded to Hake’s post, saying it was not technically a hallucination because “Gemini is grounded in the web,” and users can check multiple sources. The executive noted that several sites do repeat the 50-60% statistic.

Despite these explanations, public trust in AI remains low. Edelman’s 2025 Trust Barometer shows that while 72% of people in China trust AI, only 32% of Americans do. Edelman notes that some people see AI as a force for progress, while others worry about unintended consequences.

Adding to the concern, Google spokesperson Meghann Farnsworth told Wired that the system tries to offer context whenever possible, but acknowledged that “nonsensical prompts” are still likely to produce strange AI Overviews.

Ella Rae Greene, Editor In Chief
