Google shifts blame for the erroneous AI Overviews spectacle to a ‘data void’ on uncommon topics coupled with faked screenshots, but still makes “a dozen technical improvements”

Google addresses its AI Overviews search issues after the feature misled users by recommending eating rocks and glue.


What you need to know

Every major tech corporation is quickly hopping into the great AI race. But, often forgotten or ignored, pace doesn’t necessarily equate to perfect execution. Microsoft has predominantly had smooth sailing with the technology after making a multi-billion-dollar investment in OpenAI. Its success in the category is well outlined in its latest earnings report, and it’s now the world’s most valuable company ahead of Apple, with over $3 trillion in market capitalization.

Unfortunately, Google’s blatant attempt to follow in Microsoft’s footsteps and replicate its success in AI has seemingly flopped, if last week’s AI Overviews spectacle is anything to go by. The feature was spotted bizarrely recommending eating rocks and glue, and potentially even suggesting suicide, despite Google having recently acquired exclusive rights to Reddit content to power its AI.

Did Google AI Overviews go insane?

Google recently published a new blog post explaining what happened with the AI feature and what contributed to it generating misleading information and recommendations for queries. Head of Google Search Liz Reid indicated that AI Overviews are different from chatbots or other broadly available LLM products that create responses based on training data. Instead, the feature is powered by a customized language model integrated with Google’s web ranking systems. This way, the feature presents users with well-curated, high-quality search results, including relevant links.

According to the Google Search lead:

“AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”
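For readers curious what a “customized language model integrated with web ranking systems” might look like in practice, here is a minimal, hypothetical sketch in Python. This is not Google’s implementation; the SearchResult, rank_results, and grounded_overview names are invented for illustration. The point is simply that, in a setup like the one Reid describes, the model only summarizes text pulled from already-ranked results rather than answering from its training data alone.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    title: str
    snippet: str
    quality_score: float  # stand-in for real ranking signals

def rank_results(candidates: list[SearchResult]) -> list[SearchResult]:
    """Hypothetical ranking step: order candidate pages by quality."""
    return sorted(candidates, key=lambda r: r.quality_score, reverse=True)

def grounded_overview(query: str, candidates: list[SearchResult]) -> str:
    """Hypothetical generation step: the model only sees ranked snippets,
    so its answer is tied to (and can link back to) those sources."""
    top = rank_results(candidates)[:3]
    context = "\n".join(f"- {r.title}: {r.snippet} ({r.url})" for r in top)
    # A real system would hand this context to a language model;
    # echoing it here is enough to show the grounding idea.
    return f"Overview for '{query}', drawn from:\n{context}"
```

The design choice this illustrates is the one Reid emphasizes: because the generation step is fed curated search results, failures tend to come from bad or sparse source material rather than free-form “hallucination.”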

While addressing the erroneous and “insane” search results from the AI Overviews feature, Reid stated the feature is optimized for accuracy and was run through extensive tests, including robust red-teaming efforts, before it was shipped. The Search lead also indicated that the team had spotted “nonsensical” searches potentially aimed at creating erroneous results.

Reid also indicated that some screenshots broadly shared across social media platforms were fabricated, causing users to think that Google had returned dangerous results for topics like smoking while pregnant. The company claims this isn’t the case and recommends users try running such searches using the tool to confirm.

Google admitted that the feature provided inaccurate AI Overviews for some searches. Interestingly, Google claims “these were generally for queries that people don’t commonly do.” The company further indicated that before screenshots of people using the feature to find out “How many rocks should I eat?” went viral, practically no one asked that question.

Additionally, Google says there’s limited quality content covering such topics while referring to the phenomenon as a data void. “In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.”
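To make the “data void” idea concrete, here is a small, hypothetical guard in Python. The should_show_overview function and its thresholds are invented for illustration and are not Google’s actual logic; they simply show how a system could decline to generate an overview when too few high-quality sources cover a query.

```python
def should_show_overview(source_quality_scores: list[float],
                         min_sources: int = 3,
                         min_quality: float = 0.6) -> bool:
    """Hypothetical 'data void' guard: only generate an AI overview when
    enough high-quality sources actually cover the query."""
    strong = [score for score in source_quality_scores if score >= min_quality]
    return len(strong) >= min_sources

# A fringe query like "how many rocks should I eat?" has almost no serious
# coverage, so the available sources would be sparse and low quality:
print(should_show_overview([0.2, 0.15]))      # False -> fall back to plain results
print(should_show_overview([0.9, 0.8, 0.7]))  # True  -> enough grounding for an overview
```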

What’s Google doing to address these critical issues?

Google has highlighted several measures that won’t necessarily fix problematic queries one by one but will address broad sets of queries through updates, including “a dozen technical improvements” to its core search systems.

Lastly, Google has also indicated that it will keep track of feedback from the tool’s users, as well as external reports, to inform its decisions on how to improve the experience.
