Google’s recent launch of AI-generated overviews in US search results is facing significant backlash. Major media outlets such as The New York Times, BBC, and CNBC have reported numerous errors and strange suggestions from this new feature.
Users have taken to social media to share countless examples of the feature’s bizarre and sometimes dangerous outputs. From recommending non-toxic glue as a pizza topping to suggesting that eating rocks provides nutritional benefits, these blunders are not just embarrassing but potentially harmful.
Replicated…. But I appreciate that the glue suggested is "non-toxic"!

Look, this isn't about "gotchas", this is about pointing out clearly foreseeable harms. Before–eg–a child dies from this mess. This isn't about Google, it's about the foreseeable effect of AI on society.

— MMitchell (@mmitchell_ai) May 23, 2024
According to The New York Times, Google’s AI overviews get even basic facts wrong. One notable mistake was claiming that Barack Obama was the first Muslim president of the United States. Another error stated that Andrew Jackson graduated from college in 2005.
These inaccuracies are more than a minor glitch. They seriously undermine trust in Google’s search engine, a platform that over two billion people worldwide depend on for reliable information, which makes the need for a fix urgent.
Manual removal and system refinements
The Verge has reported that Google is working to remove these strange AI-generated responses and improve its systems. This is a two-step process: problematic responses are first identified and removed manually, and the errors are then used to refine the AI overview feature so that similar mistakes are not repeated.
This flawed rollout of AI overviews is not an isolated issue for Google. In February, Google paused its Gemini chatbot after it generated inaccurate images of historical figures and refused to depict white people in most cases. Earlier, Google’s Bard chatbot was ridiculed for providing incorrect information about outer space, a blunder that wiped roughly US$100 billion off the company’s market value. These incidents highlight the recurring challenges Google faces in integrating AI into its products.
Despite these setbacks, industry experts cited by The New York Times argue that Google must continue advancing AI integration to stay competitive. However, the challenges of managing large language models, which can ingest false information and satirical posts, are becoming increasingly clear. While powerful, these models are prone to errors and misinterpretations, making their accuracy and reliability difficult to guarantee.
The debate over AI in search
The controversy surrounding AI overviews fuels the ongoing debate over AI’s risks and limitations. While the technology holds great potential, these mistakes highlight the need for thorough testing before it is widely deployed. As users and consumers, we also have a role to play in ensuring the responsible use of AI technology by reporting errors and providing feedback to companies like Google.
The BBC reports that Google’s rivals are also facing backlash for their attempts to incorporate more AI tools into their consumer products. For example, the UK’s data watchdog is investigating Microsoft after it announced a feature that would continuously take screenshots of users’ online activity. Additionally, actress Scarlett Johansson criticised OpenAI for using a voice similar to hers without permission.
What this means for websites and SEO professionals
The mainstream media’s focus on Google’s erroneous AI overviews brings attention to the issue of declining search quality. As Google works to correct these inaccuracies, this situation serves as a stark warning for the entire industry. The key takeaway is the urgent need to prioritise the responsible use of AI technology to ensure its benefits outweigh the risks.