Meta's AI assistant recently made headlines for an error involving the attempted assassination of former President Donald Trump. The AI incorrectly stated that the event didn't happen, which a company executive has now attributed to the technology's inherent limitations.
AI assistant’s incorrect response
In a blog post published on July 30, Joel Kaplan, Meta's global head of policy, described the AI's responses to questions about the shooting as "unfortunate." Initially, Meta AI was programmed to avoid responding to questions about the attempted assassination. However, after users began to notice this restriction, the company decided to remove it. Despite this change, the AI provided incorrect answers in a few instances, sometimes asserting that the event didn't occur. Kaplan assured that the company is actively working to correct these errors.
"These types of responses are called hallucinations, an industry-wide issue seen across all generative AI systems. It's an ongoing challenge to see how AI handles real-time events in the future," Kaplan explained. He added, "Like all generative AI systems, models can return inaccurate or inappropriate outputs. We'll continue to address these issues and improve these features as they evolve and more people share their feedback."
<blockquote class="twitter-tweet" data-media-max-width="560"><p lang="en" dir="ltr">Meta AI won't give any details on the attempted ass*ss*nation.<br><br>We're witnessing the suppression and coverup of one of the biggest most consequential stories in real time.<br><br>Simply unreal. <a href="https://t.co/BoBLZILp5M">pic.twitter.com/BoBLZILp5M</a></p>— Libs of TikTok (@libsoftiktok) <a href="https://twitter.com/libsoftiktok/status/1817654239587701050?ref_src=twsrc%5Etfw">July 28, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
Broader industry challenges
This incident is not isolated to Meta. On the same day, Google had to deny claims that its search autocomplete feature was censoring results about the assassination attempt. Former President Trump commented on the situation in a post on Truth Social, accusing Meta and Google of attempting to influence the election. "Here we go again, another attempt at RIGGING THE ELECTION!!! GO AFTER META AND GOOGLE," he wrote.
Since the launch of ChatGPT, the tech industry has been grappling with how to manage generative AIโs tendency to produce false information. Some companies, including Meta, have tried to anchor their chatbots with quality data and real-time search results to mitigate these issues. However, this incident demonstrates the difficulty of overcoming the inherent design of large language models, which can sometimes generate inaccurate information.
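The "anchoring" approach mentioned above is often implemented by retrieving trusted, up-to-date text and instructing the model to answer only from it. As a rough illustration (not Meta's actual implementation; the function name and prompt wording here are hypothetical), a grounded prompt might be assembled like this:

```python
def build_grounded_prompt(question, snippets):
    """Assemble a prompt that anchors a model's answer in retrieved sources.

    Illustrative sketch only: constraining the model to supplied, vetted
    text can reduce (but not eliminate) hallucinated claims about
    real-time events.
    """
    # Number each retrieved snippet so the model can cite it.
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below. If the sources do not "
        "cover the question, say you don't know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What happened at the rally?",
    ["Wire report: shots were fired at the rally; the candidate was injured."],
)
print(prompt)
```

Even with grounding like this, the model can still paraphrase sources incorrectly, which is why Kaplan frames hallucination as an ongoing, industry-wide challenge rather than a solved problem.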
Ongoing improvements
Metaโs approach to this problem involves continuous improvements and user feedback. Kaplan highlighted that the company is committed to refining its AI systems to minimise inaccuracies. He emphasised that while generative AI has advanced significantly, it still faces challenges, especially when dealing with real-time events.
The situation underscores a broader issue within the AI industry: the balance between providing helpful, accurate information and managing the AI’s propensity for generating incorrect or misleading content. Companies like Meta and Google must find more effective ways to ensure their systems deliver reliable information as AI technology evolves.
Meta's commitment to addressing these challenges and improving AI systems is crucial. By doing so, the company aims to enhance the reliability of its AI assistants, ultimately providing users with more accurate and trustworthy responses.