According to Meta, AI-generated misinformation posed a far smaller challenge during major elections in 2024 than experts initially feared. In a recent update, the company said that AI-driven content accounted for less than 1% of the election-related misinformation flagged and reviewed by its fact-checkers.
“During the election period in the major elections listed above, AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation,” Meta stated in a blog post. The analysis spanned elections in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico, Brazil, and the EU Parliamentary elections.
The news comes after months of concerns from government officials and researchers about generative AI’s potential to amplify election-related falsehoods. With over 2 billion people voting globally in 2024, many anticipated widespread use of AI tools to create deepfakes or fuel misinformation campaigns. However, according to Meta’s President of Global Affairs, Nick Clegg, those fears largely failed to materialise.
Meta’s proactive approach reduced risks
Clegg acknowledged that public concerns were valid, given the rapid advancements in generative AI. However, he said the company’s monitoring revealed only a modest impact. “People were understandably concerned about the potential impact of generative AI on this year’s elections,” he explained during a briefing. “There were warnings about risks such as widespread deepfakes and AI-enabled disinformation campaigns. From what we’ve seen across our platforms, these risks were limited in scope.”
While Meta did not disclose the exact volume of AI-related misinformation flagged by its systems, it highlighted proactive measures. For example, its in-house AI image generator blocked 590,000 attempts to create fake images of prominent figures like Donald Trump, Joe Biden, and Kamala Harris in the US during the weeks leading up to election day.
Meta also expanded its AI content labelling efforts earlier this year. The move followed recommendations from the Oversight Board and was designed to increase transparency and limit misuse.
Striking a balance between enforcement and free expression
Despite these efforts, Meta acknowledges it has yet to strike the right balance between tackling misinformation and supporting free expression. “We know that when enforcing our policies, our error rates are still too high, which interferes with free expression,” Clegg admitted.
The company has also adjusted its stance on political content, reflecting a broader strategy to reduce its focus on politics. For example, Instagram and Threads no longer recommend political content by default, while Facebook has deprioritised news in its feeds. CEO Mark Zuckerberg has also expressed regret over some pandemic-era misinformation policies, calling for a more refined approach.
As Meta looks ahead, it aims to improve the precision of its enforcement methods while maintaining its commitment to transparency and safety. The company has emphasised its dedication to learning from past challenges and building on its efforts to safeguard future elections.