On December 16, Instagram head Adam Mosseri took to Threads to discuss the growing challenges artificial intelligence (AI) poses to the online world. In his posts, Mosseri cautioned users against blindly trusting images shared on social media, as AI can now create content that closely mimics reality. He emphasised the importance of understanding the source behind posts, calling on social platforms to play a more significant role in offering context that helps users navigate the digital landscape.
“Our role as internet platforms is to label content generated as AI as best we can,” Mosseri stated. However, he admitted that no system is perfect and that some AI-generated content will slip through unlabelled. For this reason, he urged platforms to go beyond labelling and provide additional context about the accounts sharing content, empowering users to judge its trustworthiness for themselves.
Why context matters in the AI era
Mosseri’s comments come as the internet becomes increasingly saturated with AI-generated content. From photorealistic images to text created by advanced language models, the line between human-made and machine-made content is becoming harder to distinguish.
Drawing comparisons to AI chatbots, which have been known to share false information confidently, Mosseri underscored the importance of verifying sources. He noted that relying on reputable accounts and fact-checking claims can help users assess the accuracy of what they encounter online.
Meta’s platforms, including Instagram, currently offer limited tools for providing this type of context. While the company has hinted at upcoming changes to its content moderation policies, no specific features have been announced to address the issues Mosseri raised.
Is Meta looking at user-led moderation?
Mosseri’s vision for combating misinformation resembles initiatives already in place on other platforms. For instance, X (formerly Twitter) uses Community Notes to let users add context to posts, while YouTube and Bluesky have introduced custom moderation tools to help users filter content.
Whether Meta will follow in their footsteps remains unclear, but the company has previously adopted features inspired by competitors. If Meta implements such measures, it could significantly shift how the platform addresses misinformation and builds trust in the era of AI-generated content.
Mosseri’s remarks signal that platforms like Instagram must adapt quickly to the challenges posed by AI. By giving users more context and better tools for evaluating content, social media companies can help maintain trust in an increasingly complex digital world.