Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced sweeping changes to its content moderation policies. The changes, described as a move towards fostering free expression, include ending its third-party fact-checking programme and relaxing restrictions on controversial topics. These decisions are part of a broader strategy to streamline content policies while focusing on high-severity violations.
In a blog post titled "More Speech and Fewer Mistakes", Meta's new chief global affairs officer, Joel Kaplan, outlined the changes. The update is intended to reduce over-enforcement, which Kaplan said had led to unnecessary censorship and limited political debate.
Major changes to content moderation policies
The overhaul focuses on three key areas:
1. Ending third-party fact-checking: Meta is phasing out its partnerships with third-party fact-checking organisations and introducing a "Community Notes" model. This approach, similar to the one used by X (formerly Twitter), allows users to add context to posts and flag misinformation directly.
2. Lifting topic restrictions: Meta will now ease restrictions on topics considered part of "mainstream discourse". Instead of tightly moderating these discussions, the company will prioritise addressing illegal content and high-severity violations, such as terrorism, child exploitation, and fraud.
3. Personalised political content: Users will have more control over the political content they see, enabling them to tailor their feeds to their preferences. This move embraces a more individualised experience, even at the risk of reinforcing echo chambers.
The timing of these changes is notable: they come just weeks before a new US presidential administration takes office. President-elect Donald Trump, whose accounts Meta once banned, has called for a broader interpretation of free speech, which aligns with Meta's new approach.
Meta has faced criticism from all sides in recent years: some have accused the platform of not doing enough to curb misinformation, while others argue that its rules were overly restrictive and politically biased. Kaplan acknowledged these concerns, saying Meta had over-enforced its policies and censored content that did not violate its guidelines.
The third-party fact-checking programme, introduced in 2016 after accusations that Facebook had helped spread misinformation during that year's US presidential election, was one of Meta's most significant efforts to combat fake news. However, the programme drew criticism over potential bias in the selection of content to fact-check.
Shifting accountability
This policy shift also reflects changes in Meta’s leadership. Joel Kaplan, a prominent Republican, has replaced Nick Clegg as Meta’s global affairs head. Kaplan has signalled a desire to align more closely with the incoming Trump administration’s free speech priorities.
The company is also relocating its trust and safety teams from California to Texas and other US locations, a move it frames as a way to diversify perspectives and broaden its approach to content moderation.
Despite these changes, some question whether Meta's relaxed stance will leave the platform more susceptible to misinformation. The Oversight Board, Meta's independent review body, welcomed the update and said it would work closely with the company to refine its approach to free speech in 2025.
Meta’s evolving priorities
Meta’s latest actions depart from its previous commitment to rigorous content moderation. CEO Mark Zuckerberg has expressed interest in a more collaborative relationship with the new administration, potentially influencing these policy adjustments.
In a symbolic gesture, Meta recently added UFC chief executive Dana White, a prominent Trump supporter, to its board of directors. These developments underscore Meta's focus on repositioning itself during this period of political change.
"Meta's platforms are built to be places where people can express themselves freely. That can be messy, but it's the essence of free expression," Kaplan wrote in the blog post.
As Meta redefines its content moderation strategy, the impact on users and global discourse remains to be seen.