Tuesday, 8 April 2025

Meta ends fact-checking in the U.S. as moderation policies shift

Meta ends U.S. fact-checking and shifts to community-based moderation, sparking concerns about rising misinformation across its platforms.

Meta has officially ended its use of fact-checkers in the United States, a decision that will take full effect on Monday, June 10. Joel Kaplan, Meta's Chief Global Affairs Officer, confirmed the move, which marks a major shift in the company's approach to content moderation.

This change was first announced in January, alongside other updates that relaxed the company's rules around harmful or misleading content. At the time, Meta CEO Mark Zuckerberg framed the decision as part of a broader effort to prioritise "free speech." However, the move has drawn concern from critics who worry that it may open the door to more misinformation and harm to vulnerable communities.

A shift in approach to speech and moderation

Meta's latest changes come at a politically sensitive time. When the updates were introduced earlier this year, Zuckerberg had just donated US$1 million to former President Donald Trump's inauguration fund and attended the event. Not long after, he appointed Dana White, CEO of UFC and a known Trump supporter, to Meta's board of directors.

In a video statement, Zuckerberg described the recent election cycle as a "cultural tipping point" for freedom of speech. He argued that it was time to refocus on allowing open conversations, even if the topics were controversial.

However, some of Meta's updated policies have raised eyebrows. According to the company's hateful conduct policy, Meta now permits users to claim someone has a mental illness or abnormality based on their gender or sexual orientation. The platform justifies this as part of "political and religious discourse" on topics like transgender identity and homosexuality.

Fact-checking replaced by community notes

Instead of professional fact-checkers, Meta is now adopting a community-based system similar to "Community Notes" used on Elon Musk's platform, X (formerly Twitter). This new system relies on users to add context to potentially misleading posts across Facebook, Instagram, and Threads.

While this approach may help surface additional information, experts say it is far less effective on its own. Professional fact-checkers typically work quickly and apply clear standards, helping platforms limit the spread of false information. With their removal, critics say, the spread of fake news could worsen.

Meta's decision to step away from fact-checking appears to align with its business interests. The less moderation there is, the more content can circulate. Since Meta's platforms use algorithms that prioritise posts that get strong reactions, whether positive or negative, more controversial or misleading posts are likely to get more attention.

The rise in misinformation is already visible

Since Meta began scaling back its fact-checking efforts earlier this year, there has already been a rise in false claims spreading across its platforms. One example involves a fake story that U.S. Immigration and Customs Enforcement (ICE) would pay US$750 to individuals who report undocumented immigrants. The rumour went viral and was shared widely before being debunked.

The person behind that post welcomed Meta's policy changes, telling investigative outlet ProPublica that the end of fact-checking was "great information."

In January, Kaplan summarised the company's new direction as follows: "We're getting rid of several restrictions on immigration, gender identity, and gender that are the subject of frequent political discourse and debate. It's not right that things can be said on TV or the floor of Congress but not on our platforms."

As of this week, Meta no longer has any fact-checkers operating in the U.S. The move signals a new chapter for the tech giant, one focused on letting users decide what's true rather than relying on experts to guide the way.
