TikTok, the popular social media platform owned by ByteDance, has laid off hundreds of content moderators globally as part of a move towards AI-driven moderation. According to Reuters, the job cuts affected approximately 500 employees, primarily in Malaysia. The company, which employs more than 110,000 people worldwide, said the layoffs are part of an ongoing effort to enhance its global content moderation model.
“We are making these changes as part of our ongoing efforts to strengthen our global operating model for content moderation,” a TikTok spokesperson explained. The social media giant currently uses a combination of human moderators and AI systems, with the AI handling around 80% of moderation tasks. This blend of human oversight and machine learning is intended to keep content on the platform in line with community standards.
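To make that hybrid approach concrete, here is a minimal, hypothetical sketch of threshold-based routing: an automated classifier handles the high-confidence cases, and anything ambiguous is escalated to a human review queue. The function names, thresholds, and the toy classifier are illustrative assumptions only, not a description of TikTok's actual system.

```python
# Hypothetical sketch of hybrid AI/human moderation routing.
# Thresholds, names, and the classifier are illustrative assumptions;
# this does not reflect any platform's real implementation.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    decision: str   # "approve", "remove", or "human_review"
    score: float    # estimated probability of a policy violation


def classify_violation_probability(content: str) -> float:
    """Stand-in for an ML model that scores content for violations."""
    # A real system would call a trained classifier here.
    flagged_terms = {"spam", "scam"}
    hits = sum(term in content.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


def route(content: str,
          remove_threshold: float = 0.9,
          approve_threshold: float = 0.1) -> ModerationResult:
    """Auto-handle confident cases; send ambiguous ones to humans."""
    score = classify_violation_probability(content)
    if score >= remove_threshold:
        return ModerationResult("remove", score)       # confident violation
    if score <= approve_threshold:
        return ModerationResult("approve", score)      # confidently benign
    return ModerationResult("human_review", score)     # uncertain: escalate


if __name__ == "__main__":
    for post in ["Nice video!", "Win a free scam prize, total spam"]:
        print(post, "->", route(post))
```

In a setup like this, the share of content that never reaches a human is governed entirely by where the two thresholds sit, which is one way a platform could arrive at a figure like the roughly 80% automated share cited above.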
TikTok’s US$2 billion investment in safety
In 2024, ByteDance plans to invest an estimated US$2 billion in improving its trust and safety efforts. This considerable investment comes amid growing concerns over the spread of harmful content and misinformation on social media platforms. As TikTok continues to expand its reach, the company faces increased scrutiny from governments and regulators worldwide, particularly where the impact of social media on public discourse is under the spotlight.
The decision to reduce its human moderation workforce is part of ByteDance’s commitment to refining its processes and ensuring content safety and compliance. However, the changes have also raised concerns about whether AI alone can effectively monitor the vast and varied content uploaded to TikTok daily.
Instagram faces moderation issues
While TikTok moves towards AI, Instagram, owned by Meta, is facing its own moderation challenges. On Friday, Adam Mosseri, the head of Instagram, revealed that recent issues on the platform, which resulted in user accounts being locked or posts being incorrectly marked as spam, were due to human error rather than flaws in the AI moderation system.
Mosseri explained that the errors were made by human moderators who lacked adequate context when reviewing certain posts and accounts. “They were making decisions without the proper context on how the conversations played out, and that was a mistake,” he said. However, he also acknowledged that the tools the moderators were using were partly to blame. “One of the tools we built broke, so it wasn’t showing them sufficient context,” Mosseri admitted.
Users locked out of accounts
Over the past few days, numerous Instagram and Threads users reported that their accounts were locked for allegedly violating age restrictions, which prohibit users under the age of 13 from having accounts. Despite submitting age verification, many users found that their accounts remained locked. The issue caused widespread frustration, with users feeling unfairly penalised by the miscommunication between moderation tools and human reviewers.
Although Mosseri took responsibility for the moderation errors, Instagram’s PR team offered a slightly different account. The company stated that not all of the problems users encountered were directly linked to human moderation, and said the age verification issue remains under investigation as the platform works to identify the root cause.
As both TikTok and Instagram navigate their respective moderation challenges, it remains clear that social media platforms are struggling to find the right balance between human oversight and AI-driven technology. With these platforms’ growing influence on everyday life, the pressure to get content moderation right is higher than ever.