In a landmark collaboration, major artificial intelligence firms, including OpenAI, Microsoft, Google, and Meta, have committed to ensuring their technologies do not facilitate child exploitation. The pledge is part of an initiative spearheaded by the child protection group Thorn and the responsible technology advocacy group All Tech Is Human.
The commitments from these AI giants mark an unprecedented move across the tech industry to shield children from sexual abuse as generative AI technologies evolve. According to Thorn, these steps are crucial in countering the severe threats posed by the potential misuse of AI. “This sets a groundbreaking precedent for the industry and represents a significant leap in efforts to defend children from sexual abuse,” a spokesperson for Thorn stated.
The initiative focuses on preventing the creation and dissemination of sexually explicit material involving children across social media platforms and search engines. Thorn reports that in 2023 alone, over 104 million files suspected of containing child sexual abuse material (CSAM) were identified in the US. Without a united effort, the proliferation of generative AI could exacerbate this issue, overwhelming law enforcement agencies already struggling to pinpoint actual victims.
Strategic approach to safety
On Tuesday, Thorn and All Tech Is Human released a comprehensive paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse.” This document offers strategies and recommendations for entities involved in the creation of AI tools—such as developers, social media platforms, search engines, and hosting companies—to take proactive measures against the misuse of AI in harming children.
One key recommendation urges companies to carefully vet the datasets used to train AI models, advocating for the exclusion of datasets that contain not only CSAM but also adult sexual content. This caution stems from generative AI’s tendency to blend distinct concepts, which can produce harmful outputs when such material is combined.
The paper also highlights the emerging challenge of AI-generated CSAM, which complicates the identification of actual abuse victims by adding to the “haystack problem”—the overwhelming volume of content that law enforcement must sift through.
Rebecca Portnoff, Vice President of Data Science at Thorn, emphasised the proactive potential of this initiative in an interview with the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees,” she explained.
Some companies have already begun implementing measures such as separating images, videos, and audio depicting children from datasets that include adult content, to prevent the two from being combined during training. Some are also adopting watermarks to identify AI-generated content, although these are not foolproof, as they can be removed.
This collective effort marks a significant stride towards safer digital environments for children, leveraging the power of AI for protection rather than peril.