
Internal chats expose Meta’s approach to AI training data

Court filings reveal Meta staff debated using copyrighted materials for AI training, discussing legal risks and alternative data sources like Libgen.

Meta employees debated for years whether to use copyrighted materials to train artificial intelligence (AI) models, even as acquiring such content raised legal concerns. According to newly unsealed court documents, internal discussions among staff reveal how Meta may have obtained and used copyrighted books and other materials without clear permission.

The documents were filed as part of the lawsuit Kadrey v. Meta, one of several ongoing copyright disputes involving AI in the U.S. legal system. The plaintiffs, including authors Sarah Silverman and Ta-Nehisi Coates, argue that Meta’s use of protected works in AI training is unlawful. Meta, for its part, insists that its actions fall under “fair use.”

Earlier court filings claimed that Meta CEO Mark Zuckerberg approved training AI models on copyrighted content and that the company had halted negotiations with book publishers over licensing deals. The latest documents, which include internal chat logs, provide further insight into how Meta’s AI team may have approached this controversial issue.

Staff conversations reveal concerns and strategies

One conversation from February 2023 shows Meta researchers openly discussing acquiring copyrighted books for AI training despite potential legal risks. According to the filings, Xavier Martinet, a Meta research engineer, suggested a bold approach: “My opinion would be (in the line of ‘ask forgiveness, not for permission’): we try to acquire the books and escalate it to execs so they make the call.”

Instead of striking licensing agreements with publishers, Martinet proposed purchasing e-books at retail prices to build a dataset for AI training. When a colleague pointed out that using unauthorised materials could lead to legal trouble, Martinet responded that many AI startups already used pirated books. “Worst case: we find out it is finally okay, while a gazillion startups just pirated tons of books on BitTorrent,” he wrote.

Melanie Kambadur, a senior manager for Meta’s Llama AI model research team, acknowledged that using copyrighted material required approval but noted that Meta’s legal team had become “less conservative” in approving training data than before. “We need to get licenses or approvals on publicly available data still,” she said, according to the filings. “The difference now is we have more money, more lawyers, more business development help, the ability to fast track/escalate for speed, and lawyers are being a bit less conservative on approvals.”

Another key discussion highlighted in the court documents involves Libgen, a website known for offering free access to copyrighted books. Meta employees considered using Libgen as a training data source despite its reputation for copyright infringement. Libgen has faced multiple lawsuits, shutdown orders, and hefty fines.

In an internal email to Meta AI Vice President Joelle Pineau, Sony Theakanath, a director of product management at Meta, described Libgen as “essential to meet SOTA numbers across all categories,” referring to maintaining state-of-the-art (SOTA) AI performance. Theakanath also suggested ways to reduce legal risks, such as filtering out content “clearly marked as pirated/stolen” and not publicly disclosing the use of Libgen data. “We would not disclose the use of Libgen datasets used to train,” he wrote.

These internal discussions shed light on how Meta approached sourcing training data for its AI models. The lawsuit is ongoing, and the outcome could have significant implications for how AI companies use copyrighted materials in the future.

