Using copyright-protected materials like novels, music, images, and articles to train AI models has sparked heated debate. Creatives are fighting to protect their rights, viewing the trend as a potential threat to the future of creativity. Over 30,000 individuals, including high-profile writers and artists, signed the widely circulated Statement on AI Training, highlighting these concerns.
On the other hand, AI companies argue for broader freedoms to use such materials to drive innovation. Microsoft’s CEO Satya Nadella compared AI training to studying from a textbook, advocating for free access to data for model training. In this tug-of-war, the UK government is tasked with finding a middle ground in legislation to address these competing interests.
The UK’s current stance on AI and copyright
The UK has so far adopted a cautious approach to intellectual property issues related to AI. The Copyright, Designs and Patents Act 1988 (CDPA), crafted long before AI’s rise, remains the primary legislation. It grants copyright holders control over the reproduction and distribution of their works. However, the law does not yet explicitly address the use of copyrighted content for AI training.
Currently, the UK prohibits the unauthorised use of protected works for AI training intended for commercial purposes. This contrasts with the EU, where such use is allowed unless creators opt out, and the US, where developers can claim “fair use.” Despite these restrictions, enforcement is tricky. AI training processes are opaque, making it difficult to identify whether specific works have been used. Even if a rights holder proves their work was involved, jurisdictional and technical challenges further complicate legal proceedings.
A notable case is Getty Images v Stability AI, in which Getty alleges that Stability AI used its images without permission for training and generated outputs bearing its watermark. The trial, set for June 2025, will provide crucial insights into how these issues might be resolved in the UK.
Similar disputes are underway across the Atlantic. The New York Times has sued OpenAI in the US, demanding that AI models trained on its content be destroyed. These cases could set significant precedents, shaping how AI companies interact with copyrighted materials.
Anticipated changes in UK policy
The UK government is expected to clarify its stance in the forthcoming Artificial Intelligence Opportunities Action Plan. Prime Minister Sir Keir Starmer has indicated that the plan will include measures ensuring publishers retain control over their content and are compensated for its use in AI training. This follows a 2021 debate in Parliament sparked by Labour MP Kevin Brennan’s proposed bill. Although it did not progress, the bill pushed for transparency and fair remuneration for creators.
In 2022, the government’s AI and IP consultation suggested introducing an exception to allow data mining for AI training, provided creators receive safeguards such as subscription-based access to their works. Meanwhile, licensing deals between AI firms and publishers, such as agreements involving OpenAI and media outlets like the Financial Times, have pre-empted legal reform. These deals are reportedly worth tens of millions of pounds annually and could serve as a model for future collaborations.
Broader implications of reform
Potential reforms could reshape the UK’s intellectual property laws. Historically, changes to copyright laws have been incremental, but the challenges posed by AI may require more substantial adjustments. The upcoming Action Plan is expected to provide much-needed clarity.
The UK’s approach will also be closely watched post-Brexit. The EU has recently introduced its AI Act, which focuses on transparency and accountability in AI systems. Whether the UK aligns with or diverges from this framework could signal its broader direction on AI governance. Whatever the case, these developments will profoundly affect both creators and innovators, shaping the future of AI and copyright in the UK.