In recent news, AI startup Anthropic, known for developing the Claude family of large language models, has been accused by multiple websites of disregarding their anti-scraping protections. Freelancer and iFixit have raised concerns over Anthropic’s alleged behaviour, claiming that the company’s web crawler has been excessively active on their sites.
Freelancer’s complaints
Matt Barrie, CEO of Freelancer, has stated that Anthropic’s ClaudeBot is “the most aggressive scraper by far.” Barrie said the crawler visited Freelancer’s website 3.5 million times within four hours, causing significant disruption. This traffic volume is reportedly “about five times the volume of the number two” AI crawler. Barrie noted that the aggressive scraping degraded the site’s performance and hurt revenue. After initially trying to refuse the bot’s access requests, Freelancer ultimately blocked Anthropic’s crawler entirely to prevent further issues.
iFixit’s experience
“Hey @AnthropicAI: I get you're hungry for data. Claude is really smart! But do you really need to hit our servers a million times in 24 hours? You're not only taking our content without paying, you're tying up our devops resources. Not cool.”
— Kyle Wiens (@kwiens) July 24, 2024
Kyle Wiens, CEO of iFixit, echoed similar concerns. Wiens said on the social media platform X (formerly Twitter) that Anthropic’s bot hit iFixit’s servers one million times within 24 hours. This high volume of requests put considerable strain on iFixit’s resources; the high-traffic alarms the team had set were tripped by Anthropic’s activity, waking them at 3 AM. The situation improved only after iFixit specifically disallowed Anthropic’s bot in its robots.txt file.
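For readers unfamiliar with the mechanism, disallowing a crawler comes down to a few plain-text directives served from the site root. The exact rules iFixit published are not reproduced here; the snippet below is a minimal, hypothetical sketch that blocks Anthropic’s ClaudeBot user agent while leaving other crawlers unaffected:

```
# Hypothetical robots.txt sketch (not iFixit's actual file)
# Block the crawler that identifies itself with the ClaudeBot user agent from the entire site
User-agent: ClaudeBot
Disallow: /

# Leave all other crawlers unrestricted (an empty Disallow permits everything)
User-agent: *
Disallow:
```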
This isn’t the first time an AI company has been accused of ignoring the Robots Exclusion Protocol, or robots.txt. Back in June, Wired reported that AI firm Perplexity had been crawling its website despite the presence of a robots.txt file, which instructs web crawlers which pages they can and cannot access. Adherence to robots.txt is voluntary, however, and badly behaved bots often simply ignore it. After Wired’s report, startup TollBit revealed that other AI firms, including OpenAI and Anthropic, had also bypassed robots.txt signals.
Anthropic’s response and ongoing issues
Anthropic has responded to these accusations, telling The Information that it respects robots.txt and that its crawler “respected that signal when iFixit implemented it.” The company says it aims for minimal disruption by being thoughtful about how quickly it crawls the same domains, and that it is investigating the issue to ensure compliance.
AI firms frequently use web crawlers to collect content to train their generative AI technologies. However, this practice has led to multiple lawsuits from publishers accusing these firms of copyright infringement. Companies like OpenAI have started forming partnerships with content providers to mitigate the risk of further legal action. OpenAI’s content partners include News Corp., Vox Media, the Financial Times, and Reddit.
Wiens from iFixit is willing to discuss a potential licensing agreement with Anthropic, suggesting that a formal deal could benefit both parties. This approach could pave the way for a more collaborative relationship between content providers and AI developers, reducing the friction caused by unauthorised scraping activities.