
AI startup Anthropic is accused of bypassing anti-scraping rules

Websites accuse AI startup Anthropic of bypassing anti-scraping protocols, causing disruptions and sparking debates over compliance and licensing.

In recent news, multiple websites have accused AI startup Anthropic, known for developing the Claude large language models, of disregarding their anti-scraping protocols. Freelancer and iFixit have raised concerns over Anthropic’s alleged behaviour, claiming that the company’s web crawler has been excessively active on their sites.

Freelancer’s complaints

Matt Barrie, CEO of Freelancer, has described Anthropic’s ClaudeBot as “the most aggressive scraper by far.” Barrie said the crawler visited Freelancer’s website 3.5 million times within four hours, causing significant disruption; this traffic volume is reportedly “about five times the volume of the number two” AI crawler. Barrie noted that the aggressive scraping has hurt the site’s performance and revenue. After initially trying to refuse the bot’s access requests, Freelancer blocked Anthropic’s crawler entirely to prevent further issues.

iFixit’s experience

Kyle Wiens, CEO of iFixit, echoed similar concerns. Wiens mentioned on social media platform X (formerly Twitter) that Anthropic’s bot hit iFixit’s servers one million times within 24 hours. This high volume of requests put considerable strain on iFixit’s resources, triggering the team’s high-traffic alarms and waking staff at 3 AM. The situation improved only after iFixit specifically disallowed Anthropic’s bot in its robots.txt file.

This isn’t the first time an AI company has been accused of ignoring the Robots Exclusion Protocol, or robots.txt. Back in June, Wired reported that AI firm Perplexity had been crawling its website despite the presence of a robots.txt file, which instructs web crawlers on which pages they can and cannot access. Adherence to robots.txt is voluntary, however, and bad bots often ignore it entirely. After Wired’s report, startup TollBit revealed that other AI firms, including OpenAI and Anthropic, have also bypassed robots.txt signals.
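As a minimal sketch of how these signals work, the rules below block one named crawler site-wide while leaving all other agents unrestricted. The user-agent names and paths are illustrative, not iFixit’s actual file; Python’s standard-library robots.txt parser shows how a compliant crawler would interpret them.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: deny one crawler everywhere, allow everyone else.
ROBOTS_TXT = """\
User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks can_fetch() before requesting a page.
print(parser.can_fetch("ClaudeBot", "https://example.com/guide"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/guide"))  # True
```

The catch, as the incidents above show, is that nothing in the protocol enforces this check: it only constrains crawlers that choose to consult the file.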

Anthropic’s response and ongoing issues

Anthropic has responded to these accusations, telling The Information that it respects robots.txt and that its crawler “respected that signal when iFixit implemented it.” The company said it aims for minimal disruption by being thoughtful about how quickly it crawls the same domains, and that it is investigating the issue to ensure compliance.

AI firms frequently use web crawlers to collect content to train their generative AI technologies. However, this practice has led to multiple lawsuits from publishers accusing these firms of copyright infringement. Companies like OpenAI have started forming partnerships with content providers to mitigate the risk of further legal action. OpenAI’s content partners include News Corp., Vox Media, the Financial Times, and Reddit.

Wiens from iFixit is willing to discuss a potential licensing agreement with Anthropic, suggesting that a formal deal could benefit both parties. This approach could pave the way for a more collaborative relationship between content providers and AI developers, reducing the friction caused by unauthorised scraping activities.
