Anthropic, the artificial intelligence company, has issued a copyright takedown notice to remove its own leaked source code from GitHub. The leak, which occurred on Tuesday, involved a segment of the source code for its AI coding assistant, Claude Code.

The company confirmed the action in a statement to Business Insider. "We issued a DMCA takedown against one repository hosting leaked Claude Code source code and its forks," an Anthropic spokesperson said, referring to the Digital Millennium Copyright Act.

Irony in Legal Protection

The move carries significant irony, as Anthropic and other major AI firms like OpenAI and Google are currently defending multiple lawsuits alleging they used copyrighted material without permission to train their large language models. Authors, artists, and publishers have used copyright law to seek accountability and compensation.

In a prominent case, Anthropic agreed in September to a $1.5 billion settlement in a class-action lawsuit brought by authors. The plaintiffs, including Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged the company downloaded pirated books from "shadow libraries" to train its Claude models.

Other legal actions include a June lawsuit from Reddit for scraping user content without authorisation, and a case filed last month by Universal Music Group, Concord, and ABKCO alleging the illegal downloading of over 20,000 copyrighted songs for model training.

Limited Impact of Leak

Cybersecurity analysis suggests the leak's impact may be limited. Paul Price, founder of the ethical hacking firm Code Wall, assessed that the exposed code did not contain the company's most critical assets. "It's more embarrassing than detrimental. Most of the real juicy stuff is in their internal source models and that wasn't leaked," he told Business Insider.

Price explained that the leak pertained to the software "harness"—the infrastructure connecting large language models to their operational context. "Claude Code is one of the best-designed agent harnesses out there, and now we can see how they approach the hard problems," he noted, adding it could provide useful intelligence for competitors.
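In rough terms, an agent harness is the loop of software around a model: it sends the model a prompt, executes any tool the model requests, feeds the result back, and repeats until the model produces a final answer. The sketch below illustrates that pattern only; the function and tool names are hypothetical and do not come from the leaked Claude Code source.

```python
# Illustrative sketch of an agent "harness": the plumbing that connects a
# language model to its tools and context. All names here are invented for
# illustration; none are taken from Anthropic's code.

def fake_model(messages):
    """Stand-in for an LLM API call. It requests a tool on the first turn,
    then returns a final answer once a tool result appears in the context."""
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "text": "2 files found"}
    return {"type": "tool_call", "tool": "list_files", "args": {"path": "."}}

# Tool registry: maps tool names to plain Python callables.
TOOLS = {"list_files": lambda path: ["main.py", "README.md"]}

def run_agent(user_prompt, model=fake_model, max_turns=5):
    """The harness loop: call the model, run requested tools, feed results
    back into the conversation, and stop when the model gives a final reply."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = model(messages)
        if reply["type"] == "final":
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return None  # give up after max_turns to avoid an infinite loop
```

The value competitors might extract from a leak, per Price's comments, lies less in any single function and more in design decisions like these: how context is assembled, how tool failures are handled, and when the loop stops.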

Broader Implications for AI Industry

The incident underscores a central paradox in the rapid development of AI technology: the tools that accelerate product development also facilitate the instant leakage and replication of sensitive information. Anthropic stated it is implementing new measures to prevent future leaks. "We're rolling out measures to prevent this from happening again," a company spokesperson said.

The episode highlights the increasingly tangled relationship between AI companies and intellectual property law, with firms acting simultaneously as defendants in cases over training data and as plaintiffs protecting their own proprietary code.