Anthropic, the US-based artificial intelligence company, has publicly accused three of China's leading AI labs of conducting "industrial-scale campaigns" to illicitly copy its flagship Claude model. The company said on Monday that DeepSeek, MiniMax, and Moonshot AI had used its technology without authorisation, through a practice known as distillation, to accelerate their own development.

The alleged operation involved approximately 24,000 fraudulent Claude accounts that generated over 16 million exchanges in violation of Anthropic's terms of service and regional access restrictions. Claude is not commercially available in China, but the company says its rivals found workarounds.

Sophisticated and Rapid Campaigns

Anthropic provided detailed findings, claiming the campaigns were "growing in intensity and sophistication." The company stated it detected MiniMax's campaign while it was still active, offering a rare glimpse into a competitor's real-time operations. "When we released a new model during MiniMax's active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system," Anthropic said in its statement.

Among the specific allegations, Anthropic said DeepSeek sought to create "censorship-safe alternatives to policy-sensitive queries." The company warned that the "window to act is narrow, and the threat extends beyond any single company or region," calling for coordinated action among industry players and policymakers.

A Wider Industry Concern

This accusation is the latest in a series of similar claims from major US AI firms. In January 2025, OpenAI suggested DeepSeek may have "inappropriately" used its outputs. Earlier this month, Google disclosed it had identified an increase in similar "model extraction attempts or 'distillation attacks.'"

Distillation is a legitimate training technique in which a smaller or less capable model learns from the outputs of a more advanced one. However, Anthropic and others allege that their Chinese competitors are improperly using the technique to steal proprietary work. "Competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost," Anthropic stated.
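For readers unfamiliar with the technique, the sketch below shows distillation in its textbook form (Hinton et al., 2015), where a small "student" network is trained to match the softened output distribution of a larger "teacher." The models, temperature, and data are illustrative stand-ins; in the API-based scenario this article describes, the teacher signal would be generated text rather than raw logits.

```python
# Minimal sketch of classic knowledge distillation. All shapes, the
# temperature, and the data are illustrative assumptions, not details
# of any lab's actual pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 2.0  # softening both distributions exposes more of the teacher's signal

teacher = nn.Linear(16, 10)  # stand-in for a large, capable "teacher" model
student = nn.Linear(16, 10)  # smaller "student" trained to imitate the teacher
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(32, 16)  # stand-in for real input data
    with torch.no_grad():
        teacher_logits = teacher(x)  # via an API, this would be sampled text instead
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions;
    # the temperature**2 factor restores gradient scale (standard practice)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Raising the temperature above 1 reveals the teacher's relative preferences among less likely answers, which is much of what makes distillation so sample-efficient compared with training from scratch.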

Security Risks and Policy Implications

Beyond commercial concerns, Anthropic highlighted significant security risks. The company argued that models trained through illicit distillation may lack the proper safeguards built into systems like Claude, potentially making them dangerous. CEO Dario Amodei has previously warned that advanced models could, without proper guardrails, assist in building bioweapons.

The incident has reinforced Anthropic's stance on US export controls for advanced chips. The company argued that "restricted chip access limits both direct model training and the scale of illicit distillation." This position contrasts with that of Nvidia CEO Jensen Huang, who has said such restrictions will not curb China's AI progress.

Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to requests for comment from Business Insider, which first reported the allegations.

Countermeasures and Context

In response to the threat, Anthropic says it has developed "behavioral fingerprinting systems" to detect such campaigns, shares threat data with other AI companies, and is building additional countermeasures. The company itself has faced legal challenges over its training data, settling a $1.5 billion class-action lawsuit with authors and publishers last year, though it admitted no wrongdoing.
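Anthropic has not published how the "behavioral fingerprinting" mentioned above works. As a purely hypothetical illustration of the general idea, the sketch below scores accounts on simple usage signals; every field, threshold, and weight is invented for this example and implies nothing about Anthropic's actual detection logic.

```python
# Hypothetical account-scoring heuristic, for illustration only.
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    requests_per_day: float       # sustained request volume
    distinct_prompt_ratio: float  # unique prompts / total prompts (low => templated scraping)
    offpeak_fraction: float       # share of traffic at automated, off-peak hours

def distillation_risk_score(a: AccountStats) -> float:
    """Combine simple signals into a 0-1 risk score (invented weights)."""
    volume_signal = min(a.requests_per_day / 5000, 1.0)
    template_signal = 1.0 - a.distinct_prompt_ratio
    timing_signal = a.offpeak_fraction
    return 0.5 * volume_signal + 0.3 * template_signal + 0.2 * timing_signal

accounts = [
    AccountStats("acct-001", requests_per_day=8000, distinct_prompt_ratio=0.05, offpeak_fraction=0.9),
    AccountStats("acct-002", requests_per_day=40, distinct_prompt_ratio=0.8, offpeak_fraction=0.2),
]
for acct in accounts:
    score = distillation_risk_score(acct)
    print(acct.account_id, "flagged" if score > 0.7 else "ok", round(score, 2))
```

A real system would presumably combine far richer signals, such as prompt content and coordination across accounts, but the underlying problem is the same: separating distillation-style scraping from ordinary heavy use.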

The escalating allegations underscore the fiercely competitive and strategically sensitive nature of the global AI race, where technological advantage is paramount and the rules of engagement remain hotly contested.