Anthropic, the US artificial intelligence company, has publicly accused three leading Chinese AI labs of creating over 24,000 fake accounts to illicitly extract capabilities from its Claude AI model. The alleged activity, involving more than 16 million exchanges, comes as Washington debates the enforcement of export controls on advanced AI chips to China.

The targeted companies—DeepSeek, Moonshot AI, and MiniMax—are accused of using a technique called "distillation" to improve their own models. According to Anthropic, the labs specifically targeted "Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding."

Scale and Scope of Alleged Attacks

The scale of the alleged extraction varied between the firms. Anthropic tracked more than 150,000 exchanges from DeepSeek, which appeared focused on improving foundational logic and alignment, particularly on generating censorship-safe alternatives to policy-sensitive queries.

Moonshot AI allegedly generated over 3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, and computer vision. MiniMax's activity was the most extensive, with 13 million exchanges focused on agentic coding and tool orchestration. Anthropic stated it observed MiniMax redirecting nearly half its traffic to target the latest Claude model upon its launch.

Links to Broader Geopolitical Tensions

The accusations emerge amid intense debate in the United States over AI chip exports to China. Last month, the Trump administration formally allowed companies like Nvidia to export advanced chips, such as the H200, to China. Critics argue this loosening increases China's computing capacity during a critical phase of the global AI race.

Anthropic directly linked the alleged attacks to this policy debate. "The scale of extraction... requires access to advanced chips," the company stated in a blog post. "Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation."

Box: What is Model Distillation?
Distillation is a common AI training method where a larger, more capable "teacher" model is used to train a smaller, more efficient "student" model. While legitimate for a company's own models, using a competitor's model for this purpose can effectively copy proprietary advancements.
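The mechanics of distillation can be illustrated with a minimal sketch: the student is trained to match the teacher's full output distribution (softened by a temperature) rather than just its top answer. This is an illustrative toy in pure Python, not Anthropic's or any lab's actual pipeline; the function names and values are invented for the example.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimising this pushes the student to mimic the teacher's whole
    output distribution, which is how a teacher's behaviour transfers.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a lower loss
# than one whose outputs diverge from the teacher's.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.1, 2.5, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In a real pipeline the teacher's outputs would come from querying the teacher model at scale, which is why the alleged attacks involved millions of exchanges.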

National Security and Industry Concerns

Beyond commercial competition, Anthropic warned of significant national security risks, arguing that the safety safeguards built into US frontier models do not survive the copying process. "Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely," the company's blog post reads.

The firm pointed to risks of authoritarian governments deploying such AI for "offensive cyber operations, disinformation campaigns, and mass surveillance," a threat amplified if the models are open-sourced.

Expert Reaction and Industry Context

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank and co-founder of CrowdStrike, told TechCrunch the allegations were unsurprising. "It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact," Alperovitch said. He argued this should provide "compelling reasons to refuse to sell any AI chips" to the implicated companies.

The allegations follow similar concerns from OpenAI, which sent a memo to US House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products. DeepSeek first gained significant attention a year ago with its open-source R1 reasoning model, which nearly matched US frontier lab performance at a fraction of the cost. The company is expected to soon release DeepSeek V4, reportedly capable of outperforming both Claude and ChatGPT in coding.

Next Steps and Official Responses

Anthropic stated it will continue investing in defences to make such attacks harder to execute and easier to identify. However, it called for "a coordinated response across the AI industry, cloud providers, and policymakers."

TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot AI for comment. The outcome of this dispute is likely to influence ongoing policy discussions in Washington regarding technology transfer and export controls to China.