Dario Amodei, CEO of artificial intelligence firm Anthropic, has publicly rejected a demand from the US Department of Defense to provide the military with unrestricted access to its advanced AI systems. The refusal comes less than 24 hours before the deadline set by Defense Secretary Pete Hegseth, which expires on Friday at 5:01 PM.

In a statement released on Thursday, Amodei said he “cannot in good conscience accede to [the Pentagon’s] request.” The dispute centres on two specific use cases: mass surveillance of American citizens and the deployment of fully autonomous weapons systems with no human oversight. The Pentagon maintains it should be able to use Anthropic's technology for all lawful purposes, without restrictions imposed by a private company.

Ultimatum and Conflicting Threats

The Department of Defense has threatened to force Amodei’s compliance by either designating Anthropic a supply chain risk—a label typically applied to foreign adversaries—or by invoking the Defense Production Act (DPA). The DPA grants the president authority to compel companies to prioritise production for national defence.

Amodei highlighted the contradiction in these positions, noting, “One labels us a security risk; the other labels Claude as essential to national security.” He emphasised that while it is the Department's right to choose its contractors, he hopes it will reconsider given “the substantial value that Anthropic’s technology provides to our armed forces.”

Anthropic's Stance and Market Position

Anthropic is currently the only frontier AI lab with systems certified for use with classified military information. However, the Department of Defense is reportedly preparing a rival firm, xAI, for a similar role.

“Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place,” Amodei said. Should the Pentagon decide to terminate the relationship, Anthropic has pledged to ensure a “smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”

Broader Context and Industry Implications

This confrontation underscores the growing ethical and operational tensions between the US government and leading AI developers. Amodei frames the debate as a conflict between unfettered technological deployment and the preservation of democratic safeguards, warning that in some cases AI “can undermine, rather than defend, democratic values.”

The outcome of this standoff is likely to set a significant precedent for how other AI companies, such as OpenAI and Google DeepMind, negotiate terms for military and governmental use of their most powerful systems in the future.