US Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning to discuss the military's use of the company's Claude AI system. The urgent summons follows tensions over a $200 million contract signed last summer, with the Department of Defense threatening to label Anthropic a "supply chain risk."
This designation, typically applied to foreign adversaries, would be triggered by Anthropic's refusal to allow its technology to be used for the mass surveillance of American citizens and for developing weapons systems that operate without human control. A source familiar with the matter told Axios that Secretary Hegseth is presenting Amodei with an ultimatum: comply with the Pentagon's requirements or face being cut off.
Contract at Risk After Operational Use
Claude was reportedly used during the January 3 special operations raid that led to the capture of Venezuelan President Nicolás Maduro, an event that brought the underlying disagreements between Anthropic and the Pentagon into public view. The technology's successful use in a high-stakes operation underscores its value to military planners, making a potential rupture all the more consequential.
It remains unclear if the Pentagon's threat is a negotiating tactic, as replacing Anthropic and its advanced AI systems would represent a significant logistical and technical undertaking for the Department of Defense. However, the stakes are substantial: an official "supply chain risk" designation would immediately void Anthropic's existing contract and compel all other Pentagon partners to cease using Claude entirely.
The Stakes of the "Supply Chain Risk" Label
The "supply chain risk" label is a powerful tool within US procurement, designed to mitigate threats from untrusted vendors, particularly those with foreign ties. Applying it to a domestic AI leader like Anthropic would be unprecedented and could reshape the defence industry's relationship with commercial tech firms. The outcome of Tuesday's meeting could therefore define the boundaries of ethical AI development for military applications.
For Anthropic, a company founded with a strong emphasis on AI safety, acquiescing to the Pentagon's demands on surveillance and lethal autonomous weapons would represent a fundamental breach of its stated principles. The firm now faces a stark choice between maintaining its ethical stance and preserving a lucrative government contract that offers deep integration into national security infrastructure.
Broader Implications for AI and Defence
The confrontation highlights the growing friction between the rapid innovation of the commercial AI sector and the stringent, mission-critical requirements of military defence. Other AI companies with government contracts will be watching closely, as the resolution will signal how much leverage the Pentagon holds in dictating the use of privately developed advanced technologies.
The Department of Defense has not issued an official statement ahead of the meeting. The discussions are expected to centre on finding a potential compromise that addresses national security needs while respecting the contractual and ethical red lines established by Anthropic's leadership.