OpenAI Chief Executive Sam Altman has publicly opposed the US Department of Defence using the Defence Production Act to compel AI companies into military contracts. His comments, made during a CNBC interview on Friday, come amid a tense standoff between the Pentagon and AI firm Anthropic over the use of its frontier model, Claude.
Altman emphasised the necessity of collaboration between artificial intelligence developers and the military, but drew a line at the potential use of the 1950s-era law. "I don't personally think the Pentagon should be threatening DPA against these companies," Altman stated. He added that companies choosing to work with the Pentagon are undertaking important work, provided they comply with the law and respect established "red lines."
Anthropic's "Red Lines" and Pentagon Ultimatum
The conflict centres on Anthropic CEO Dario Amodei's refusal to allow Claude to be used for what he defines as two unacceptable purposes: mass domestic surveillance and fully autonomous weapons. In a memo posted to the company's website, Amodei said he "cannot in good conscience accede to their request."
This stance prompted a senior Pentagon official to tell Business Insider that Defence Secretary Pete Hegseth is prepared to invoke the Defence Production Act against Anthropic. The act would compel the company to prioritise government contracts, and non-compliance would risk blacklisting from future defence work, a significant financial threat.
OpenAI's Evolving Military Stance and Internal Deal
Despite his criticism of the Pentagon's approach towards Anthropic, Altman confirmed OpenAI is actively pursuing its own agreement with the Defence Department. According to a note to staff reported by The Wall Street Journal, this deal would allow OpenAI's models "to be deployed in classified environments" in a manner consistent with the company's principles.
Altman described the effort as meant to "help de-escalate things," referencing the heated exchanges between Anthropic and defence officials. OpenAI's policies on military work have shifted notably; in 2024, it removed a ban on "military and warfare" applications from its usage policies and appointed former National Security Agency Director Paul Nakasone to its board.
A Competitive Field for Government AI
OpenAI, Anthropic, xAI, and Google are all competing to become the US government's preferred AI provider. While several companies have been cleared to handle government information, only xAI's Grok model has so far received clearance to process classified data within the Pentagon.
Altman expressed a degree of solidarity with his competitor, stating, "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety." His comments underscore the complex balance AI leaders are attempting to strike between ethical safeguards, commercial competition, and national security demands.