OpenAI has reached an agreement with the United States Department of Defense to allow the use of its artificial intelligence models within the department's classified networks. The deal, announced late on Friday by CEO Sam Altman, includes explicit technical safeguards addressing key ethical concerns that recently caused a high-profile rupture between the Pentagon and OpenAI's rival, Anthropic.

The agreement comes after the Department of Defense, referred to under the current administration as the Department of War, pushed AI firms to allow their models to be used for "all lawful purposes." Anthropic resisted, seeking to prohibit applications in mass domestic surveillance and fully autonomous weapon systems.

Anthropic's Stand and the Fallout

In a statement released on Thursday, Anthropic CEO Dario Amodei argued that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." The position garnered significant support within the industry, with more than 60 OpenAI employees and 300 Google employees signing an open letter backing Anthropic.

Following the failed negotiations, President Donald Trump criticised the "Leftwing nut jobs at Anthropic" in a social media post and directed federal agencies to phase out use of the company's products within six months. Secretary of Defense Pete Hegseth escalated the conflict, designating Anthropic as a supply-chain risk. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Hegseth stated.

Anthropic responded that it had "not yet received direct communication from the Department of War or the White House" but would "challenge any supply chain risk designation in court."

OpenAI's "Safety Principles" and Technical Safeguards

In a post on the social media platform X, Altman outlined the core protections embedded in OpenAI's new contract. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Altman confirmed that OpenAI "will build technical safeguards to ensure our models behave as they should" and will deploy engineers to work alongside Pentagon personnel to oversee model safety. According to a report by Fortune's Sharon Goldman, Altman told employees the government will allow OpenAI to build its own "safety stack," and that "if the model refuses to do a task, then the government would not force OpenAI to make it do that task."

Calls for De-escalation and Industry-Wide Terms

Striking a conciliatory tone, Altman expressed a desire to see the situation de-escalate "away from legal and governmental actions and towards reasonable agreements." He revealed that OpenAI has asked the Department of Defense to offer the same contractual terms to all AI companies. "We think everyone should be willing to accept," Altman added.

The announcement was swiftly followed by reports that the U.S. and Israeli governments had begun bombing Iran and that Trump had called for the overthrow of the Iranian government, underscoring the volatile geopolitical context in which military AI agreements are being forged.