OpenAI has finalised an agreement with the US Department of War for the military's use of its artificial intelligence models, company CEO Sam Altman confirmed. The deal was reached after rival AI lab Anthropic refused an ultimatum over the use of its frontier model, Claude, in mass domestic surveillance and fully autonomous weapons systems.

In an "Ask Me Anything" session on social media platform X, Altman revealed the Pentagon contract was negotiated quickly in "an attempt to de-escalate the situation." He acknowledged the process had been "rushed" and admitted the "optics don't look good" for his company.

Negotiations and Industry Tensions

Altman stated that OpenAI had been in discussions with the Department of War for "many months" concerning non-classified work before "things shifted into high gear on the classified side." He said the defence department was "flexible on what we needed," and OpenAI wants "to support them in their very important mission."

When asked why the Pentagon chose OpenAI over Anthropic, Altman declined to speak for his competitor but offered some speculation. "First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one," he wrote. He suggested negotiations may have deteriorated rapidly, and that OpenAI and the Department of War simply "got comfortable with the contractual language."

Altman added, "I think Anthropic may have wanted more operational control than we did."

Ethical Red Lines and Democratic Concerns

The OpenAI CEO outlined that his company has "three redlines" governing the deal, but noted these could change as the technology evolves and "new risks" emerge. He emphasised a critical distinction between corporate and governmental responsibility.

"But a really important point: we are not elected. We have a democratic process where we do elect our leaders," Altman wrote. "We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas."

He illustrated this by contrasting content moderation with national security: "Seems fine for us to decide how ChatGPT should respond to a controversial question. But I really don't want us to decide what to do if a nuke is coming towards the US."

Potential Applications and Warnings

Altman identified two primary areas where AI could assist national defence: cybersecurity and biosecurity. He highlighted the US's "ability to defend against major cyber attacks," particularly an attack targeting the national electrical grid. On biosecurity, he stated, "I do not think we are currently set up well enough to detect and respond to a novel pandemic threat."

Despite finalising the deal, Altman expressed concern about the broader situation, writing on X that the "current path things are on is dangerous for Anthropic, healthy competition, and the US." He claimed OpenAI negotiated to ensure "similar terms would be offered to all other AI labs" and asked for "some empathy" for the Department of War's mission.

The CEO concluded that the company's gamble would be judged by results: "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses... If not, we will continue to be characterized as rushed and uncareful." He added he sees "promising signs" for the outcome.