OpenAI has publicly shared details of its contractual agreement with the US Department of Defense, asserting it contains stronger ethical safeguards than the terms its competitor Anthropic refused, a refusal that led to Anthropic's blacklisting. The move, detailed in a company blog post on Saturday, aims to justify OpenAI's partnership with the military while criticising the government's treatment of its rival.
In the post, OpenAI published specific language from the pact, including clauses that explicitly prohibit the use of its technology for mass domestic surveillance, fully autonomous weapons, or high-stakes decision systems like "social credit" scores. The company stated its agreement employs a "more expansive, multi-layered approach" to safety than any previous deal for classified AI deployments.
Contractual Red Lines and CEO Defence
"We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," the blog post stated. It emphasised that OpenAI retains "full discretion" over its safety systems, deploys via cloud with cleared personnel involved, and maintains "strong contractual protections" on top of existing US law.
CEO Sam Altman addressed public concerns directly in an Ask-Me-Anything session on social media. He argued that while Anthropic pushed for specific contractual prohibitions, OpenAI was comfortable relying on applicable laws. "I think Anthropic may have wanted more operational control than we did," Altman wrote, addressing why his company agreed to partner where its rival would not.
Escalating Dispute with Anthropic
The disclosure follows the Pentagon's decision to blacklist Anthropic and declare it a supply chain risk after the company refused the military's terms for use of its frontier model, Claude. In a statement on Friday, Anthropic vowed to "challenge any supply chain risk designation in court," reaffirming its stance against mass surveillance and autonomous weapons.
OpenAI's blog post argued that Anthropic should not be designated a risk and said it had made this position "clear to the government." The company framed its own deal as an attempt to "de-escalate things between DoD and the US AI labs," advocating for collaboration. "The current state is a very bad way to kick off this next phase," the post read.
Safety Controls and Constitutional Limits
As part of the deal, OpenAI will maintain "full control" over its safety stack and can terminate the contract if the government violates terms—a scenario the company said it does not expect. Altman stated OpenAI would not allow its technology to be used for mass domestic surveillance "because it violates the constitution."
He expressed a deep belief in the democratic process but added a stark personal contingency: if the Constitution were amended to make such surveillance legal, "Maybe I would quit my job." He described being "terrified" of a world where either AI companies usurp government power or the government normalises mass surveillance.
Broader Implications and Public Reaction
The dispute has ignited significant ethical debate about the military's use of AI. OpenAI contends that providing its models to national security defenders offers "the best tools" to manage the new risks AI will introduce. The controversy has spurred a public backlash, with reports of users cancelling ChatGPT subscriptions and Anthropic's Claude app rising to the top of download charts.
It remains unclear whether the Pentagon has offered similar contractual terms to Anthropic or other leading AI firms. Representatives for both OpenAI and Anthropic did not immediately respond to requests for comment from Business Insider.