OpenAI has published a detailed defence of its new agreement with the US Department of Defense, acknowledging the deal was "definitely rushed" but necessary to de-escalate tensions between the government and the AI industry. The announcement follows the collapse of talks between the Pentagon and rival firm Anthropic last Friday.
In a blog post, OpenAI outlined strict prohibitions on how its models can be used, banning applications in mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions like social credit systems. The company asserts its "multi-layered approach" of technical and contractual safeguards goes beyond the usage policies relied upon by competitors.
Contractual Safeguards and Deployment Limits
OpenAI stated it retains "full discretion over our safety stack" and will deploy its models only via a cloud API, with security-cleared company personnel kept in the loop. This architecture, it argues, physically prevents integration into weapons hardware. "Deployment architecture matters more than contract language," wrote Katrina Mulligan, OpenAI's head of national security partnerships, on LinkedIn.
The company's blog post directly addressed public scepticism, stating: "We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it." The remark follows Anthropic publicly drawing "red lines" against uses in autonomous weapons and mass surveillance, limits similar to those OpenAI now cites.
Criticism and CEO's Admission
Despite these assurances, the deal faced immediate criticism. Techdirt editor Mike Masnick argued the contract language, which states data collection will comply with Executive Order 12333, "absolutely does allow for domestic surveillance." He described the order as a legal basis the NSA has used to capture communications of US persons from outside the country.
On social media platform X, OpenAI CEO Sam Altman conceded that the process was rushed and that the "optics don't look good," an admission that came amid significant backlash. The controversy contributed to Anthropic's Claude chatbot overtaking ChatGPT in Apple's App Store over the weekend. Altman framed the decision as a gamble: if it leads to de-escalation, OpenAI will "look like geniuses"; if not, it will be seen as "rushed and uncareful."
Context of a Fractured AI Landscape
The agreement was reached swiftly after President Donald Trump directed federal agencies to stop using Anthropic's technology, following the breakdown of its Pentagon negotiations. Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, prompting a six-month transition period away from its AI.
OpenAI's move positions it as a willing government partner at a time of heightened scrutiny over AI's role in national security. The company's public defence underscores the intense commercial and ethical pressures facing AI firms as they navigate contracts with defence and intelligence agencies.