Steve Bannon, the former White House chief strategist under President Donald Trump, has publicly endorsed artificial intelligence company Anthropic's decision to reject a deal with the US Department of Defense. Bannon made the remarks during the Semafor World Economy Summit in Washington, D.C., on Thursday.

His comments come amid a legal and contractual dispute between the AI firm and the Pentagon that began in February. The clash centred on negotiations over the military's use of Anthropic's frontier model, Claude.

Bannon Warns of Unchecked Military AI

"I think Anthropic had it right," Bannon stated, arguing that allowing the Pentagon to operate Claude with minimal guardrails would be "too dangerous." He has been a vocal critic of developing superintelligent AI without sufficient controls.

Bannon emphasised the critical need for transparency regarding how weapons manufacturers intend to use advanced AI systems. "The central thing is what is happening in the weapons lab with AI," he said. "We have no earthly idea."

The Pentagon Contract Dispute

The disagreement escalated when Defense Secretary Pete Hegseth pressured Anthropic to accept the Pentagon's terms of use or risk losing its military contract. In a public blog post, Anthropic's CEO, Dario Amodei, explained the company's refusal, stating it "cannot in good conscience accede" to the requests.

Amodei cited specific ethical concerns over two potential applications: enabling mass domestic surveillance and developing fully autonomous weapons systems.

Fallout and Public Reaction

The Pentagon responded swiftly, effectively blacklisting Anthropic by labelling it a supply chain risk and barring federal agencies from using its technology. In March, Anthropic filed a lawsuit against Secretary Hegseth, the Pentagon, the Executive Office of the President, and other federal agencies over these actions.

Subsequently, the Pentagon secured a deal with Sam Altman's rival firm, OpenAI. Despite the business and legal repercussions, Anthropic's stance resonated with the public. Its Claude model temporarily overtook ChatGPT in the App Store rankings, and the company received significant praise for its ethical position.

Anthropic's New Model and Security Pause

More recently, Anthropic announced a new AI model, Mythos, but paused its general release over cybersecurity concerns. The company said the model's "large increase in capabilities" drove the decision to withhold it from wide availability.

Instead, Anthropic is using the Mythos preview "as part of a defensive cybersecurity program with a limited set of partners," according to the model's system card.