The Trump administration has abruptly terminated all federal contracts with the artificial intelligence company Anthropic and barred it from future defence work, invoking a rarely used national security provision. The move follows co-founder and chief executive Dario Amodei's refusal to permit the company's AI systems to be deployed for domestic mass surveillance of US citizens or for autonomous armed drones capable of selecting targets without human oversight.
Defence Secretary Pete Hegseth announced the decision on Friday, marking the first time that a supply-chain-risk law designed to counter foreign threats has been applied to a domestic company. The immediate consequence is the cancellation of a Pentagon contract worth up to $200 million. President Donald Trump subsequently directed all federal agencies, in a post on Truth Social, to "immediately cease all use of Anthropic technology."
Anthropic, founded in 2021 by former OpenAI researchers concerned about AI safety, has stated it will challenge the designation in court, calling it "legally unsound." The company has built its identity around a public commitment to developing AI responsibly.
Industry-Wide Safety Pledges Abandoned
The crisis emerges against a backdrop of major AI firms retreating from their own self-imposed safety guidelines. Earlier this week, Anthropic dropped its central safety pledge not to release increasingly powerful AI systems until it was confident they would not cause harm. The move follows OpenAI's removal of the word "safety" from its mission statement, xAI's shutdown of its entire safety team, and Google's abandonment of both its long-standing "Don't be evil" motto and a subsequent commitment against harmful applications of AI.
"The road to hell is paved with good intentions," said Max Tegmark, physicist and founder of the Future of Life Institute, in an interview. He argues the industry's resistance to binding regulation has created its own predicament. "All of these companies have persistently lobbied against regulation of AI, saying, 'Just trust us, we're going to regulate ourselves.'... We right now have less regulation on AI systems in America than on sandwiches."
The Regulatory Vacuum and National Security
Tegmark contends that this regulatory vacuum, one the companies themselves helped create by opposing government oversight, has left them vulnerable. "There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it," he said. He frames the pursuit of uncontrollable superintelligence not as a national asset but as a direct threat to government stability, a view he believes is gaining traction in Washington.
He dismissed the common industry argument that the race with China necessitates a hands-off regulatory approach, noting that China itself is moving to ban certain AI applications, such as "AI girlfriends," over societal concerns. "Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government?"
Rapid Pace of Development
The technological timeline underscores the urgency of the governance debate. Tegmark cited a recent paper he co-authored with top AI researchers that proposes a working definition of Artificial General Intelligence (AGI). By that benchmark, GPT-4 scored 27% of the way to AGI, while GPT-5 reached 57%. "Going from 27% to 57% that quickly suggests it might not be that long," he warned his MIT students, suggesting job markets could be transformed within years.
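For rough context, a back-of-the-envelope extrapolation from those reported figures (an illustration, not a calculation from the paper itself): one model generation closed roughly 30 percentage points of the gap, so at the same pace the remaining 43 points would be covered in

    (100 − 57) / (57 − 27) ≈ 1.4

further generations. Any such linear reading is crude, but it conveys why Tegmark treats the timeline as short.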
Industry Reaction and Future Path
The blacklisting forces other AI giants to declare their positions. Following the announcement, OpenAI's Sam Altman said he shared Anthropic's "red lines," while Google and xAI did not immediately comment. Hours after Tegmark's interview, OpenAI announced its own separate deal with the Pentagon.
Tegmark expressed tempered optimism, suggesting a positive outcome remains possible if AI development is subjected to independent oversight akin to clinical trials. "Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be."