The artificial intelligence company Anthropic has formally abandoned a foundational commitment to pause the scaling or deployment of its AI models if they outpace the company's ability to implement safety measures. The policy change, announced on Tuesday, marks a significant strategic shift for a firm built on "responsible scaling" principles, as it navigates a fiercely competitive market with limited government oversight.
In a statement, Anthropic cited the "heightened competition" in the AI sector and an "anti-regulatory political climate" as key reasons for revising its Responsible Scaling Policy (RSP). The company stated that effective government engagement on AI safety remains a "long-term project" that is not happening organically as the technology advances.
From Unilateral Pause to Industry Guidelines
The original policy, introduced in 2023 and loosely modelled on U.S. government biosafety levels, committed Anthropic to halt progress if its safety evaluations fell behind. The new framework separates internal company guidelines from broader industry recommendations. Anthropic's chief science officer, Jared Kaplan, told Time Magazine the unilateral commitment no longer made sense if competitors were "blazing ahead."
"We felt that it wouldn't actually help anyone for us to stop training AI models," Kaplan said. The revised policy retains a commitment to delay "highly capable" models, but under more limited circumstances.
CEO Points to Past Sacrifices for Safety
Anthropic's co-founder and CEO, Dario Amodei, has repeatedly pointed to the company's 2022 decision to delay the public release of its Claude chatbot as evidence of its safety-first ethos. That move came months before OpenAI's release of ChatGPT ignited the current AI race. "Now, that was very commercially expensive," Amodei admitted in a recent interview. "We probably ceded the lead on consumer AI because of that."
Amodei has framed the company's approach using the Spider-Man adage, telling podcaster Lex Fridman in November 2024 that "with great power comes great responsibility." He defended the company's advocacy for measures like U.S. export controls on advanced chips to China, a stance criticised by Nvidia CEO Jensen Huang.
Pressure from Pentagon and Theoretical Risks
The policy revision coincides with reported pressure from the U.S. Department of Defense regarding the "redlines" Anthropic sets for military use of its AI. Amodei met with Defense Secretary Pete Hegseth on Tuesday, facing a potential Friday deadline for the company to alter its stance.
Anthropic also argued that theoretical higher levels of AI risk, designated ASL-4 and beyond in its framework, cannot be managed by any single company alone. It compared these levels to BSL-4, the highest biosafety containment level used for pathogens like Ebola.
The company maintains that its RSP was always intended as a "living document." In its blog post, Anthropic reaffirmed its belief that government regulation is "both necessary and achievable," but conceded that progress toward it is lagging the pace of AI advancement.