Imagine being told someone built a bomb, then offered a shelter to hide from it — but only if you’re deemed worthy. That’s exactly how Sam Altman describes Anthropic’s strategy for its latest AI model, Claude Mythos.

The Bombshell Accusation

In a fiery episode of Ashlee Vance’s “Core Memory” podcast, the OpenAI CEO didn’t hold back. “It is clearly incredible marketing to say, ‘We have built a bomb. We were about to drop it on your head. We will sell you a bomb shelter for $100 million… but only if we pick you as a customer,’” Altman said.

His words cut straight to the heart of a growing debate in tech: should powerful AI be locked away from the public, or released with responsibility?

What Anthropic Did With Claude Mythos

Earlier this month, Anthropic announced it wouldn’t release Claude Mythos publicly. Its reason? The model was too good at identifying cybersecurity vulnerabilities — a capability the company feared could be weaponised. Instead, it launched “Project Glasswing,” granting access to just 11 organisations, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase.

But Altman sees a different motive. “There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people,” he said. “The fear-based marketing is probably the most effective way to justify that.”

OpenAI’s Contrasting Vision

Altman didn’t just criticise — he laid out his own philosophy. “The goal here is to benefit everybody,” he insisted. He acknowledged that “very dangerous models will have to be released in different ways,” but promised OpenAI would “err on the side of larger releases.”

“We are going to try to help set up the world for as much success as we can,” he added, framing his approach as one of trust and transparency.

A Personal Attack That Stings

The rivalry got even more personal. Altman suggested Anthropic’s rhetoric may have contributed to a real-world consequence: the attack on his own home. “I think the doomerism talk hasn’t helped. I think the way certain other labs talk about us hasn’t helped,” he said, in a remark clearly aimed at Anthropic.

This is sure to fuel the already bitter feud between Altman and Anthropic CEO Dario Amodei, who left OpenAI to found one of its biggest competitors.

What This Means For You

Behind the corporate drama lies a critical question: who gets to control the most powerful AI on Earth? If Anthropic’s approach wins, the most advanced models could become exclusive tools for a handful of elite companies. If Altman has his way, they’ll be in everyone’s hands — with safeguards, but not locked away.

The battle lines are drawn. And the outcome could shape the future of technology for decades to come.