Anthropic, the AI safety company, announced this week that it would not widely release its next-generation model, "Mythos," citing significant cybersecurity concerns. The company said the model was powerful enough to let non-experts exploit vulnerabilities in major operating systems, a warning that prompted a high-level meeting between US financial regulators and bank executives.
Instead of a public launch, Anthropic is making a preview of Claude Mythos available to 11 external organisations, including Google, Microsoft, Amazon Web Services, JPMorgan Chase, and Nvidia, as part of "Project Glasswing." The announcement ignited a fierce debate among AI researchers and cybersecurity specialists about the model's genuine threat level and Anthropic's motivations.
Claims of 'Overblown' Hype and Marketing
Several prominent AI figures have cast doubt on the severity of Anthropic's warnings. AI researcher and author Gary Marcus labelled the announcement "overblown," suggesting the public was misled about an immediate threat. He argued on Substack that the model appears only "incrementally better" than its predecessors, not a breakthrough.
Yann LeCun, founder of AMI Labs and Meta's former chief AI scientist, was blunter, dismissing the "Mythos drama" as nonsense born of "self-delusion." His criticism followed a report from AI security firm Aisle, which found that smaller, cheaper models could perform similar vulnerability analyses.
Jake Moore, global cybersecurity specialist at ESET, acknowledged "some marketing language" in the statement but conceded the model seemed "incredibly impressive." He noted Anthropic's "safety first" reputation means such announcements serve dual purposes: "genuine caution and signaling its safety-conscious stance."
Competitive Landscape and Regulatory Scramble
Reaction across the industry suggests rival labs may be close behind. Dave Kasten, head of policy at Palisade Research, told CNBC he expects "Anthropic is a little ahead, but not overwhelmingly ahead." He referenced an Axios report indicating OpenAI also possesses a model with advanced cybersecurity abilities earmarked for limited release.
The meeting between Federal Reserve Chair Jerome Powell, Treasury Secretary Scott Bessent, and major bank heads was characterised by T.J. Marlin, CEO of Guardrail Technologies, as a legal safeguard. He stated on LinkedIn that it ensured banks could not later claim ignorance, putting CEOs who fail to document a board-level response in a "legally exposed position."
A Defender's Advantage?
Some experts argue the cybersecurity narrative is missing a crucial perspective: that defenders may benefit more from AI than attackers. Venture capitalist Pablos Holman of Deep Future stated on LinkedIn that defenders "have the same AIs. Often better ones and way more compute," including access to source code. "This is still a war of escalation, but now the defender has the advantage," he wrote.
Ben Seri, cofounder of Zafran Security, described the moment as "cybersecurity's Manhattan Project." While acknowledging the threat is real, he argued the bottleneck has never been finding or even fixing vulnerabilities, but deploying fixes "safely, quickly, and at scale" into production environments.
Tech investor and former White House AI advisor David Sacks summarised the dilemma on X, stating the world must take the cyber threat seriously but noting it is "hard to ignore that Anthropic has a history of scare tactics."