Imagine a vault, designed by the world's top security minds to be impenetrable, holding a weapon that could crack any corporate defence. Now imagine a group of hobbyists, chatting on Discord, who found the key the very day it was unveiled. This isn't a thriller plot—it's the startling reality facing AI giant Anthropic, as a report claims its exclusive cyber tool, Mythos, is already in unauthorised hands.
According to a Bloomberg investigation, members of a private online forum have gained access to the powerful AI model through a third-party vendor. Anthropic has confirmed it is investigating the claim, stating, "We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." Crucially, the company says it has found no evidence the breach affected its own core systems. But the mere fact of outside access raises a terrifying question: what happens when a tool built to fortify walls is taken for a test drive by strangers?
How Did They Do It? The "Educated Guess" That Unlocked Everything
The group's method was deceptively simple. Bloomberg reports they **"made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models."** This access was reportedly facilitated by an employee at a third-party contractor working for Anthropic. It wasn't a sophisticated cyber-heist; it was a digital lockpick, using familiarity with the company's own habits to find a hidden door.
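To see why this kind of "educated guess" is so cheap to make, consider a minimal sketch of pattern-based endpoint guessing. Everything here is invented for illustration: the host, the URL scheme, the model slugs and dates are hypothetical stand-ins, not Anthropic's or any vendor's real identifiers. The point is only that once a company's naming format is public, enumerating candidates for an unreleased model is a few lines of code.

```python
# Hypothetical illustration of naming-pattern guessing.
# All hosts, slugs, and dates below are invented placeholders.

# Publicly visible model IDs often follow a "name-date" slug convention,
# so an observer can reuse that format for an unannounced model.
def candidate_urls(model_name: str, dates: list[str]) -> list[str]:
    """Build plausible endpoint URLs by reusing an observed slug pattern."""
    base = "https://api.example-vendor.com/v1/models"  # invented vendor host
    return [f"{base}/{model_name}-{d}" for d in dates]

# Guess recent release dates for a rumoured "mythos-preview" slug.
guesses = candidate_urls("claude-mythos-preview", ["20250101", "20250201"])
for url in guesses:
    print(url)
```

A real attempt would then simply probe each candidate and see which one answers, which is why the report's description of the method as an "educated guess" rather than a hack rings true.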
Once inside, they didn't just peek. The group provided Bloomberg with screenshots and even a live demonstration, proving they have been using Mythos regularly. This tool, part of the exclusive "Project Glasswing" initiative released only to select vendors like Apple, was designed specifically to *prevent* use by bad actors. Yet here it is, being toyed with in a private chat room.
"Playing Around" or Prelude to Chaos?
The most chilling detail may be the most mundane. A source told Bloomberg the group is **"interested in playing around with new models, not wreaking havoc with them."** This casual intent exposes the core vulnerability: the line between a curious hobbyist and a malicious actor is perilously thin. Mythos, by Anthropic's own admission, could be weaponised against the very corporate security it was meant to bolster. The fact that its first unauthorised users are "playing" doesn't change the potency of the toy in their hands.
This breach strikes at the heart of Anthropic's promise. The limited release of Mythos was meant to be a controlled environment, a safety protocol to allay fears about enterprise security. If a Discord group can slip past it within a day, what does that say about the integrity of the "walled garden" approach to dangerous AI?
The fallout from this report is just beginning. For the tech industry, it's a stark warning about the fragility of vendor security chains. For every business relying on "exclusive" AI defences, it's a wake-up call. The future of secure AI may depend less on building stronger vaults, and more on assuming that someone, somewhere, is already figuring out how to pick the lock.