Anthropic, the artificial intelligence startup, has released a preview of its new frontier model, Mythos, to a select group of over 40 partner organisations for defensive cybersecurity work. The initiative, named Project Glasswing, will see the model deployed to scan for vulnerabilities in both proprietary and open-source software systems.

The company claims that in recent weeks, Mythos has already identified "thousands of zero-day vulnerabilities, many of them critical," with some flaws dating back one to two decades. Though the model was not specifically trained for cybersecurity, Anthropic says it possesses strong agentic coding and reasoning skills suited to the task.

Limited Preview with Major Tech Partners

The preview is not being made generally available. Partner organisations granted access include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of Project Glasswing, these partners will share their findings to benefit the wider tech industry.

Anthropic describes its frontier models as its most sophisticated, designed for complex tasks like agent-building and coding. A leaked internal document, first reported by *Fortune*, called the model (then codenamed "Capybara") "by far the most powerful AI model we’ve ever developed." Anthropic attributed the leak to "human error."

Security Concerns and Legal Context

The same leaked memo acknowledged that such a powerful model could pose a cybersecurity threat if weaponised by malicious actors to find and exploit bugs, rather than fix them. This underscores the controlled nature of the current preview for defensive purposes only.

Anthropic also confirmed it has held "ongoing discussions" with federal officials regarding Mythos's use. Those talks are complicated by a legal battle with the Trump administration, which began after the Pentagon labelled Anthropic a supply-chain risk over its refusal to permit autonomous targeting or surveillance of U.S. citizens.

Recent Operational Challenges

The launch follows recent operational incidents for Anthropic. Last month, the company accidentally exposed nearly 2,000 source-code files and over half a million lines of code through an error in a software package release. Its subsequent attempt to clean up the leak inadvertently took down thousands of code repositories on GitHub.

Anthropic has said the GitHub takedowns were an accident. The episodes highlight the complex security challenges facing AI labs as they develop and deploy increasingly powerful systems.