Anthropic’s Glasswing tests AI bug hunters — Arabian Post

Anthropic has launched Project Glasswing, a controlled cybersecurity programme that gives a select group of large technology, finance and security organisations access to its unreleased Claude Mythos Preview model, with the aim of identifying and helping fix serious software vulnerabilities in widely used systems. The company says the model has already uncovered thousands of significant flaws across operating systems, browsers and other foundational software, but it has stopped short of making the model generally available because of the risk that such capabilities could be misused by attackers.

The move places Anthropic at the centre of a growing debate over whether advanced artificial intelligence will strengthen cyber defence faster than it empowers offensive hacking. Under Glasswing, partners including Amazon, Microsoft, Apple, Google, Nvidia, CrowdStrike and Palo Alto Networks may use Mythos Preview for defensive work such as local vulnerability detection, black-box testing of binaries, endpoint security and penetration testing. Anthropic has also said it plans to widen access to around 40 more organisations responsible for critical software infrastructure.

Anthropic’s own technical material presents the programme as a response to a sharp jump in model capability. In a research post dated April 7, the company said Mythos Preview had demonstrated the ability to identify and exploit zero-day vulnerabilities in every major operating system and every major web browser during testing. It added that many of the weaknesses it found were subtle and long-lived, including flaws that had remained buried for decades, while more than 99% of the vulnerabilities discovered had not yet been patched and therefore could not be publicly described in detail.

That claim, if borne out more broadly, would represent a meaningful shift in the balance between software defenders and attackers. Security teams have long struggled with the gap between the volume of code that needs to be reviewed and the number of specialists available to inspect it. Anthropic argues that cybersecurity is moving beyond purely human capacity and that AI can compress the time needed to surface dangerous weaknesses. Several launch partners echoed that view, warning that the gap between discovery and exploitation has narrowed dramatically and that defenders need access to stronger tools before hostile actors obtain similar systems.

Yet the announcement also underlines the danger of concentrating such capability in a small circle. Anthropic has explicitly said it does not plan to make Mythos Preview generally available at this stage. Instead, it says its longer-term aim is to build safeguards that would allow Mythos-class systems to be deployed more safely at scale. The company has said it intends to test those protections on a future Claude Opus model that, in its view, does not present the same level of cyber risk. That approach reflects a broader industry dilemma: frontier AI firms want to prove their systems can deliver public benefit, but they are increasingly reluctant to release models that may lower the barrier to sophisticated attacks.

Glasswing is also being positioned as an attempt to avoid leaving open-source maintainers behind. Anthropic has committed up to $100 million in usage credits for participants and another $4 million in donations to open-source security groups, including Alpha-Omega, OpenSSF through the Linux Foundation, and the Apache Software Foundation. The Linux Foundation said the effort could reduce the security burden on maintainers who underpin much of the world’s digital infrastructure but often lack the budgets and dedicated teams available to large corporate users. Anthropic says maintainers will be able to apply for access through its open-source programme.

The commercial implications were visible almost immediately. Reuters reported that earlier fears over Anthropic’s cyber capabilities had weighed on shares of security firms such as Palo Alto Networks and CrowdStrike, amid concern that increasingly powerful AI could disrupt existing cyber products. But the formal inclusion of established cyber groups in Glasswing has helped reframe the story, suggesting that the next phase of AI security may favour partnerships between model developers, infrastructure providers and specialist defenders rather than outright displacement.
