AI vulnerability race gathers speed

Commercial artificial intelligence models are advancing quickly in vulnerability research and exploit development, sharpening concern across the cybersecurity industry that tools built for productivity and defence could also lower the barrier to offensive misuse. A study by Forescout’s Vedere Labs found that commercial systems now outperform open-source and underground alternatives at identifying software flaws, and that more than half of the models tested could generate exploits with varying degrees of autonomy or user guidance.

That marks a notable shift from the picture Forescout outlined in mid-2025, when failure rates remained high across vulnerability-research and exploit-development tasks. In the earlier study, 48 per cent of models failed the first vulnerability-research task, 55 per cent failed the second, 66 per cent failed the first exploit-development task and 93 per cent failed the second. In its follow-up work, the company described progress within even a three-month testing window as “remarkable”, with newer reasoning systems solving tasks that had been out of reach only weeks earlier.

Forescout’s testing suggests the strongest gains are now concentrated in mainstream commercial offerings rather than in openly distributed or illicitly marketed “uncensored” tools. In its breakdown of 17 commercial models, products from OpenAI, Google and DeepSeek, along with specialist offensive-security assistants, successfully handled at least some stages of both flaw discovery and exploit construction. Open-source models lagged badly: none of the 16 tested generated a working exploit for the first exploit-development task. Underground models, often advertised in criminal forums as unrestricted alternatives, were described as unstable, technically weak and poor value compared with commercial systems.

Even so, Forescout stopped short of arguing that fully autonomous AI hacking has arrived. Its researchers said many models still required substantial steering, correction and debugging help from the user, and warned that polished but inaccurate responses can mislead inexperienced operators. That caveat is important because it suggests the present risk is less about a push-button machine attacker and more about a sharp increase in the speed, scale and accessibility of skilled or semi-skilled exploitation work.

The wider industry backdrop points in the same direction. The World Economic Forum’s Global Cybersecurity Outlook 2026 found that 87 per cent of respondents identified AI-related vulnerabilities as the fastest-growing cyber risk of 2025. The International AI Safety Report 2026 said AI systems are particularly effective at discovering software vulnerabilities and writing malicious code, adding that criminal groups and state-associated attackers are already using general-purpose AI in their operations. It also cited a cyber competition in which an AI agent identified 77 per cent of the vulnerabilities in real software, placing it among the top-performing teams.

Pressure on defenders has intensified as frontier AI companies begin releasing specialised cyber models under controlled programmes. Reuters reported on April 14 that OpenAI introduced GPT-5.4-Cyber for vetted security professionals, a week after Anthropic announced Mythos under its Project Glasswing initiative. Reuters said Anthropic’s model had already found thousands of major vulnerabilities in operating systems, browsers and other software. Forescout, commenting on that development, said such systems could expose serious flaws at machine speed and compress the time between discovery and exploitation.

That compression matters most in environments where patching is slow or operationally risky. Forescout noted that critical infrastructure operators may patch only every few months to avoid disrupting energy supply or manufacturing, while hospitals must weigh security fixes against patient-safety concerns. In those sectors, a sudden rise in AI-assisted vulnerability discovery could leave organisations exposed for longer, especially when vendors have not yet produced patches or asset inventories are incomplete.

There is also a second layer to the threat. Reuters reported in January that researchers found thousands of internet-accessible open-source large language model deployments operating outside the guardrails of the main AI platforms, with hundreds showing evidence that safety controls had been stripped away. Researchers warned such systems could be repurposed for phishing, spam and disinformation, illustrating how defensive or neutral AI capability can spill into criminal use once controls weaken or disappear.
