OpenAI plans to open its Trusted Access for Cyber programme to government cyber defenders across federal, state and local levels, marking a broader push to place advanced artificial intelligence tools in the hands of public agencies responsible for protecting critical systems.
The expansion is aimed at agencies handling national security, emergency response, public health systems, benefits delivery, municipal infrastructure and other services that face rising digital threats. OpenAI said the programme will create pathways for government users with varying levels of technical capacity to access more capable cyber models, supported by technical resources tailored to mission needs.
Trusted Access for Cyber was introduced in February as a trust-based framework for giving verified defenders greater room to use frontier models for security work while keeping safeguards against misuse. The framework rests on graduated tiers, with more powerful or permissive capabilities requiring stronger vetting, monitoring, security commitments and use-case controls.
The move comes as cyber-capable AI systems become more useful for both defenders and attackers. Security teams can use frontier models to review code, investigate suspicious behaviour, analyse malware, identify vulnerabilities and accelerate patching. At the same time, malicious actors are already using AI to improve phishing, automate reconnaissance, evade detection and scale operations.
OpenAI’s approach seeks to reduce what security professionals have long described as friction in legitimate defensive work. A prompt asking a model to “find vulnerabilities in my code” may reflect responsible testing, but the same request can also support unauthorised exploitation. Trusted Access is designed to make that distinction through identity verification, tiered access, usage monitoring and restrictions on clearly harmful tasks such as credential theft, malware deployment and destructive testing.
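The graduated-tier idea can be sketched in code. The snippet below is purely illustrative: the real programme's tier names, task categories and enforcement logic have not been published, so every identifier here (`Tier`, `REQUIRED_TIER`, `is_permitted` and so on) is a hypothetical stand-in for how tiered access gating might work in principle, with hard prohibitions applying regardless of tier.

```python
from enum import IntEnum

# Hypothetical tier ladder. The actual programme's tiers are not public;
# this only illustrates "graduated access": stronger vetting unlocks
# more capability.
class Tier(IntEnum):
    BASELINE = 1   # identity-verified defender
    VETTED = 2     # monitoring and security commitments in place
    TRUSTED = 3    # full vetting plus agreed use-case controls

# Assumed minimum tier for each defensive workflow (illustrative only).
REQUIRED_TIER = {
    "code_review": Tier.BASELINE,
    "threat_analysis": Tier.VETTED,
    "vulnerability_research": Tier.TRUSTED,
    "binary_reverse_engineering": Tier.TRUSTED,
}

# Tasks the article describes as restricted at every tier.
PROHIBITED = {"credential_theft", "malware_deployment", "destructive_testing"}

def is_permitted(task: str, tier: Tier) -> bool:
    """Return True if a verified user at `tier` may run `task`."""
    if task in PROHIBITED:
        return False          # clearly harmful tasks are never unlocked
    required = REQUIRED_TIER.get(task)
    return required is not None and tier >= required
```

Under this sketch, a baseline-tier user could ask for code review but not binary reverse engineering, while credential theft would be refused even at the highest tier, mirroring the distinction between vetting-gated capabilities and outright prohibitions.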
The government expansion builds on OpenAI’s April rollout of GPT-5.4-Cyber, a version of its flagship model fine-tuned for defensive cybersecurity use cases. Access to the more permissive model began with vetted security vendors, organisations and researchers, with higher programme tiers unlocking advanced workflows such as vulnerability research, threat analysis and binary reverse engineering.
OpenAI has said its existing programme is being scaled to thousands of verified individual defenders and hundreds of teams responsible for protecting critical software. Government participation adds a new layer to that strategy because public agencies often manage systems with high civic impact but uneven cyber staffing, especially at local and regional levels.
The company has also identified smaller hospitals, school districts, water utilities, municipalities and local infrastructure providers as priority areas for support through trusted intermediaries. Many such organisations lack the capacity to operate advanced AI security tools directly, making managed security providers, sector bodies, major security vendors and government-supported programmes central to the wider rollout.
The initiative forms part of a five-pillar cyber defence plan focused on democratising access, coordinating government and industry, strengthening security-by-design, aligning incentives and working with democratic allies. Financial institutions, cloud platforms, internet-facing technology providers, software-supply-chain defenders and critical infrastructure operators are among the sectors expected to receive priority attention because their protection can benefit large numbers of downstream users.
OpenAI has also committed $10 million in API credits through its Cybersecurity Grant Program to support teams working on open-source security and critical infrastructure protection. That funding is intended to accelerate defensive research, remediation and deployment of AI-assisted security tools.
The wider race in cyber-focused AI has intensified as model developers test controlled access to systems that can perform advanced technical tasks. Anthropic has been developing its own frontier cyber model under Project Glasswing, with selected organisations using a preview system for defensive security work. The competition is drawing attention from policymakers because the same capabilities that help defenders identify weaknesses can also lower barriers for attackers if released without adequate controls.
For government agencies, the practical value will depend on implementation. Smaller public bodies may benefit from faster incident triage, automated code review and better threat intelligence processing, but they will also need training, procurement guidance, data-handling rules and clear accountability frameworks. Sensitive public-sector environments may require limits on data retention, logging, model visibility and third-party platform access.
