The company also released the Agent Trust Protocol, an open cryptographic standard designed to verify the identity, authority and integrity of AI agents as they operate across digital systems. The protocol is being positioned as a trust layer for autonomous agents that can read emails, write code, trigger transactions, access enterprise tools and act on behalf of human users.
OTT Cybersecurity said the protocol is open, royalty-free and planned for submission to the Internet Engineering Task Force (IETF). Its reference implementation has been released under an MIT licence, signalling an attempt to encourage wider adoption rather than keeping the mechanism locked inside a proprietary platform.
The development comes as enterprises and public-sector organisations accelerate the use of AI agents in workflows that previously required direct human intervention. That shift has created a security gap: many agents can be granted broad access to systems, but organisations often lack a reliable way to confirm what an agent is, who authorised it, whether its instructions have been altered, and whether its permissions remain valid.
Lyrie’s Agent Trust Protocol seeks to address that gap through five core functions: identity, scope, attestation, delegation and revocation. In practical terms, those functions are intended to help a receiving system determine whether it is communicating with a legitimate agent, what that agent is allowed to do, whether the agent or its instructions have been tampered with, who delegated its authority, and whether that authority has been withdrawn.
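The protocol's wire format has not been published in detail, but the five functions map naturally onto a credential-verification flow. The sketch below is purely illustrative: the field names, the HMAC signing scheme and the in-memory revocation list are assumptions for demonstration, not the actual specification, which would likely use public-key cryptography.

```python
import hmac, hashlib, json, time

SECRET = b"issuer-signing-key"   # placeholder; a real system would use asymmetric keys
REVOKED = {"agent-007"}          # hypothetical revocation list

def sign(credential: dict) -> str:
    """Produce a tamper-evident signature over the credential (attestation)."""
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(credential: dict, signature: str, required_scope: str) -> bool:
    # Attestation: reject if the credential or its instructions were altered.
    if not hmac.compare_digest(sign(credential), signature):
        return False
    # Identity and revocation: is the agent known, and is its authority still valid?
    if credential["agent_id"] in REVOKED:
        return False
    # Delegation: authority must come from a named delegator and not have expired.
    if not credential.get("delegated_by") or credential["expires_at"] < time.time():
        return False
    # Scope: the requested action must fall within the granted permissions.
    return required_scope in credential["scopes"]

cred = {
    "agent_id": "agent-42",
    "delegated_by": "user:alice",
    "scopes": ["email:read", "code:write"],
    "expires_at": time.time() + 3600,
}
sig = sign(cred)
print(verify(cred, sig, "email:read"))    # in scope, untampered, not revoked
print(verify(cred, sig, "payments:send")) # outside granted scope
```

A receiving system running checks like these can refuse an agent whose signature fails, whose delegator revoked it, or whose requested action exceeds its grant, which is the behaviour the five functions are intended to enable.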
Guy Sheetrit, founder and chief executive of OTT Cybersecurity, described Lyrie as a security layer for AI rather than a conventional tool operating beside it. He said autonomous agents on the internet are effectively strangers unless systems can verify their identity and permissions before allowing them to act.
Acceptance into Anthropic’s Cyber Verification Program is significant because it gives Lyrie a recognised pathway for work involving vulnerability research, red-team workflows and offensive security tooling built on Anthropic’s Claude models, subject to the company’s safety and security policies. Such programmes are intended to separate legitimate cybersecurity operators from malicious actors while allowing controlled research into vulnerabilities and misuse risks.
Anthropic’s broader cyber push has gained prominence through Project Glasswing, an initiative involving major technology and security organisations aimed at securing critical software in the AI era. Anthropic has said advanced models are now capable of finding and exploiting software vulnerabilities at a level that could reshape cyber defence, making structured access, verification and responsible deployment more important.
Lyrie’s announcement also coincides with its emergence from stealth and completion of a $2 million pre-seed funding round. The company said the capital will support platform development, expansion of its security research team, infrastructure scaling, the IETF submission process for the Agent Trust Protocol, and partnerships with enterprise and government customers.
The Lyrie platform is being marketed as a combined offensive and defensive cybersecurity system for the agentic AI era. Its stated capabilities include autonomous penetration testing, adversarial AI red-teaming, zero-day research workflows, vulnerability scanning, endpoint defence, web application firewall functions, breach monitoring and automated remediation.
Cybersecurity specialists have increasingly warned that agentic AI creates a different risk profile from standard chatbots or conventional automation. Unlike systems that only generate text, agents may be able to take actions across multiple tools, interact with live data, modify files, execute code, approve workflows and make decisions at machine speed. That makes identity and permission controls central to any safe deployment model.
OWASP’s Agentic Security Initiative has identified emerging risks around agent misconfiguration, excessive privilege, rogue behaviour, tool misuse and weak governance. These concerns have pushed agent security from a niche research issue into a boardroom question for organisations experimenting with autonomous systems.
Lyrie’s challenge will be turning a proposed open standard into an accepted industry mechanism. Cryptographic identity and revocation systems depend on broad implementation, interoperability and trust across vendors. Without adoption by platform providers, enterprise software companies and cloud environments, any agent verification protocol risks remaining useful only inside a limited ecosystem.
Regulatory pressure may help accelerate demand. Governments are sharpening scrutiny of AI systems that affect critical infrastructure, financial transactions, personal data and national security. Companies deploying autonomous agents will face growing expectations to prove that such systems are accountable, auditable and constrained by enforceable controls.
For Dubai’s cybersecurity sector, OTT Cybersecurity’s move places a UAE-based company into a global debate over how to secure autonomous AI. The firm is entering a crowded and fast-moving market, but its focus on identity, scope and tamper verification reflects one of the most urgent technical questions facing enterprises as AI agents move from experimental pilots into production environments.
