OpenAI Ships GPT-5.5-Cyber, a Near-Mythos Model for Vetted Defenders
OpenAI launched GPT‑5.5‑Cyber, a specialized model for cybersecurity defenders that scored 81.9% on the CyberGym benchmark. UK AISI testing of the underlying GPT‑5.5 found it nearly as capable as Anthropic's Claude Mythos — 20% vs 30% success on a 32‑step simulated corporate cyberattack. But the strategies diverge: Anthropic locks Mythos to ~40 orgs, while OpenAI offers tiered access through its Trusted Access for Cyber program.
A Second Cybersecurity AI Arrives
One month after Anthropic's Claude Mythos captivated Wall Street and Washington, OpenAI is shipping its own answer: GPT‑5.5-Cyber, a specialized version of its latest model tuned for cybersecurity defenders and rolling out in limited preview through the company's Trusted Access for Cyber (TAC) program.
"GPT‑5.5‑Cyber lets a smaller set of partners study advanced workflows where specialized access behavior may matter," OpenAI said in its announcement, CNBC reported. The model is designed to be more permissive on security‑related tasks than the publicly available GPT‑5.5, reducing friction for legitimate defensive work that would otherwise trigger safety guardrails.
The release is explicitly framed against Mythos. Anthropic's model "generated major attention" with interest from "Wall Street, senior U.S. government officials, major bank CEOs, and tech executives," CNBC reported. OpenAI's move signals that cybersecurity is now a named competitive front in the frontier model race.
How Close Is It to Mythos? The Numbers
The benchmark data puts GPT‑5.5-Cyber in the same league as Mythos — close enough that the difference may not matter for most defensive use cases. Note: the UK AISI tested GPT‑5.5 base (not the Cyber variant), but the results are indicative of the underlying model family's capability:
- CyberGym: GPT‑5.5‑Cyber scored 81.9% on CyberGym, a benchmark of 1,500+ historical vulnerabilities from open‑source projects, per SiliconANGLE.
- UK AISI attack simulation: GPT‑5.5 completed a 32‑step simulated corporate cyberattack in 2 out of 10 test runs. Claude Mythos did the same in 3 out of 10 runs, according to the UK AI Security Institute. Before Mythos, no AI model had ever successfully completed the test.
- MindStudio analysis: GPT‑5.5 scored 71.4% on expert cyber tasks and cracked a reverse‑engineering challenge in 10 minutes for $1.73. Claude Mythos scored 68.6%, per MindStudio.
- Capability equivalence: A source familiar with GPT‑5.5‑Cyber told Axios that its abilities are "roughly on par with Mythos."
What Vetted Defenders Can Actually Do With It
GPT‑5.5-Cyber is not just a less‑restricted version of the public model. It adds capabilities the standard model blocks entirely. Per SiliconANGLE, the model can:
- Generate vulnerability exploitation plans — and then validate them by launching simulated cyberattacks against the systems being studied
- Automate red teaming exercises by mimicking attacker behavior to test infrastructure defenses
- Hunt for software bugs, study malware, and reverse engineer attacks
- Write proofs of concept for discovered vulnerabilities
- Run security posture simulations to test organizational defenses
OpenAI still blocks explicitly malicious activities — credential theft, writing malware — but the expanded access is a significant step beyond the standard TAC tier, which only provides technical descriptions of attack methods without validating whether exploits actually work.
OpenAI vs Anthropic: Two Very Different Access Philosophies
The most interesting dimension of this release isn't the model itself — it's the access strategy. Anthropic and OpenAI have taken fundamentally different approaches to distributing cyber‑capable AI:
Anthropic's approach: Tight control. Claude Mythos access is limited to approximately 40 organizations through Project Glasswing, where members share information about how they're testing the model. The rollout was accompanied by CEO Dario Amodei meeting with senior Trump administration officials, CNBC reported — an unusually high‑touch government engagement for a product launch.
OpenAI's approach: Tiered transparency. The Trusted Access for Cyber program uses multiple tiers — a less‑permissive version for general TAC members and the full GPT‑5.5-Cyber for the highest tier of vetted defenders "responsible for securing critical infrastructure," per Axios. Stronger access verification and misuse monitoring are built in.
The contrast matters for builders and researchers. Anthropic's model is more capable but harder to access. OpenAI's model is nearly as capable and has a clear (if selective) pathway for qualified defenders to get in. For the cybersecurity community, which skews toward open access and transparency, OpenAI's approach may prove more popular.
The Safety Calculus
Any model that can simulate corporate cyberattacks raises hard safety questions. The UK AISI evaluation is sobering: GPT‑5.5 successfully executed a multi‑step attack chain in 20% of attempts. That's not reliable enough for attackers to depend on — but it's a capability that didn't exist in publicly documented AI systems before 2026.
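A back-of-envelope calculation shows why a 20% per-attempt rate is still concerning: an attacker who can retry compounds the odds quickly. This sketch assumes independent attempts, which real attack runs may not be, so treat the numbers as illustrative rather than a claim about the AISI evaluation itself.

```python
# Probability that at least one of n independent attack attempts succeeds,
# given a per-attempt success rate p. Independence is an assumption here.
def p_at_least_one_success(p_single: float, attempts: int) -> float:
    return 1 - (1 - p_single) ** attempts

for n in (1, 5, 10, 20):
    # At p = 0.2 (the AISI-reported rate), odds climb fast with retries.
    print(f"{n:>2} attempts: {p_at_least_one_success(0.2, n):.1%}")
```

At 10 attempts the cumulative success probability is already about 89%, which is why "unreliable per run" does not translate to "safe against a patient adversary."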
OpenAI has added safeguards: stronger identity verification for TAC applicants, enhanced misuse monitoring, and ongoing compliance checks. But the fundamental tension remains — giving defenders better tools means the same underlying capability exists in the model, gated only by access controls.
Axios framed the moment well: before Mythos, no AI model had ever successfully completed the UK AISI's 32‑step attack test. Now two models can do it, from two different companies, in the space of a month. The genie isn't going back in the bottle — the question is who gets to use it, and under what rules.
What Builders and Security Researchers Should Know
For the developer and security communities, GPT‑5.5-Cyber has several immediate implications:
- Apply for TAC access. If you're doing legitimate defensive security work, the Trusted Access for Cyber program is the only way to access these capabilities. The highest tier gets GPT‑5.5‑Cyber. OpenAI says applications are open to vetted defenders.
- The benchmark gap between models is shrinking. A month ago, Mythos was unique. Now GPT‑5.5‑Cyber is within striking distance on every major benchmark. The competitive dynamics of AI cybersecurity are evolving in weeks, not months.
- Tooling implications. As these models become accessible to defenders, expect a new wave of AI‑native security tools — automated vulnerability scanners, continuous red teaming agents, and patch validation pipelines that use frontier models under the hood.
- The access tier you're in matters more than the model you use. Both OpenAI and Anthropic are building tiered access systems. For security researchers, relationship‑building with these programs may matter as much as technical skill.