Anthropic Built an AI Too Dangerous to Release. Then OpenAI Did Too.

AI Cybersecurity Arms Race


Anthropic's Mythos can find and exploit software vulnerabilities as well as top security experts — so the company restricted access. The White House pushed back on broader release. Then OpenAI followed suit with its own restricted GPT‑5.5‑Cyber model. Meanwhile, Anthropic launched Claude Security for defenders. The cybersecurity AI arms race has officially entered a new phase.

Mythos: The Model That Matches Top Security Experts

Anthropic built an AI model called Mythos that can find and exploit software vulnerabilities as effectively as elite cybersecurity professionals — and then decided the public shouldn't have access to it. The model "drastically reduces the time needed to exploit vulnerabilities," and similar tools "will likely spread among criminals and nation‑state actors," Security Affairs reported.

Mythos is capable of "matching top experts in identifying and exploiting weaknesses," according to the same report. The implications are significant: a tool that can automate zero‑day discovery could democratize offensive cybersecurity capabilities that previously required years of specialized expertise.

White House Pushes Back on Mythos Expansion

The White House has pushed back on letting more firms access Mythos. According to TipRanks, the administration intervened to block broader distribution of the model. The concerns center on dual‑use risk: the same capabilities that help defenders find vulnerabilities before attackers do could be turned against critical infrastructure by hostile actors.

This intervention marks a significant escalation in government oversight of frontier AI capabilities. It's one thing for companies to self‑regulate — it's another for the White House to step in and actively restrict a model's distribution. The move signals that the most powerful AI capabilities are now being treated as a national security concern, not just a commercial product decision.

Defense Secretary Pete Hegseth escalated the rhetoric further, calling Anthropic CEO Dario Amodei "a lunatic" and criticizing the company's insistence that its Claude chatbot not be used for mass surveillance, AFR reported. The political pressure on Anthropic comes even as the Pentagon has frozen the company out of classified AI contracts.

OpenAI Follows Suit With GPT‑5.5‑Cyber

Here's the twist: after Sam Altman publicly criticized Anthropic for gatekeeping Mythos — calling it "fear‑based marketing" — OpenAI announced it would restrict access to its own competing cybersecurity model, GPT‑5.5‑Cyber. TechCrunch reported that Altman confirmed the rollout would begin in the coming days, available only to "critical cyber defenders" through an application process.

OpenAI's Trusted Access for Cyber (TAC) program has scaled to "thousands of verified defenders and hundreds of teams responsible for protecting critical software," an OpenAI spokesperson told TechCrunch. The program is tiered: verified defenders can access more cyber‑permissive models like GPT‑5.4‑Cyber and the forthcoming GPT‑5.5‑Cyber with fewer safety restrictions.

The irony wasn't lost on observers. Altman had dismissed Anthropic's restrictions as fear‑based marketing, and some critics agreed the rhetoric was overblown. Yet an unauthorized group reportedly managed to gain access to Mythos anyway, suggesting the restrictions may be harder to enforce than to announce.

Claude Security: Arming the Defenders

While Mythos remains restricted, Anthropic has launched Claude Security to give defenders a fighting chance. The product is now in public beta for Claude Enterprise customers, using Claude Opus 4.7 to scan code for vulnerabilities and generate proposed fixes. According to Anthropic's announcement, scanning is available on the Claude Platform or through technology and services partners building with Claude; the launch was also covered by Security Affairs.

The tool has already been tested by hundreds of organizations and now offers scheduled scans, easier integration, and better tracking without requiring complex setup. For builders, this is the accessible side of AI‑powered security — a product you can actually use today to find and fix vulnerabilities in your own code before attackers do.

The New Asymmetric Access Model

What's emerging is a two‑tier system for AI cybersecurity capabilities. The offensive tools — Mythos, GPT‑5.5‑Cyber — are locked behind application processes and government oversight. The defensive tools — Claude Security, code scanning — are commercially available to anyone who can pay for an enterprise subscription.

This asymmetry raises real questions for the builder community. Small security teams and independent researchers may find themselves without access to the same AI‑powered vulnerability discovery tools that both attackers and well‑resourced defenders have. The "offense‑defense imbalance" in cybersecurity — where finding one vulnerability is inherently easier than patching all of them — gets amplified when the best tools for finding vulnerabilities are restricted.

Bloomberg reported that the global alarm around Mythos centers on this exact concern: AI models that can compress the time‑to‑exploit from weeks to hours could fundamentally destabilize the software ecosystem if they become widely available to threat actors.

Federal Reserve Weighs In

The regulatory response extends beyond the White House. Federal Reserve Governor Michelle Bowman said regulators must consider how best to supervise new technology like Anthropic's Mythos, Bloomberg reported. She added that Mythos shows the "dynamic nature" of AI tools, a signal that financial regulators are watching how AI‑powered vulnerability discovery could affect the stability of banking infrastructure.

For builders working in fintech or handling financial data, this means AI security tools are about to become a regulatory concern, not just a technical one. Expect increased scrutiny of how your applications handle vulnerability discovery and patching — especially as AI models get better at finding flaws faster than humans can fix them.

What Builders Should Do Now

If you're building software, the cybersecurity AI arms race has direct implications for your workflow:

  • Try Claude Security: If you're a Claude Enterprise customer, the public beta is live. Use it to scan your codebases for vulnerabilities before someone with a restricted model does it for you.
  • Apply for TAC: If your team handles critical infrastructure security, apply for OpenAI's Trusted Access for Cyber program to get access to GPT‑5.4‑Cyber and eventually GPT‑5.5‑Cyber.
  • Expect faster exploit cycles: The window between vulnerability disclosure and active exploitation is shrinking. Automated patching and continuous security scanning are becoming non‑negotiable.
  • Watch for regulation: The White House and Federal Reserve are both paying attention. Financial services and critical infrastructure builders should expect new compliance requirements around AI‑powered vulnerability discovery.
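On the continuous-scanning point, the workflow can be as simple as a scheduled job that fails loudly on serious findings. Below is a minimal crontab sketch, assuming a Unix host with the open-source trivy scanner installed (any SAST/SCA tool with a fail-on-findings exit mode works the same way); the repository path and the `notify-oncall` alert command are hypothetical placeholders for your own setup.

```shell
# Illustrative crontab entry, not a recommendation of any specific tool.
# Runs nightly at 02:00; trivy's --exit-code 1 makes HIGH/CRITICAL findings
# fail the command, which then triggers the (placeholder) on-call alert.
# m  h  dom mon dow  command
0 2 * * * cd /srv/myapp && trivy fs --severity HIGH,CRITICAL --exit-code 1 . || notify-oncall "nightly scan found HIGH/CRITICAL issues"
```

The `--exit-code 1` flag is the important design choice: it turns a passive report into a failure signal that CI systems and pagers can act on, closing the gap between discovery and response that the shrinking exploit window punishes.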
