Pentagon AI Deals
Pentagon Signs 8 AI Companies for Classified Work While Shutting Out Anthropic
The Department of Defense announced deals with OpenAI, Google, Nvidia, SpaceX, Microsoft, AWS, Oracle, and Reflection AI for classified military AI work — while Anthropic remains designated a supply chain risk after refusing "any lawful use" contract language.
The Pentagon's New AI Roster
On May 1, 2026, the Department of Defense announced formal agreements with eight AI companies to deploy their frontier AI capabilities on the Pentagon's classified networks for "lawful operational use." According to TechCrunch, the companies — OpenAI, Google, SpaceX, Nvidia, Microsoft, Amazon Web Services, Oracle, and startup Reflection AI — will provide resources to deploy AI in IL6 (classified) and IL7 (top secret) environments.
Conspicuously absent: Anthropic, the company that was previously the Pentagon's primary classified AI partner and the first to deploy AI on classified networks. According to DefenseScoop, Anthropic remains designated a "supply chain risk" — a label typically reserved for foreign adversaries — after refusing to accept contract language that would allow "any lawful use" of its AI by the military.
Why Anthropic Was Frozen Out
The core dispute centers on contract language. The Pentagon wanted Anthropic to agree to "any lawful use" or "any lawful purpose" provisions. Anthropic refused to loosen its safety "red lines" around mass domestic surveillance and fully autonomous weapons systems, according to The Verge.
After talks collapsed, the Trump administration designated Anthropic a supply chain risk — a move Military Times notes was "the first such designation against an American firm." Defense Secretary Pete Hegseth moved "within days" to apply the label, per the BBC, and posted on X that "effective immediately, no contractor, supplier, or partner" doing business with the Pentagon may engage in any commercial activity with Anthropic.
Anthropic has filed two lawsuits challenging the designation and won a temporary injunction in March 2026. The legal battle is expected to go to court in September 2026.
What the Deals Actually Cover
The DOD press release stated the agreements will "streamline data synthesis, elevate situational understanding, and augment warfighter decision‑making" across three core tenets: "warfighting, intelligence, and enterprise operations," as quoted by TechCrunch.
According to DefenseScoop, the DOD's GenAI.mil platform — launched in December 2025 — has already been used by more than 1.3 million DOD personnel, generating "tens of millions of prompts and deploying hundreds of thousands of agents in only five months." GenAI.mil currently operates at IL5 (controlled unclassified); the new deals extend into IL6 and IL7 classified environments for the first time across multiple vendors.
- OpenAI: Signed the original deal in late February 2026; maintains control over its "safety stack" and prohibits use for mass surveillance or autonomous weapons, per Gizmodo
- Google: First time Gemini will be used at the classified level; over 600 Google employees signed a letter urging the company not to deepen military ties, per BBC
- SpaceX: Now parent company of xAI (Grok); signed alongside other frontier AI firms, per BBC
- Nvidia: New contract; provides AI hardware and models for classified infrastructure
- Microsoft and AWS: Deep existing Pentagon cloud relationships extended to classified AI deployment
- Reflection AI: Startup that raised $2 billion in 2025; new DOD contract
The Mythos Paradox
Here's the twist: even as Anthropic is officially blacklisted, the NSA is already testing Anthropic's Mythos model to identify cybersecurity vulnerabilities in widely used software, including Microsoft products, according to Gizmodo. Mythos is Anthropic's most advanced AI model, released only to select organizations.
DOD CTO and Undersecretary of Defense for Research and Engineering Emil Michael told CNBC that Mythos represents a "separate national security moment," adding: "We have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them," as reported by The Verge. Michael confirmed Anthropic is still a "supply chain risk" even as its most powerful model is being tested by the NSA — a paradox that highlights the tension between security concerns and the practical value of Anthropic's technology.
The Safety vs. Access Debate
The Pentagon's deals reveal a sharp divide in how AI companies approach government work. Every company that signed accepted the "lawful use" framework — but with varying degrees of safety language. Gizmodo reports that OpenAI claims it maintained control over its safety stack and prohibited use of its AI for mass domestic surveillance or directing lethal autonomous weapons. But Google's deal explicitly states the company "does not confer any right to control or veto lawful Government operational decision‑making," per The Information, cited by Gizmodo.
At a Senate Armed Services Committee hearing on April 30, Defense Secretary Hegseth called Anthropic's leadership an "ideological lunatic who shouldn't have sole decision‑making over what we do." When asked whether there will "always be a human in the loop," Hegseth said "We follow the law and humans make decisions" and that AI is not currently "making lethal decisions," per Gizmodo.
President Trump told CNBC his administration had "some very good talks" with Anthropic and a future agreement was "possible," adding: "They're very smart, and I think they can be of great use."
Financial Context and Previous Contracts
Last summer, the Pentagon unveiled individual contracts with OpenAI, Anthropic, Google, and xAI — each worth up to $200 million — for "frontier AI projects," according to DefenseScoop. Anthropic had a $200 million deal to handle classified materials for the Pentagon, per The Verge. The financial terms of the new contracts were not disclosed.
Michael framed the expansion as a diversification strategy: "It's irresponsible to be reliant on any one partner. And we learned that that one partner didn't really want to work with us in the way we wanted to work with them," he told CNBC, as reported by DefenseScoop.
What This Means for Builders
For developers and builders working with AI platforms, the Pentagon deals have several practical implications:
- Multi‑model classified infrastructure is here. The DOD explicitly framed these deals as preventing vendor lock‑in. Builders shipping products that integrate with government systems now need to support multiple AI backends, not just one.
- The "any lawful use" standard is becoming the norm for government AI procurement. Every major AI company except Anthropic has now accepted it. This sets a precedent that will shape future contracts at federal, state, and local levels.
- Anthropic's tools are still in use across government. The BBC notes Claude remains in use in many US government and defense agencies, creating a complex transitional period for agencies that built workflows around Anthropic's tools.
- OpenAI's safety claims are untested. While OpenAI says it maintains safety guardrails, Google's deal language explicitly surrenders veto power. Builders should watch how these safety commitments hold up under actual operational pressure.
- Automation bias is the real risk. Lauren Kahn, a former DOD policy advisor now at CSET, called the expansion "long overdue" but warned the next step must involve training operators against "automation bias" — the tendency to overdelegate to machines — per DefenseScoop.
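The multi-backend point above can be sketched as a thin abstraction layer. This is a hypothetical illustration, not any vendor's actual SDK: the class names, the registry, and the `complete` function are all invented for the example, and a real product would call each provider's API behind the interface rather than returning stub strings.

```python
from abc import ABC, abstractmethod


class AIBackend(ABC):
    """Common interface so a product is not locked to one AI vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a model completion for the given prompt."""


class OpenAIBackend(AIBackend):
    # Stub: a real implementation would call the vendor's API here.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"


class GeminiBackend(AIBackend):
    # Stub for a second vendor behind the same interface.
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"


# Registry keyed by backend name; swapping vendors becomes a
# configuration change instead of a code rewrite.
BACKENDS: dict[str, AIBackend] = {
    "openai": OpenAIBackend(),
    "gemini": GeminiBackend(),
}


def complete(prompt: str, backend: str = "openai") -> str:
    """Route a prompt to whichever backend is configured."""
    return BACKENDS[backend].complete(prompt)
```

The design choice here is the registry: callers depend only on the `AIBackend` interface, so adding or removing a vendor touches one dictionary entry rather than every call site.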