
The Pentagon just made one thing clear: control matters more than capability.
On May 1, 2026, the Department of Defense moved forward with a sweeping set of AI agreements – while cutting out one of its most prominent partners, Anthropic. In its place, a new coalition of eight tech companies is being integrated directly into the military’s most sensitive systems.
At stake isn’t just performance. It’s who gets to decide how these systems are used.
The New AI Stack
The Pentagon’s new partners are being deployed into Impact Level 6 (IL6) and Impact Level 7 (IL7) environments – networks reserved for classified and top-secret operations.
The lineup:
- OpenAI
- Microsoft
- Amazon Web Services (AWS)
- Nvidia
- SpaceX
- Reflection AI
- Oracle
The group was originally announced with seven companies; Oracle was added later in the day to complete the stack.
This isn’t a single-vendor strategy. It’s a deliberately mixed system designed to avoid dependence on any one provider.
What Forced Anthropic Out
The split didn’t come down to performance. It came down to terms.
Under the new agreements, AI providers must allow their systems to be used for “all lawful purposes.” That clause became the breaking point.
Anthropic refused to agree.
Its models – including Claude – were already in use across government workflows, but the company maintained firm restrictions against:
- Mass domestic surveillance
- Fully autonomous lethal weapons systems
Those limits directly conflicted with what the Pentagon now expects from its vendors.
Defense Secretary Pete Hegseth reportedly viewed those constraints as incompatible with military operations. The result: Anthropic was designated a “supply chain risk” – a label never before applied to an American company.
The Power Struggle Moves Beyond the Courts: The White House and Mythos
The dispute is no longer confined to a courtroom.
Anthropic filed suit against the U.S. government, arguing the “supply chain risk” designation was unlawful retaliation for enforcing its safety policies.
In March, a federal judge in California issued a preliminary injunction, temporarily blocking parts of the government’s action and signaling that the challenge had merit. The government appealed, and the Ninth Circuit Court of Appeals allowed the designation to remain in place for now, citing national security concerns.
But even as that legal fight continues, a separate battle is emerging inside the executive branch.
According to reports from outlets including CNN and Reuters, the White House is exploring ways to work around the Pentagon’s restriction – driven by a single factor: Anthropic’s new “Mythos” model.
Unveiled in April, Mythos is described by industry sources as a major leap in cybersecurity capability, with the ability to autonomously identify and exploit previously unknown vulnerabilities across major systems. That dual-use potential – both defensive and offensive – has raised urgency at the highest levels.
Anthropic CEO Dario Amodei recently met with White House Chief of Staff Susie Wiles, as officials weigh how to access the technology without formally reversing the Pentagon’s position.
The result is a split inside the government itself: one side enforcing the restriction, the other looking for a way around it.
A Line Has Been Drawn – But Not Everyone Agrees
By signing with Anthropic’s competitors, the Pentagon isn’t waiting for a final ruling.
It’s building a system that no longer depends on the company.
But that decision hasn’t settled the issue.
Even as the Pentagon enforces its position, the White House’s pursuit of Mythos shows just how valuable Anthropic’s capabilities remain – a rare internal divide, with one part of the government pushing the company out while another looks for a way back in.
In the new AI stack, control may be the priority – but who ultimately holds it is far from settled.
