Pentagon Inks Deals with Tech Giants to Deploy AI on Classified Networks
- The road to an AI‑first military
- Locking AI into the classified stack
- From favored partner to “supply‑chain risk”
- Diversification by design
- Competing visions of “lawful use”
- What changes on the battlefield
- The next phase: consolidation or collision?
Coverage depicts the Pentagon’s classified AI agreements with firms like Nvidia, Microsoft, AWS, OpenAI, Google, xAI, and Reflection as a deliberate move to build an AI-first fighting force while diversifying away from a single vendor after a high-profile clash with Anthropic over surveillance and autonomous-weapons limits. It stresses the concrete lineup of companies, the political and ethical implications of deploying AI on secret networks, and the power shift these deals create between the Defense Department and a small circle of technology giants.

The Pentagon is racing to turn the U.S. military into an “AI‑first fighting force,” cutting deals with nearly every major AI player in Silicon Valley, even as it drags one of its former favorites, Anthropic, into court.
The road to an AI‑first military
In the past year, the U.S. Department of Defense (DoD) has quietly rewritten how it works with cutting‑edge AI vendors, moving from a small stable of partners to a sprawling ecosystem.
First came agreements with OpenAI, Google, SpaceX and Elon Musk’s xAI, giving the Pentagon access to powerful models under what officials insist will be “lawful operational use.”1 These early deals set the template: commercial models, run inside government‑approved clouds and on secure networks, to accelerate intelligence analysis and battlefield decision‑making.
By the time the Pentagon’s latest announcement dropped on Friday, that approach had hardened into doctrine. The department said it had now inked additional agreements with Nvidia, Microsoft, Amazon Web Services (AWS) and the startup Reflection AI, allowing their AI hardware and models to be deployed directly on classified networks.1 Taken together with earlier arrangements, the DoD now has formalized classified‑use deals with seven AI companies: OpenAI, Google, Microsoft, Amazon, Nvidia, xAI and Reflection.2
Locking AI into the classified stack
The core of the new push is technical but consequential: the Pentagon wants these commercial models running at its highest security tiers.
According to the department’s statement, the companies’ systems will be deployed in Impact Level 6 (IL6) and Impact Level 7 (IL7) environments, security classifications for data and systems deemed critical to national security.1 IL6 and IL7 environments are subject to stringent physical protections, tight access controls and constant auditing.
Inside those hardened networks, Pentagon officials say the tools will be used to “streamline data synthesis, elevate situational understanding, and augment warfighter decision‑making.”1 The goal is nothing less than decision dominance: faster, better judgments in complex combat situations, powered by large language models and other AI systems.
The shift is not hypothetical. The Pentagon says more than 1.3 million DoD personnel have already used its secure enterprise generative AI platform, GenAI.mil, to tap into large language models and other tools inside government‑approved cloud environments.1
In its most recent announcement, the department cast the new deals as the next stage of that transformation, claiming they will “strengthen our warfighters’ ability to maintain decision superiority across all domains of warfare” and help “establish the United States military as an AI‑first fighting force.”1,2
From favored partner to “supply‑chain risk”
Running in parallel with this expansion, however, is a bitter falling‑out with one of the Pentagon’s earlier AI partners.
Anthropic—once trusted with a $200 million deal to handle classified materials for the DoD—refused to soften what it calls “red lines” on two fronts: mass domestic surveillance and fully autonomous weapons.2 The company insisted its models not be used for either purpose, even in defense of national security.
The Pentagon pushed back hard, demanding more leeway. When Anthropic held firm, the government moved to brand the company a “supply chain risk,” effectively banning its products across the federal government.1,2
Anthropic sued. In March, it won a temporary injunction blocking the Pentagon’s designation while the case plays out.1 The legal fight has become a test case for how far AI firms can go in constraining military use of their systems—and how far the U.S. government will go to override those constraints.
Publicly, the Pentagon is not backing down. Emil Michael, the Defense Department’s chief technology officer, told CNBC that Anthropic remains a “supply chain risk,” even while praising its powerful security‑focused model Mythos as a “separate national security moment.”2 He said the model is particularly adept at “finding cyber vulnerabilities and patching them,” a capability so significant the government now has to ensure its own networks are “hardened up” against potential misuse.2
Diversification by design
Inside the Pentagon, the Anthropic dispute has become a cautionary tale—and a justification for spreading bets across as many vendors as possible.
“The Department will continue to build an architecture that prevents AI vendor lock‑in and ensures long‑term flexibility for the Joint Force,” the DoD said in its latest statement.1 It emphasized that “access to a diverse suite of AI capabilities from across the resilient American technology stack will give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”1
That philosophy is clearly visible in the roster of companies now cleared for classified work.
For Microsoft and Amazon, already deeply embedded in Pentagon IT modernization, the new agreements are natural extensions of existing “deep relationships” with the department.2 Nvidia, a hardware and systems powerhouse whose GPUs underpin much of the AI boom, adds raw compute and tailored AI infrastructure.1,2 Reflection, a younger startup, signals that the Pentagon wants to tap not just incumbents but emerging players as well.1,2
OpenAI, Google and xAI, meanwhile, bring general‑purpose frontier models. OpenAI and xAI had already struck separate agreements for the “lawful” use of their AI systems before being folded into the broader classified framework.2 A report from The Information suggests Google is operating under a similar agreement, giving the DoD another major foundation model provider.2
The signal to industry is blunt: don’t expect the Pentagon to gamble on a single provider again—especially not one that asserts strong ethical vetoes over military use cases.
Competing visions of “lawful use”
The central fault line between the Pentagon and Anthropic is not over whether AI should be used in war—it already is—but over who sets the boundaries.
On paper, the DoD insists all new systems will be used only for “lawful operational use,” a phrase that now appears in almost every AI announcement it makes.1,2 The formulation is meant to reassure lawmakers that AI‑enabled operations will remain within U.S. and international law.
Anthropic’s position effectively adds an extra layer: legal isn’t always enough. The company’s refusal to support mass domestic surveillance or fully autonomous weapons reflects a belief that some uses, even if arguably legal, are too dangerous or corrosive to permit.2
Most of the Pentagon’s new partners, at least publicly, appear more willing to defer to government determinations of what counts as lawful and appropriate. Their constraints tend to focus on safety and security—preventing models from being easily repurposed by adversaries—rather than categorical bans on specific mission types.
The Anthropic injunction ensures that, for now, the courts—not just contracts—will help define where that line gets drawn.
What changes on the battlefield
The operational promise of these deals is straightforward: speed and scale. With AI models wired into IL6 and IL7 systems, commanders could, in theory, ingest satellite imagery, signals intelligence, battlefield reports and cyber telemetry in near‑real time, letting algorithms surface threats, vulnerabilities and options much faster than human analysts working alone.1
GenAI.mil’s 1.3 million users provide an early glimpse: staff already use AI‑assisted tools to draft reports, summarize intelligence and translate documents within secure environments.1 Extending that into higher‑classification domains, with more powerful commercial models and dedicated hardware from Nvidia and others, could lock AI into the core of U.S. war planning.
Critics, however, warn of new dependencies. The more essential AI becomes to military operations, the more leverage its corporate providers gain—and the more catastrophic failures or outages could become. The Pentagon’s anti‑lock‑in mantra is an attempt to hedge against exactly that risk.
The next phase: consolidation or collision?
Chronologically, the story of Pentagon AI is moving from ad‑hoc experiments to strategic integration:
- Initial partnerships with firms like Anthropic to handle classified work, under relatively narrow contracts.
- Early “lawful use” agreements with OpenAI, xAI and Google to test commercial frontier models inside controlled environments.1,2
- The Anthropic rift, triggered when the lab refused to relax its red lines on surveillance and autonomous weapons, leading to its designation as a “supply chain risk” and a subsequent lawsuit and injunction.1,2
- The diversification push, capped by Friday’s announcement adding Nvidia, Microsoft, AWS and Reflection AI to the classified roster, and formalizing a stable of seven AI vendors with access to high‑security DoD networks.1,2
What comes next will determine whether the Pentagon’s AI future is defined by consolidation around a handful of compliant mega‑vendors—or by continued collision with firms that want stronger ethical brakes than “lawful use” alone.
For now, the Pentagon is making its bet: AI everywhere, at the highest levels of classification, from as many American tech giants as it can bring into the fold. Companies unwilling to play by those rules may find themselves, like Anthropic, fighting from the outside.
1. TechCrunch — “After landing agreements with Google, SpaceX, and OpenAI, the U.S. Defense Department said on Friday that it has signed deals with Nvidia, Microsoft, Amazon Web Services, and Reflection AI that allow it to deploy their AI tech and models on its classified networks for ‘lawful operational use.’”
2. The Verge — “The Pentagon has struck deals with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk’s xAI, and the startup Reflection, allowing the agency to use their AI tools in classified settings, according to an announcement on Friday.”