Google Signs Deal to Provide Gemini AI for Classified Pentagon Work
- Early rumblings: Congress talks, labs move
- Anthropic says no – and gets punished
- OpenAI, xAI – and then Google – move in
- April 27–28: Employees rebel, Google signs anyway
- A line on drones – and a much blurrier one on code
- “All lawful use” in a law‑light world
- Competing visions of “responsible” AI power
- The clock is now on Congress
Human coverage portrays Google’s Gemini deal as a consequential deepening of military AI collaboration that outpaces congressional guardrails and leans heavily on unenforceable promises about ethical use. It emphasizes employee pushback, the classified and opaque nature of the work, and the risk that corporate and Pentagon decisions will shape battlefield AI norms before the public or lawmakers can meaningfully intervene.
Google has quietly crossed a Rubicon: its flagship Gemini AI will now run on U.S. classified networks, under a Pentagon deal that greenlights “any lawful government purpose” while Congress is still arguing over what “lawful” ought to mean in the age of machine intelligence.
Early rumblings: Congress talks, labs move
By late April, lawmakers in Washington were already being warned they were falling behind their own technology.
As AI systems raced ahead, Congress remained “far from passing military AI guardrails,” even as the Pentagon was signing fresh AI contracts that tested the limits of corporate red lines on warfighting and surveillance.[1] Advocates pushed for hard law—transparency mandates, independent verification, and enforceable limits on how AI could be used in battle or at home—but those ideas were still being haggled over as part of the annual National Defense Authorization Act.[1]
The vacuum left a familiar power center in charge: the Pentagon’s contracting office. And the big labs were lining up with very different answers.
Anthropic says no – and gets punished
The decisive turn came first from Anthropic. When the Defense Department sought broad, essentially unrestricted access to its models—including for domestic mass surveillance and autonomous weapons—Anthropic balked.[2]
The company insisted on guardrails to prevent its AI from being pointed at Americans’ private lives or used to select and fire weapons without meaningful human control.[2] The Pentagon’s response was blunt: it labeled Anthropic a “supply-chain risk,” a designation usually reserved for foreign adversaries, not a U.S. startup backed by some of the biggest names in tech and finance.[2]
Anthropic sued. In March, a judge granted the company an injunction pausing that blacklist while the case proceeds, turning the standoff into a live test of how far a government can go to punish an AI vendor for saying no.[2]
Even amid the fight, Anthropic leaned harder into its role as the “responsible” lab, unveiling Project Glasswing, an initiative to harden critical software systems using its new Claude Mythos Preview frontier model.[3] CEO Dario Amodei framed it as a coalition effort to counter the dark side of AI, saying he was “proud that so many of the world’s leading companies have joined” Anthropic to “confront the cyber threat posed by increasingly capable AI systems head-on.”[4]
In other words: yes to AI for cyber defense; no to AI for mass surveillance and autonomous killing.
OpenAI, xAI – and then Google – move in
The Pentagon didn’t wait for the courts. As soon as Anthropic refused, other labs saw an opening.
OpenAI quickly inked its own arrangement with the Defense Department, followed by Elon Musk’s xAI.[2] Both accepted broadly permissive terms that, according to reporting, effectively allow “all lawful uses” of their models on classified networks.[1][2]
On April 28, Google joined them.
Multiple outlets reported that Google had signed a deal giving the Pentagon access to its Gemini AI on classified networks, “essentially allowing all lawful uses.”[2] A source familiar with the arrangement said the Defense Department could now use Gemini for “all lawful use,” and that the contract was “more permissive than OpenAI’s.”[1]
Crucially, Google agreed that the government could ask it to adjust Gemini’s safety settings and content filters—meaning the Pentagon can, at least in principle, dial down or bypass guardrails that Google’s own teams designed.[1][3]
April 27–28: Employees rebel, Google signs anyway
The timing was conspicuous.
On April 27, more than 580 Google employees signed an open letter urging CEO Sundar Pichai to follow Anthropic’s lead and refuse classified military work without hard guardrails against misuse, explicitly calling out scenarios like domestic mass surveillance and weapons autonomy.[3]
Within 24 hours, Google did the opposite.
On April 28, the company confirmed it had signed a deal letting the Pentagon use Gemini “for classified military work” under terms that permit “any lawful government purpose.”[3] The arrangement extends an existing relationship in which Gemini is already deployed to about three million Pentagon personnel on unclassified systems, but it vaults the technology directly into the air‑gapped networks that handle mission planning, intelligence analysis, and weapons targeting.[3]
The contract text nods to ethics. It states that “the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”[3] A Google spokesperson echoed this red line in broader terms, saying the company remains committed to the consensus that AI “should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”[1]
But Axios and others noted the fine print: those commitments are couched in “aspirational language with no legal restrictions,” and critics warn they are not meaningfully enforceable.[1][2] DeepMind research scientist Alex Turner put it more bluntly, arguing the agreement means Google “can’t veto usage” and is relying on words the Pentagon can simply ignore.[1]
TechCrunch reported that, like OpenAI’s arrangement, Google’s contract includes language disavowing certain uses—but it remains “unclear whether such provisions are legally binding or enforceable.”[2]
A line on drones – and a much blurrier one on code
Google is trying to argue that there is a line.
On the same day its classified Gemini deal became public, Bloomberg revealed that Google had quietly withdrawn in February from a $100 million Pentagon challenge to build voice‑controlled autonomous drone swarms.[3] The company had advanced in the competition but pulled out after an internal ethics review, officially citing “resourcing” but clearly wary of building systems edging toward AI‑driven lethal autonomy.[3]
By contrast, the Gemini contract is structured as an extension of Google’s existing Pentagon work: providing API access rather than bespoke weapons software.[3] That lets Google claim it isn’t designing missiles or targeting systems, just supplying a general‑purpose AI tool the government can plug into its own workflows.
The problem, critics argue, is that in modern warfare that distinction collapses. When an AI model can summarize satellite feeds, flag targets, write code for battlefield tools, or analyze communications patterns, offering it as an “API” into classified networks is tantamount to embedding it in everything from logistics to weapons planning.
“All lawful use” in a law‑light world
The phrase doing the heaviest lifting in these contracts is “all lawful use.”[1][2][3]
In theory, that sounds like a constraint. In practice, with Congress still debating what AI guardrails should look like and no explicit statutory bans on, say, AI‑enabled mass surveillance of Americans, “lawful” mostly means “not yet prohibited.”
Google and OpenAI both tout the same bright lines—no domestic mass surveillance, no fully autonomous weaponry. But as Axios notes, without binding law, those “red lines” live in contract clauses that are “susceptible to loopholes and workarounds.”[1]
The Pentagon, for its part, has every incentive to push those boundaries. It is under pressure to keep pace with rival powers, to exploit AI advantages in intelligence and cyber operations, and to harden U.S. systems against attacks from exactly the kinds of frontier models companies like Anthropic are building.
Competing visions of “responsible” AI power
Put side by side, the three major labs offer a split‑screen of what “responsible AI” means in practice.
- Anthropic is willing to partner with the government on national security—Project Glasswing is explicitly about defending “the world’s most critical software” with a highly capable model—but is drawing a bright legal and contractual line against mass surveillance and autonomous killing.[3][4]
- OpenAI and xAI have accepted the Pentagon’s broad, classified access terms and rely mainly on contractual language and internal policies to prevent misuse, while publicly maintaining their commitments to safety and ethics.[1][2]
- Google is trying to split the difference: exiting a splashy autonomous drone contest after an ethics review, even as it signs a more permissive, “any lawful purpose” AI access deal that its own employees had just begged it to refuse.[2][3]
The employee revolt underscores the internal split. Nearly a thousand Googlers have now either publicly or privately questioned whether the company is replaying its 2018 “Project Maven” moment—when it abandoned a Pentagon drone imagery contract after protests—only this time with far higher stakes and far more capable AI.[2][3]
The clock is now on Congress
For now, the most consequential decisions about how military AI will be used are being written not in statute but in boilerplate: procurement language, non‑binding “intent” clauses, and secret annexes on classified networks.
Outside groups are betting that the flurry of deals—particularly Google’s permissive contract, the Anthropic lawsuit, and the specter of AI‑assisted warfare in current conflicts—will finally “supercharge efforts on the Hill” to build real safeguards into the next defense policy bill.[1]
Until that happens, “all lawful use” will keep doing a lot of work—and the most powerful AI systems on Earth will increasingly be doing theirs in the dark, inside classified systems, governed more by contract lawyers than by democratic debate.
Sources
1. Congress stalls on military AI as Google and the Pentagon strike deal — “Congress is far from passing military AI guardrails as Google and the Pentagon strike deal … the Pentagon this week reached an agreement with Google to use its model for ‘all lawful use,’ … reportedly more permissive than OpenAI’s.”
2. Google expands Pentagon’s access to its AI after Anthropic’s refusal — “Google has granted the U.S. Department of Defense access to its AI for classified networks, essentially allowing all lawful uses … This deal follows Anthropic’s public stand … refusing … domestic mass surveillance and autonomous weapons … Anthropic refused those use cases, the DoD branded the model maker a ‘supply-chain risk’ … It is unclear whether such provisions are legally binding or enforceable.”
3. Google signs classified AI deal with Pentagon — “Google has signed a deal allowing the Pentagon to use its Gemini AI models for classified military work under terms that permit ‘any lawful government purpose’ … one day after more than 580 Google employees signed a letter urging CEO Sundar Pichai to refuse … The contract includes language stating that ‘the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons … without appropriate human oversight’ … the government can request adjustments to Google’s AI safety settings … Google had quietly dropped out of a $100 million Pentagon prize challenge to create … voice-controlled autonomous drone swarms, withdrawing in February after an internal ethics review.”
4. @DarioAmodei on X — “I’m proud that so many of the world’s leading companies have joined us for Project Glasswing to confront the cyber threat posed by increasingly capable AI systems head-on.”