Google Signs Classified AI Deal with Pentagon

Google has agreed to allow the Pentagon to use its Gemini AI models for classified military work, a move that comes despite a letter from over 580 employees urging the company to refuse such arrangements. The deal expands Google's existing relationship with the Department of Defense, granting API access to its AI systems on classified networks.

Human coverage presents the deal as a controversial deepening of Google's entanglement with the Pentagon, emphasizing internal employee resistance, the non-binding nature of the safeguards, and the failure of Congress to keep pace with military AI adoption. It frames the agreement as part of a broader struggle over whether powerful AI systems should be used in opaque, classified military contexts at all.

Google has quietly crossed a line its own workers begged it not to—opening the door for its most powerful AI to operate in the classified heart of the U.S. war machine.

Spring 2026: The pressure campaign inside Google

The showdown began inside the company, not at the Pentagon.

On April 27, around 600 Google employees from DeepMind and Cloud sent CEO Sundar Pichai a blunt warning: do not let Gemini be used for classified U.S. military operations.1 The letter, prompted by reports that Google was negotiating a classified Gemini deal, argued that the very nature of AI made secret military integration uniquely dangerous.

“As people working on AI, we know that these systems can centralize power and that they do make mistakes,” the workers wrote. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”1

Their demand was absolute: “the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.” Otherwise, they warned, “such uses may occur without our knowledge or the power to stop them.”1

The letter drew a direct line back to 2018, when internal revolt forced Google to walk away from Project Maven, a Pentagon contract to integrate AI into drone targeting. That crisis produced a set of AI principles pledging not to build tools for weapons or surveillance. But last year, Google quietly updated those principles—removing the explicit language about weapons and surveillance even as it signed new Pentagon cloud and AI deals and told DeepMind employees to expect more of them.1

By late March 2026, Google was already providing AI agents to the Pentagon in unclassified environments. The fight was over what came next: whether Gemini would move into the classified realm of mission planning, intelligence analysis, and weapons targeting.

April 28: Google says yes to “any lawful government purpose”

Within 24 hours of that letter, Google answered—just not publicly, and not the way its employees wanted.

On April 28, the company signed a classified agreement allowing the Pentagon to use its Gemini AI models for military work under terms that permit “any lawful government purpose.”2 The deal provides API access to Google’s AI systems on air‑gapped, classified networks, extending an existing relationship that already put Gemini in front of three million Pentagon personnel on unclassified systems.2

A Google Public Sector representative confirmed the arrangement, which effectively lets the Pentagon plug directly into Gemini in the systems that handle mission planning, intelligence, and targeting.2

In the fine print, Google can claim it drew a line. The contract states that “the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”2 That echoes Google’s public line: “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight,” a spokesperson said.3

But there are two catches.

First, the agreement is explicitly for “all lawful use,” according to a source familiar with the deal.3 Second, the government can request adjustments to Google’s AI safety settings and content filters—effectively giving the Pentagon the ability to tune or relax the guardrails Google’s own researchers built into the models.2

DeepMind researcher Alex Turner publicly blasted the arrangement, arguing that Google “can’t veto usage” and is relying on “aspirational language with no legal restrictions,” Axios reported.3

The result: on paper, Google has red lines. In practice, the Pentagon holds the wrench.

The competitive backdrop: Anthropic says no, others rush in

Google’s move didn’t happen in a vacuum—it landed in the middle of a high‑stakes contest over who will arm the U.S. government with cutting‑edge AI.

Anthropic, the lab behind Claude, had refused to give the Department of Defense the same kind of broad license for “all lawful uses.” The Pentagon wanted unrestricted access, including potential pathways to domestic mass surveillance and autonomous weapons. Anthropic insisted on guardrails; the Pentagon responded by branding the company a “supply‑chain risk,” a label usually reserved for foreign adversaries.4

Anthropic sued. A judge granted the company an injunction, temporarily blocking the “supply‑chain risk” designation while the case proceeds.4 The Defense Department, meanwhile, continues using Anthropic models even as it pressures the company in court and works to expand government‑wide access.3

That standoff opened a lane for rivals. OpenAI “immediately signed a deal with the DoD, as did xAI,” TechCrunch notes, turning Anthropic’s principled stand into a business opportunity for competitors.4 Google is now the third major lab to move in.

Ironically, Anthropic has been touting a very different kind of government partnership. CEO Dario Amodei celebrated that “many of the world’s leading companies” had joined Anthropic’s Project Glasswing, an initiative using its Claude Mythos model to secure critical software by finding vulnerabilities “better than all but the most skilled humans.”5 Instead of feeding models into targeting systems, Anthropic is selling AI as a shield against AI‑enabled cyber threats—an alternative vision of what government AI work could look like.

Quiet exit from killer‑drone‑adjacent work

On the same day news of the classified Gemini deal broke, another revelation surfaced that complicates Google’s story.

Bloomberg reported—and The Next Web amplified—that Google had quietly dropped out of a $100 million Pentagon prize challenge to build technology for voice‑controlled autonomous drone swarms.2 Google had advanced in the competition but withdrew in February after an internal ethics review, officially citing a lack of “resourcing.”2

If the classified Gemini deal showed Google edging closer to the center of U.S. military power, the drone‑swarm exit suggested it still sees some lines it won’t cross—at least not yet. As The Next Web put it, “Google is drawing a line, but it is not the line its employees asked for.”2

In other words: no to helping the Pentagon prototype fully autonomous swarms; yes to putting Gemini inside the systems that help plan and support real‑world operations.

Washington’s missing guardrails

All of this is happening faster than Congress can legislate.

“Congress is far from passing military AI guardrails as Google and the Pentagon strike deal,” Axios reported.3 Lawmakers are only beginning to contemplate how to regulate AI in war even as the Pentagon signs increasingly permissive contracts.

The Google deal is reportedly more permissive than OpenAI's. OpenAI says it retains "full discretion" over its safety mechanisms; Google, by contrast, agreed to adjust its safety settings at the government's request, according to The Information's initial reporting.3

Advocates are now pushing to bolt real oversight onto the annual National Defense Authorization Act. Ideas on the table include transparency requirements around how military AI is used and verification mechanisms to ensure companies’ red lines aren’t just marketing copy.3 For now, though, the only constraints on how the Pentagon uses commercial frontier models are whatever contract clauses companies negotiate behind closed doors—and those clauses, critics note, “are not legally binding or enforceable.”4

The clash of narratives

Put in a timeline, the picture is stark:

  • 2018: Google bows out of Project Maven after employee revolt and publishes AI principles rejecting weapons and surveillance uses.1
  • 2023–2025: Those principles are softened; language around weapons and surveillance quietly disappears as Google deepens Pentagon contracts.1
  • February 2026: Google exits the voice‑controlled drone swarm contest after an internal ethics review, officially citing resource constraints.2
  • March 2026: Google announces unclassified AI agents for the Pentagon, signaling more deals ahead.1
  • April 27, 2026: Around 600 employees urge Pichai to reject classified military AI work outright.1
  • April 28, 2026: Google signs a classified deal giving the Pentagon access to Gemini for “all lawful use,” with adjustable safety settings and only non‑binding language against mass surveillance and autonomous weapons.2,3

Each side now tells a different story about what this sequence means.

Google and the Pentagon argue that responsible AI can and should be part of national defense, bounded by promises against domestic mass surveillance and fully autonomous killing machines.2,3 For them, the keyword is “oversight”—human in the loop, human in control.

Employees and outside critics see something closer to a bait‑and‑switch. The public principles shrink, the secrecy grows, and the only hard “no” becomes a drone‑swarm contest the company wasn’t willing—or able—to staff.1,2

And Anthropic has turned its own “no” into a moral differentiator—even as that stance earns it a “supply‑chain risk” label from the very government it’s trying to constrain.4

The core tension is no longer about whether Silicon Valley will work with the Pentagon. That debate is over. What’s on the table now is more specific and more dangerous: who gets to set the real red lines for military AI, who can move them, and whether anyone outside a classified room will know when they’ve been crossed.


1. Business Insider — Around 600 Google employees urged Pichai: “the only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.”

2. The Next Web — Google signed a classified AI deal letting the Pentagon use Gemini for “any lawful government purpose” and later exited a $100M voice‑controlled drone swarm contest after an ethics review.

3. Axios — Congress is “far from passing military AI guardrails” as Google agrees to let the Pentagon adjust Gemini’s safety settings for “all lawful use.”

4. TechCrunch — After Anthropic refused terms allowing domestic mass surveillance and autonomous weapons, the Pentagon labeled it a “supply-chain risk”; OpenAI, xAI, and now Google stepped in with more permissive deals.

5. @DarioAmodei on X — “I’m proud that so many of the world’s leading companies have joined us for Project Glasswing to confront the cyber threat posed by increasingly capable AI systems head-on.”
