
Google Gave the Pentagon Gemini Access for “Any Lawful Purpose” on Classified Networks

A classified amendment to Google’s existing DoD contract hands the U.S. military unrestricted Gemini AI access on air-gapped networks, where Google admits it can’t monitor a single query. Over 600 employees are furious. Anthropic already said no.

Eight years ago, Google’s workforce forced the company to walk away from the Pentagon. That was Project Maven, a drone-targeting AI program that drew more than 4,000 employee signatures on a protest letter and ultimately caused Google to let its defense contract expire in March 2019. The company quietly published AI principles pledging it would not develop AI for weapons or covert surveillance. That felt, at the time, like a line in the sand.

The line didn’t hold. On April 28, 2026, The Information reported that Alphabet’s Google had signed a classified amendment to its existing Pentagon contract, granting the U.S. Department of Defense access to its Gemini AI models on classified networks for, in the contract’s own language, “any lawful government purpose.” Google confirmed the deal to Reuters the same day. Within 24 hours, more than 600 of Google’s own employees, including over 20 directors and vice presidents and senior researchers from Google DeepMind, had signed an internal letter urging CEO Sundar Pichai to reverse course.

This is not a normal government technology contract. The classified networks in question are air-gapped, meaning they have zero connectivity to the outside internet. Google has acknowledged it cannot monitor how its AI is used once Gemini is deployed there. The company’s public safety commitments, its model usage policies, its ability to push updates or pull a compromised system, all of it disappears the moment the model crosses into those networks.


The Deal, Explained

The agreement builds on an existing relationship. In December 2025, the Pentagon launched GenAI.mil, a platform that gave roughly 3 million military and civilian DoD personnel access to Gemini for handling IL-5 data, the classification tier for information that’s sensitive but not formally classified. At that launch, DoD Under Secretary for R&D and CTO Emil Michael explicitly stated that classified data access was the next goal.

The April 2026 amendment delivers exactly that. Google now grants the Pentagon API-level access to its commercial Gemini models on classified infrastructure. The contract language, “any lawful government purpose,” is deliberately broad and mirrors the phrasing that Anthropic’s CEO Dario Amodei publicly refused to accept back in February 2026, citing autonomous weapons and mass surveillance concerns.

“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security.”

Google Spokesperson, Alphabet/Google — Reuters, April 28, 2026

That statement, carefully worded, does a lot of work. It references “industry-standard practices,” but those practices assume connectivity, monitoring, and the ability to intervene. None of those conditions exist on air-gapped classified networks.

What is an air-gapped network? A classified air-gapped system has zero external internet connectivity. Data physically cannot travel in or out via standard network paths. AI models must be transported as frozen, encrypted packages via classified courier. Once deployed, the provider cannot monitor queries, push safety updates, adjust outputs, or revoke access.

Inside the Air Gap: What Google Actually Can’t Control

This is where the technical reality gets uncomfortable. On a standard cloud deployment, Google can watch for policy violations, apply content filters, push model updates, and terminate access if something goes wrong. On a classified air-gapped network, the model is essentially frozen in place, a snapshot of Gemini at the moment of deployment, with no ongoing oversight from the company that built it.

The employee letter puts this plainly. Signatories wrote that on air-gapped classified networks, “Google cannot monitor how its AI is used, making ‘trust us’ the only guardrail against autonomous weapons and mass surveillance.” That’s not hyperbole. It’s a technical description of the actual constraint.

| Capability | Standard Cloud Deployment | Air-Gapped Classified Deployment |
| --- | --- | --- |
| Usage monitoring | Full query/response logging | None. Google has zero visibility. |
| Safety filter updates | Pushed remotely, near real-time | Impossible. Model is frozen at deployment. |
| Model updates | Continuous improvement cycles | Requires physical re-deployment via classified courier |
| Access revocation | Immediate remote kill switch | No remote mechanism exists |
| Policy enforcement | Terms of service apply | DoD interprets "lawful purpose" independently |
| Autonomous weapons use | Detectable via usage patterns | Undetectable and unverifiable |

The contract also reportedly requires Google to assist in adjusting AI safety filters for classified use cases. The specifics of what “adjusting” means in practice have not been made public, which is precisely the kind of opacity that has the employee base alarmed.

Key constraint: Once Gemini is deployed on a classified air-gapped network, Google’s published AI usage policies, its ethical commitments, and its safety monitoring capabilities become legally unenforceable and technically impossible to apply. The DoD defines what “lawful” means in that environment.

The Employee Revolt: 600+ Signatures and Counting

The internal opposition moved fast. According to Bloomberg, employees began circulating a letter on April 26, the day before the deal went public, suggesting word had leaked internally before the official announcement. By April 27, 580 people had signed. Within 24 hours of The Information’s report on April 28, The Washington Post counted more than 600 signatories.

What makes this round of opposition different from 2018 isn’t the number. It’s the seniority. Over 20 directors and vice presidents signed the letter, alongside senior DeepMind researchers. These aren’t junior engineers venting frustration. These are people with enough organizational standing to know what they’re putting on the line by attaching their names to an internal protest against a CEO decision.


2018 Project Maven

4,000+ employee signatures. 12+ resignations. Google walked away from the contract by March 2019.


2026 Pentagon Deal

600+ signatures within 48 hours. 20+ directors and VPs among signatories. Deal confirmed anyway.


The Key Difference

In 2018, Google hadn’t yet signed. In 2026, the classified amendment was already done when protests began.

Google has not signaled any intention to reverse the decision. The company’s public position, that API access with “industry-standard practices” is responsible, hasn’t shifted. But the protest letter does something strategically important: it creates a documented internal record that senior staff raised specific concerns before any potential future misuse. That matters if the deal eventually produces something that forces a public accounting.

The Project Maven Shadow: How Google Got Here

It’s worth running the tape on how this company went from refusing to renew a drone-targeting AI contract in 2018 to signing a classified “any lawful purpose” Pentagon deal in 2026. The trajectory isn’t accidental.

After Project Maven, Google published formal AI principles that explicitly ruled out weapons applications and covert surveillance. For several years, those principles functioned as a genuine constraint on the company’s defense business. Then the competitive landscape shifted.

OpenAI and Microsoft aggressively pursued military and intelligence contracts starting around 2023. The Pentagon’s CDAO started moving real money, not pilot programs, toward frontier AI companies. By July 2025, the DoD had awarded $200 million contracts to OpenAI, Google, Anthropic, and xAI for agentic AI workflows. Sitting out was no longer commercially neutral.

“AI adoption is changing the Defense Department’s ability to support operations and maintain its position globally.”

Doug Matty, Chief Digital and AI Officer, U.S. Department of Defense — DoD CDAO Announcement, July 14, 2025

Then came January 2026, when Defense Secretary Pete Hegseth announced DoD would integrate Elon Musk’s Grok into both classified and unclassified systems, while explicitly naming Gemini as already powering GenAI.mil. The signal from the Pentagon was clear: companies that engaged would get contracts. Companies that didn’t would watch competitors fill the gap.

Google’s classified deal is, in no small part, a response to that competitive pressure. It’s not the company that left Project Maven in protest. It’s the company that watched OpenAI and xAI move into classified military AI and decided it couldn’t afford to stay out.

Who Signed, Who Refused: The AI Industry Split

The Google deal crystallizes something that’s been building for two years: the AI industry is now openly divided on military work, and each company’s position is hardening into something that looks a lot like a permanent strategic identity.

Anthropic drew the sharpest line. In February 2026, CEO Dario Amodei publicly rejected the Pentagon’s “any lawful purposes” contract language, specifically over autonomous weapons and mass surveillance concerns. The DoD reportedly responded by designating Anthropic a “supply chain risk” and initiating a six-month phase-out of the company from existing contracts.

“Without appropriate oversight, fully autonomous weapons cannot be trusted to exercise the judgment that highly trained professional military personnel demonstrate daily. They require deployment with adequate safeguards, which do not currently exist.”

Dario Amodei, CEO, Anthropic — Anthropic Statement, February 26, 2026

xAI, by contrast, moved in the opposite direction entirely. Defense Secretary Hegseth’s January 2026 announcement confirmed Grok’s integration into classified systems without the public hand-wringing that surrounded Google’s deal. OpenAI has been equally willing, having won a standalone $200 million DoD contract in June 2025, the first officially listed on the DoD procurement site.

| Company | Pentagon Position | Key Action | Consequence |
| --- | --- | --- | --- |
| Google | Engaged (classified) | Signed "any lawful purpose" amendment, April 2026 | 600+ employee protest; reputational scrutiny |
| OpenAI | Engaged (classified) | $200M standalone DoD contract, June 2025 | Normalized military AI sales; minimal internal protest |
| xAI (Grok) | Engaged (classified) | DoD classified + unclassified integration, Jan 2026 | No public employee opposition reported |
| Anthropic | Refused classified terms | Rejected "any lawful purpose" language, Feb 2026 | Designated "supply chain risk"; 6-month DoD phase-out |

The unnamed Pentagon official who spoke to press framed the multi-vendor approach as intentional: having Google, OpenAI, and xAI all under contract “could provide the military with greater flexibility and help prevent any single entity from monopolizing contracts.” That’s a reasonable procurement rationale. It also means the DoD has no single chokepoint where an ethics objection could halt classified AI use.

The $13.4 Billion Spending Wave Behind This Deal

To understand why Google signed, you need to see the money. The DoD’s FY2026 budget request included $13.4 billion earmarked specifically for AI, a figure that represents a sevenfold increase over the $1.8 billion allocated in FY2025. It’s the largest single-year AI investment in U.S. defense history and the biggest standalone technical line item in a total defense request of $892.6 billion.

Budget context: The DoD’s $13.4 billion FY2026 AI budget is nearly as large as Anthropic’s entire annualized revenue of approximately $14 billion as of February 2026. The Trump administration’s proposed 2027 defense budget of $1.5 trillion, with $1.1 trillion for core DoD operations, signals this trajectory isn’t reversing.

The spending breakdown reveals where the classified AI money is heading. The DoD’s CDAO has allocated $9.4 billion to aerial drones and UAVs in FY2026, the single largest AI spending category. Maritime autonomous platforms claim another $1.7 billion. Core AI and automation technologies take $200 million. The implication is direct: the biggest AI budget items are autonomous weapons systems, exactly the category Anthropic cited when it refused Pentagon terms.

  • $9.4 billion for aerial drones and UAVs, the primary AI spending category in FY2026
  • $1.7 billion for maritime autonomous platforms
  • $200 million for core AI and automation technologies
  • $13.4 billion total AI budget, up from $1.8 billion in FY2025
  • $1.5 trillion proposed total defense spending in 2027, with further AI expansion expected
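The arithmetic behind these figures checks out, and it is worth making explicit: the three itemized categories reported above account for only $11.3 billion of the $13.4 billion total, leaving roughly $2.1 billion that public reporting does not break down. A short sketch, using only the dollar figures cited in this article:

```python
# Sanity-check the FY2026 DoD AI budget figures cited above (all in $ billions).
fy2025_total = 1.8
fy2026_total = 13.4

# "Sevenfold increase" in the text is a round number; the exact multiple is ~7.4x.
growth_multiple = fy2026_total / fy2025_total

# Itemized categories from the reported CDAO breakdown.
drones_uav = 9.4   # aerial drones and UAVs
maritime = 1.7     # maritime autonomous platforms
core_ai = 0.2      # core AI and automation technologies
itemized = drones_uav + maritime + core_ai

# Share of the AI budget going to autonomous platforms (air + sea combined).
autonomous_share = (drones_uav + maritime) / fy2026_total

# Remainder not covered by the three cited categories.
unitemized = fy2026_total - itemized

print(f"Growth multiple: {growth_multiple:.1f}x")             # ~7.4x
print(f"Autonomous platforms share: {autonomous_share:.0%}")  # ~83%
print(f"Unitemized remainder: ${unitemized:.1f}B")            # ~$2.1B
```

The ~83% share going to aerial and maritime autonomous platforms is the number that gives Anthropic's objection its teeth: the bulk of this budget is not chatbots or analytics, it is autonomous hardware.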

For a company like Google, the commercial calculus isn’t complicated. Pentagon AI contracts are now among the most valuable in the technology sector. The company that captures classified AI infrastructure relationships today is positioned for contracts measured in billions over the next decade. Google watched OpenAI and xAI move in. Anthropic moved out, and immediately paid the price of a “supply chain risk” designation. The choice Google made wasn’t made in a vacuum.

“We will not knowingly supply a product that endangers America’s soldiers and civilians.”

Dario Amodei, CEO, Anthropic — Anthropic Statement, February 26, 2026

The Pentagon’s response to Amodei’s refusal sent an equally clear message to every other AI company watching: holding out on “any lawful purpose” language costs you the contract. Google appears to have calculated that cost and decided it was too high.

Frequently Asked Questions

What did Google agree to in its Pentagon AI deal?

Google signed a classified amendment to its existing DoD contract granting the U.S. military API-level access to Gemini AI models on classified, air-gapped networks for “any lawful government purpose.” The deal was reported by The Information on April 28, 2026, and confirmed by Google to Reuters the same day.

Why can’t Google monitor how the Pentagon uses Gemini?

Classified DoD networks are air-gapped, meaning they have zero external internet connectivity. Once Gemini is deployed on those systems, Google has no visibility into queries, outputs, or decisions. The company can’t push updates, adjust safety filters remotely, or revoke access through any technical mechanism.

How many Google employees opposed the Pentagon deal?

Over 600 Google employees, including more than 20 directors and vice presidents and senior DeepMind researchers, signed an internal letter urging CEO Sundar Pichai to reject the classified Pentagon contract. The letter circulated on April 26-27, 2026, before the deal was publicly reported.

Why did Anthropic refuse the same Pentagon contract terms?

Anthropic CEO Dario Amodei rejected the Pentagon’s “any lawful purposes” language in February 2026, citing the risk of enabling fully autonomous weapons and mass surveillance without adequate human oversight. The DoD subsequently designated Anthropic a “supply chain risk” and began a six-month phase-out of the company from its AI contracts.

What is the DoD’s AI budget for FY2026?

The Pentagon’s FY2026 budget includes $13.4 billion specifically for AI, a sevenfold increase from the $1.8 billion allocated in FY2025. The largest single AI spending category is aerial drones and UAVs at $9.4 billion, followed by maritime autonomous platforms at $1.7 billion.

What happened with Google’s Project Maven in 2018?

Project Maven was a Pentagon AI contract for drone-targeting imagery analysis. After more than 4,000 Google employees signed a protest petition and at least 12 resigned, Google announced in June 2018 it would not renew the contract. The contract expired in March 2019, and Google published AI principles pledging it would not develop weapons AI.

Which other AI companies have Pentagon classified contracts?

OpenAI won a standalone $200 million DoD contract in June 2025. xAI’s Grok was announced for integration into classified and unclassified DoD systems in January 2026. Google’s classified Gemini deal was confirmed in April 2026. Anthropic is being phased out of DoD contracts after refusing classified terms.

What is GenAI.mil and how does it relate to the new deal?

GenAI.mil is a DoD platform launched in December 2025 that gives approximately 3 million military and civilian personnel access to Gemini for handling sensitive but unclassified data. The April 2026 classified amendment extends this relationship to fully classified networks, the next step DoD officials had explicitly signaled they intended to pursue.

What Comes Next

Google’s classified Pentagon deal doesn’t exist in isolation. It’s a data point in a much larger consolidation happening between the U.S. government and frontier AI companies, one that is moving faster than any public policy framework can keep up with. The FY2026 AI defense budget is seven times what it was a year ago. The proposed 2027 figures suggest that number keeps climbing. And the companies sitting across the table from the DoD are now, for all practical purposes, defense contractors, regardless of how their investor decks describe them.

The employee revolt at Google is real, and it matters as a signal. But the 2026 protest differs from 2018 in one critical way: the contract was already signed when the letter went out. In 2018, internal pressure changed a pending decision. In 2026, it arrived after the fact. That sequencing may not be coincidental. The company learned from Maven that employee opposition, if given enough runway, can alter outcomes. This time, the decision came first.

What the industry is watching now is whether Anthropic’s principled refusal proves to be commercially sustainable or quietly untenable. The “supply chain risk” designation is a serious penalty. If Anthropic eventually reverses course under revenue pressure, it signals that the “any lawful purpose” terms are effectively unavoidable for any AI company that wants to do serious business with the U.S. government. If Anthropic holds and the DoD comes back to the table with modified language, it means pushback works. That outcome seems less likely given current momentum, but it’s not zero.

Watch For
01 Congressional scrutiny of classified AI contracts: Senate Armed Services Committee hearings on autonomous weapons AI are expected in Q3 2026. Any testimony on the specific Gemini deployment parameters could force rare public disclosure of classified contract terms.
02 Anthropic’s six-month DoD phase-out window: The clock started in February 2026. By August 2026, Anthropic will either be fully out of Pentagon contracts or will have negotiated modified terms. That outcome sets a precedent for every future AI company that tries to hold a line on autonomous weapons language.
03 Google employee departures: In 2018, at least 12 engineers resigned over Project Maven. Watch whether any high-profile exits follow the 2026 letter, particularly among the 20+ directors and VPs who signed. Senior departures would carry significantly more reputational and operational weight than the 2018 precedent.
04 The $1.5 trillion 2027 defense budget proposal: If Congress advances anything close to the administration’s proposed figures, AI defense contracts will grow well beyond the current $13.4 billion line item. The companies locked into classified relationships now will be positioned to capture that expansion first.