ANALYTICAL BRIEF | REF: AIDF-0320-TF | SOURCE: OSINT / MIT TECHNOLOGY REVIEW / DEFENSE MEDIA / CORPORATE FILINGS
UPDATED 20 MAR 2026
THE FEED

CLASSIFIED MINDS

The Pentagon Wants to Train AI on State Secrets — And Every Major Tech Company Is Lining Up to Help

SUBJECT: Pentagon Classified AI Training Program
REGION: United States / Global
PRIORITY: HIGH
ANALYST: OPEN SOURCE
STATUS: ANALYSIS COMPLETE
MAR 2026 — MIT Technology Review reveals Pentagon plans to train AI models on classified military data — first time for LLM companies ///MAR 2026 — Pentagon live-demos Maven kill chain at Palantir AIPCon 9: "Left click, right click, left click" from detection to strike ///MAR 2026 — NGA director confirms Maven will transmit "100% machine-generated" intelligence to combatant commanders by June 2026 ///MAR 2026 — Google tells DeepMind staff it is "leaning more" into national security contracts — eight years after walking away from Maven ///MAR 2026 — Pentagon reaches agreements with OpenAI and xAI for classified network access; Anthropic blacklisted as supply chain risk ///JAN 2026 — Defense Secretary Hegseth issues memo directing Pentagon to become an "AI-first warfighting force" ///

BEYOND THE VEIL

WASHINGTON — 17 MARCH 2026 | MIT TECHNOLOGY REVIEW

Pentagon Plans to Let AI Companies Train Models on Classified Data

On March 17, MIT Technology Review revealed that the Pentagon is building secure environments where AI companies will train military-specific versions of their foundation models on classified data.[1] This is not the same as deploying AI in classified settings — which is already happening. This is something fundamentally different: the models themselves will learn from state secrets, embedding surveillance reports, battlefield assessments, and intelligence analyses directly into their neural weights.[1]

Currently, models like Claude operate in classified environments through Palantir's infrastructure — they can answer questions about classified documents, but they don't learn from them.[1] The shift to training means creating AI systems that have internalized classified knowledge. The models become, in effect, classified assets themselves — unable to be shared, open-sourced, or deployed outside secure facilities. A new category of artifact: the clearanced model.[1]

A U.S. defense official confirmed that training would occur in accredited secure data centers, with classified data paired with copies of commercial AI models.[1] The Pentagon intends to first evaluate performance on unclassified data like commercial satellite imagery before moving to classified sources.[1] But the trajectory is unmistakable: the Pentagon wants AI that doesn't just read intelligence — it wants AI that thinks like intelligence.

CURRENT STATE | READ-ONLY: AI models answer questions about classified data but don't learn from it[1]
NEXT STATE | TRAINING: Models will be trained directly on classified military data[1]
MAVEN MILESTONE | JUN 2026: 100% machine-generated intelligence to combatant commanders[2]

You can imagine a model that has access to some sort of sensitive human intelligence — like the name of an operative — leaking that information to a part of the Defense Department that isn't supposed to have access.

— Aalok Mehta, Wadhwani AI Center, Center for Strategic and International Studies[1]

FROM TOOL TO WEAPON

The distinction between a model that reads classified data and one trained on it is the difference between a translator and a spy. A translator sees a document, renders it useful, and forgets. A trained model absorbs the document into its architecture — the knowledge becomes inseparable from the system. Every future response is shaped by what it has consumed. The classified information doesn't sit in a database that can be revoked. It lives in billions of neural weights, diffused beyond extraction.

Three Phases of Military AI: Deploy → Train → Autonomize

This creates unprecedented security challenges. Aalok Mehta, formerly of Google and OpenAI's policy teams, told MIT Technology Review that the greatest risk is cross-compartment leakage — classified information surfacing to users who lack the appropriate clearance level.[1] If multiple military departments share a model trained on compartmented intelligence, the model itself becomes a classification violation waiting to happen. Traditional information security assumes data can be contained, access-controlled, and deleted. Neural networks don't work that way.

The Pentagon's plan reveals a deeper truth about where military AI is heading. Phase one (2024-2025) was deployment: putting commercial models into classified networks to answer questions. Phase two (2026) is integration: training models that think in the language of military intelligence. Phase three — unstated but inevitable — is autonomy: AI systems that generate intelligence, recommend actions, and eventually execute them, all informed by classified knowledge no human analyst could hold in their head simultaneously.

LEFT CLICK, RIGHT CLICK, LEFT CLICK

FINDING 01 // KILL CHAIN ON STAGE

Three days before the classified training revelation, the Pentagon's Chief Digital and AI Officer Cameron Stanley gave an extraordinary live demonstration at Palantir's AIPCon 9.[3] Using satellite imagery and multiple data feeds including flight-tracking systems, he showed how Maven's kill chain works: detect a target, narrow to a specific vehicle in a parking lot, identify the optimal weapon system — in this case a .50-caliber M2 Browning on a Stryker — and authorize the strike. "Left click, right click, left click," he said.[3] The entire process that once required eight or nine separate systems and took hours was compressed to seconds in a single interface.

FINDING 02 // THE HUNDRED PERCENT

Maven is approaching a threshold that has no precedent in military history. The NGA director stated in September 2025 that by June 2026, Maven will begin transmitting "100 percent machine-generated" intelligence to combatant commanders.[2] Not AI-assisted. Not AI-augmented. Fully machine-generated intelligence — analysis produced without human involvement, delivered directly to the officers who decide where bombs fall. The contract ceiling has been raised to $1.3 billion through 2029, up from $480 million, as demand from combatant commands surged.[2]

FINDING 03 // AGENTIC WARFARE

The RobotToday analysis framework describes Maven's current state as "agentic AI" — systems that reason across multiple steps, use tools, and make sub-decisions autonomously rather than following pre-programmed rules.[2] At the AFA Warfare Symposium in February 2026, a USAF colonel described watching Anduril's YFQ-44A autonomous combat aircraft switch between two entirely different AI mission software systems mid-flight — without landing, without a human touching the controls.[2] The question is no longer whether AI is in the kill chain. It's whether the kill chain still has room for humans.

GOOGLE COMES HOME

In 2018, 3,100 Google employees signed an open letter: "We believe that Google should not be in the business of war." Google dropped Project Maven. It became the defining moment of tech worker activism against military AI — the subject of our earlier brief on Maven's arc.[4] Eight years later, Google is back.

From "Not in the Business of War" to "Leaning More In" — Eight Years

At a January 2026 DeepMind town hall, VP of Global Affairs Tom Lue told staff the company was "leaning more" into national security work.[5] Google updated its AI principles in 2025 to remove a previous pledge not to use its technology for weapons or surveillance.[5] CEO Demis Hassabis — who once feared how Google might weaponize DeepMind — told employees he was "very comfortable" with the balance being struck.[5] This month, Google won a contract to deploy AI agents across the Pentagon's unclassified networks.[5]

The reversal is total. Google dropped Maven in 2018 because employees said war was a red line. In 2026, those red lines have been redrawn — or erased. Google's justification mirrors the industry consensus: the work involves "back office type operations" like summarizing information and extracting text from contracts.[5] But employees have raised concerns about mission creep — particularly given Google's tools supplied to the Israeli government. An open letter from Google and OpenAI employees in February 2026 called on their companies to set limits.[5] Their employers are moving in the opposite direction.

THE AI ARMS BAZAAR

The classified training program emerges against a backdrop of total realignment in the Pentagon's AI supply chain. Anthropic, the company that built Claude — the only frontier model operating in classified networks — has been designated a supply chain risk by Defense Secretary Hegseth and effectively blacklisted from all federal agencies.[6] President Trump ordered all agencies to stop using Anthropic products within six months.[6] Anthropic has sued the Pentagon, the Executive Office of the President, and multiple federal agencies.[6]

Into the vacuum rush the willing. OpenAI reached a classified network agreement with the Pentagon, declaring three "red lines" — no autonomous lethal weapons, no mass surveillance of Americans, no high-stakes automated decisions — then amended the terms days later under employee backlash.[7] xAI, Elon Musk's company, secured classified deployment alongside OpenAI.[1] Google expanded its Pentagon footprint with AI agents on unclassified networks.[5] Palantir continues to run Maven — the backbone of it all — with a $1.3 billion ceiling and growing.[2]

The message from the Pentagon is unambiguous: any AI company that wants defense revenue must accept defense terms. Anthropic tried to negotiate restrictions on mass surveillance and autonomous weapons. The Pentagon's response was not negotiation — it was replacement and blacklisting.[6] The DOJ argued in court that Anthropic "can't be trusted with warfighting systems."[8] The precedent is set: in the AI arms bazaar, the customer dictates terms. And the customer wants models trained on secrets.

THE CLASSIFIED MODEL PROBLEM

The Irreversibility Problem. Once an AI model is trained on classified data, that knowledge cannot be extracted or revoked. You cannot "untrain" a neural network. This means every classified model becomes a permanent security asset — and a permanent security liability. If a model trained on signals intelligence is compromised, there is no patch. The intelligence is distributed across billions of parameters with no delete button. Traditional cybersecurity assumes data can be isolated, encrypted, and destroyed. Trained neural weights break every one of those assumptions.
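The memorization mechanism behind the irreversibility problem can be illustrated with a deliberately tiny sketch. This is not any real military or frontier system: it is a toy character-level bigram model whose "weights" are just transition counts, with a hypothetical codeword standing in for classified content. Once training folds the codeword into the parameters, deleting the source document changes nothing; a short prompt reconstructs the rest from the weights alone.

```python
from collections import defaultdict

def train(corpus: str) -> dict:
    """Learn bigram transition counts -- the model's 'weights'."""
    weights = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        weights[a][b] += 1
    return weights

def generate(weights: dict, seed: str, length: int) -> str:
    """Greedy decoding: always follow the most frequent next character."""
    out = seed
    for _ in range(length):
        nxt = weights.get(out[-1])
        if not nxt:
            break
        out += max(nxt, key=nxt.get)
    return out

# Hypothetical 'secret' embedded in otherwise routine training text.
secret = "WXYZ17"
corpus = "routine report text " + secret + " routine report text"
weights = train(corpus)

# The original corpus is now gone; only the weights remain.
# Prompting with a fragment of the secret regenerates the rest.
leaked = generate(weights, "WX", 4)
print(leaked)  # -> WXYZ17
```

Real neural networks diffuse training data across billions of continuous parameters rather than a count table, which makes the memorized content far harder to locate, but the underlying point is the same: the data's influence lives in the parameters, and there is no per-record delete operation.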

The Proliferation Risk. Today, OpenAI and xAI operate in classified environments. Tomorrow, they train on classified data. The models they produce cannot leave secure facilities — but the techniques, architectures, and training methodologies can. Every engineer who works on a classified model carries transferable knowledge. Every algorithmic insight discovered during classified training can be independently rediscovered or reverse-engineered. The Pentagon is creating a new category of dual-use technology: training expertise that is classified in practice but unclassifiable in principle.

The Accountability Gap. Maven is already generating targeting intelligence with minimal human oversight. By June 2026, it aims for 100% machine-generated intelligence to combatant commanders. When that intelligence is produced by models trained on classified data — data no external auditor can access — who validates the output? The models become black boxes trained on material that oversight bodies cannot examine. Congressional committees with appropriate clearances can review intelligence reports. They cannot audit the weights of a neural network trained on those reports. The model becomes the analyst, and the analyst is unauditable.

HOW THIS CONNECTS

This brief sits at the convergence of three existing FieldBrief threads:

The Maven Thread: Epic Fury showed Maven in combat — generating 1,000 targets in 24 hours. Maven's Arc traced the system from Google's protest to Palantir's deployment. This brief reveals Maven's next evolution: from processing intelligence to generating it entirely autonomously, powered by models that have learned from the classified archive itself.

The Schism Thread: Anthropic refused unrestricted military use of Claude. The Pentagon blacklisted them. Now OpenAI, xAI, and Google are filling the gap — each with fewer restrictions than the last. The classified training program is the logical endpoint: AI companies don't just deploy for the military, they build bespoke military minds.

The Autonomy Thread: Ghost Spectrum showed Anduril's autonomous EW. Epic Fury showed AI-compressed kill chains. The classified training program completes the circuit: autonomous systems powered by AI that has internalized classified knowledge, generating intelligence and targeting recommendations without human analysis. The loop isn't just compressed — it's closing.

THE FEED

For sixty years, the boundary between intelligence and artificial intelligence was clear. Humans collected secrets. Machines processed data. Analysts connected dots. Commanders made decisions. The classified training program dissolves every one of those boundaries.

When you train a model on classified intelligence, the model doesn't just know the answers — it develops the instincts. Pattern recognition shaped by decades of surveillance reports. Threat assessment informed by real battlefield outcomes. Targeting logic refined by actual kill chain data. The resulting system doesn't think like a commercial chatbot with a security clearance. It thinks like an intelligence officer who has read every classified document ever produced — and never forgets any of them.

The Pentagon's January 2026 memo called for an "AI-first warfighting force." The classified training program is how you build one. Not by putting AI assistants in classified rooms, but by feeding the classified room into the AI. The model becomes the institution. The weights become the archive. The feed becomes the mind.

By June 2026 — three months from now — Maven intends to deliver 100% machine-generated intelligence to the commanders running the Iran campaign. Those machines may soon be trained on the very secrets they're tasked with protecting. The implications should keep you up at night. They probably won't keep the Pentagon up at all.

References & Source Material

[1] MIT Technology Review, "The Pentagon is planning for AI companies to train on classified data, defense official says," 17 March 2026.
[2] RobotToday, "Future Warfare: The Autonomy Spectrum," 15 March 2026; NGA director statement, September 2025; Maven contract ceiling via Breaking Defense, November 2025.
[3] Business Insider, "The Pentagon provided a rare inside look at Palantir's Project Maven," 17 March 2026; Palantir AIPCon 9 demonstration video.
[4] FieldBrief, "Maven's Arc: The Conscience Pipeline," 2026; Google employee open letter, April 2018.
[5] Business Insider, "Google told staff worried about Pentagon AI deals that the company is leaning more into national security contracts," 19 March 2026.
[6] Business Insider / The Guardian / TechPolicy.Press, multiple reports on the Anthropic-Pentagon dispute, February-March 2026.
[7] CBS News, "How the military is using AI in war," 18 March 2026; OpenAI classified network agreement and subsequent amendment.
[8] WIRED, "Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems," 17 March 2026.