ANALYTICAL BRIEF | REF: AIGV-0313-SC | SOURCE: OSINT / LEGAL FILINGS / CORPORATE STATEMENTS
UPDATED 13 MAR 2026
THE SCHISM

ANTHROPIC VS THE PENTAGON

From $200M Contract to Supply Chain Risk — Who Controls AI in Warfare?

SUBJECT AI Governance in Defense
REGION United States
PRIORITY CRITICAL
ANALYST OPEN SOURCE
STATUS ACTIVE LITIGATION
FEB 24 2026 — Hegseth gives Anthropic CEO Dario Amodei until Friday 5:01 PM to grant "unfettered access" or face consequences /// FEB 28 — Deadline passes; Anthropic holds red lines. Hegseth designates Anthropic a "supply chain risk" — same category as Huawei /// FEB 28 — OpenAI announces Pentagon deal for classified networks within hours of Anthropic blacklist /// MAR 5 — Trump: "I fired [Anthropic] like dogs, because they shouldn't have done that" /// MAR 9 — Anthropic files federal lawsuit alleging First Amendment violation and unlawful retaliation /// MAR 10 — Google announces Gemini AI agents for Pentagon /// MAR 12 — Palantir CEO confirms Claude still in use despite ban ///

THE ULTIMATUM

WASHINGTON, D.C. — 24 FEBRUARY 2026 | AXIOS EXCLUSIVE

Hegseth Gives Anthropic 72 Hours to Surrender AI Safeguards

On February 24, 2026, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to deliver an ultimatum: give the military unfettered access to Claude for "all lawful purposes" by 5:01 PM Friday, February 28 — or face designation as a "supply chain risk" and lose all government contracts.[1] The demand was specific: Anthropic must remove two contractual restrictions that limited military use of Claude — prohibitions on mass domestic surveillance and fully autonomous weapons that operate without human oversight.[2]

Amodei refused. In a public statement posted February 26, he wrote: "We cannot in good conscience agree to allow the Department of War to use our models in all lawful use cases" — citing the two red lines as non-negotiable.[3] He added: "I believe deeply in the existential importance of using AI to defend the United States and other democracies... However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."[4] The Friday deadline passed. Within hours, everything changed.

CONTRACT VALUE
$200M
Two-year prototype agreement signed July 2025, now terminated[5]
DESIGNATION
SUPPLY CHAIN RISK
Same category as Huawei and Kaspersky — usually reserved for foreign adversaries[6]
REPLACEMENT
OPENAI
Announced Pentagon deal same day as Anthropic blacklist[7]

Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that.

— President Donald Trump, Politico interview, 5 Mar 2026[8]

WHAT ANTHROPIC REFUSED

The dispute was not about whether Anthropic would serve the military — it was about two specific prohibitions the company refused to remove from its terms of service. Understanding the precision of these red lines is essential to understanding the magnitude of what followed.

The Pentagon Wanted Everything

Red Line 1: Mass Domestic Surveillance. Anthropic prohibited the use of Claude to enable surveillance of American citizens at scale. This restriction did not prevent the military from using Claude to monitor adversary communications, analyze foreign intelligence, or process battlefield data. It specifically targeted inward-facing surveillance — the use of AI to monitor the U.S. population.[3]

Red Line 2: Fully Autonomous Weapons. Anthropic prohibited the use of Claude in weapons systems that could select and engage targets without human oversight. This did not prevent AI-assisted targeting — the kind Maven Smart System already provides. It specifically prohibited the removal of the human from the decision to kill.[3]

Anthropic had already developed Claude Gov — a specialized version with relaxed restrictions for national security use. Claude Gov was "less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis."[9] The company had loosened nearly every other guardrail. It held firm on exactly two. Amodei later wrote that "to our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date."[4]

FROM DEADLINE TO BLACKLIST

FINDING 01 // THE DESIGNATION

When Anthropic did not capitulate by the 5:01 PM deadline on February 28, Hegseth formally designated the company a "supply chain risk" — a classification typically reserved for foreign adversarial firms like Chinese telecom Huawei or Russian cybersecurity firm Kaspersky.[6] The designation went beyond canceling the $200 million contract. It prohibited all Pentagon contractors, suppliers, and partners from using Anthropic products — creating a cascading ban that threatened to cut the company off from the entire defense industrial base.[10]

FINDING 02 // THE EXECUTIVE ORDER

Trump signed an executive order extending the ban across all federal agencies, calling Anthropic a "radical woke company" and declaring the government would not use AI models that impose "ideological restrictions."[11] The GSA removed Anthropic from the government's OneGov procurement agreement and the USAI.gov marketplace. Civilian agencies including HHS, NASA's Jet Propulsion Laboratory, and national laboratories were ordered to unwind all Anthropic-based solutions.[12]

FINDING 03 // THE OPENAI REPLACEMENT

Within hours of the Anthropic blacklist — on the same Friday evening — OpenAI announced it had reached a deal with the Pentagon to deploy ChatGPT in classified military systems.[7] The timing was surgical. OpenAI CEO Sam Altman had agreed to terms Amodei had rejected: full access for all lawful uses, no carve-outs for surveillance or autonomy.[13] Amodei later accused OpenAI of spreading "straight up lies" about the deal, writing in a leaked internal message: "The main reason [OpenAI] accepted and we did not is that they cared about placating employees, and we actually cared about preventing abuses."[14]

PROJECT MAVEN'S JOURNEY

EIGHT YEARS

From Google Protest to Claude Kill Chains

The full arc of Project Maven tells the story of Silicon Valley's capitulation to the Pentagon — and the single company that tried to hold the line.

2018: Google walks away. Over 3,000 Google employees signed an open letter demanding the company withdraw from Project Maven, the Pentagon's AI-driven drone surveillance program. Google complied, published AI principles barring weapons development, and became the symbol of tech resistance to military AI.[15]

2024: Google walks back. Google quietly reversed its position and began pursuing military contracts. By March 2026, Google announced Gemini AI agents for Pentagon use — starting on unclassified networks, with discussions underway for classified and top-secret access.[16] Eight pre-built agents would automate tasks for the military's 3 million staff. The company that once refused to help analyze drone footage now wants to put autonomous AI agents throughout the Department of War.

2024-2025: Anthropic fills the gap. Through its Palantir partnership and $200M contract, Anthropic became the only AI company with models in classified Pentagon networks.[17] Claude was integrated into Maven Smart System — the direct descendant of the Project Maven that Google had abandoned. Anthropic had accomplished what Google refused and then went even further: its technology was used in the classified operation to capture Maduro[18] and would become central to the Iran targeting campaign.[19]

2026: Anthropic holds the line — and pays the price. When the Pentagon demanded the last two guardrails be removed, Anthropic said no. And in the space of a single Friday afternoon, it went from the Pentagon's preferred AI partner to a supply chain risk equivalent to a Chinese telecommunications company.

The Palmer Luckey Position

The defense tech establishment lined up against Anthropic with striking unanimity.

THE PLAYERS AND THEIR POSITIONS — MARCH 2026
────────────────────────────────────────────────────────────────
PRO-PENTAGON (No restrictions on military AI use):

OPENAI (Sam Altman)
→ Signed Pentagon deal same day as Anthropic ban
→ Full access, no carve-outs for surveillance/autonomy
→ "We believe democratic nations should lead in AI"

PALANTIR (Alex Karp)
→ Maven Smart System operator / Claude integrator
→ "Once the war starts, we're not interested in debating"
→ Still using Claude despite ban (as of Mar 13)

ANDURIL (Palmer Luckey)
→ $6B+ in global defense contracts
→ "You cannot decide who you want to sell and not"
→ "There is no moral high ground in using inferior technology"

GOOGLE (Sundar Pichai)
→ Reversed 2018 Maven protest position
→ Gemini agents for Pentagon announced Mar 10
→ Full circle from "don't be evil" to DoW agents

xAI (Elon Musk)
→ Grok offered for military use
→ Aligned with Trump administration
────────────────────────────────────────────────────────────────
ANTHROPIC (Dario Amodei) — ALONE:
→ "We cannot in good conscience agree"
→ Two red lines: no mass surveillance, no autonomous weapons
→ $200M contract terminated
→ Designated supply chain risk
→ Banned from all federal agencies
→ Filed federal lawsuit Mar 9
→ Claude still running in kill chains via Palantir
────────────────────────────────────────────────────────────────
THE IRONY: Anthropic built Claude Gov with fewer guardrails than any competitor offered, was the FIRST in classified networks, and is being punished for the TWO things it wouldn't do — while its model continues running the war.

Palmer Luckey, Anduril's founder, articulated the Pentagon-aligned position most bluntly: "You cannot decide who you want to sell and not when it comes to [defense]."[20] Luckey has built a $6 billion defense empire on the premise that Silicon Valley's moral qualms about military technology are both naive and dangerous. His argument: AI will be used in warfare regardless — the only question is whether American AI or Chinese AI sets the terms. "There is no moral high ground in using inferior technology," he told Fox News.[21]

    THE LAWSUIT

    On March 9, 2026, Anthropic filed suit in federal court against the Department of War and related federal agencies, alleging the supply chain risk designation was unlawful retaliation.

    HIGH PROBABILITY

    First Amendment Claim

    Anthropic argues the supply chain risk designation punishes the company for exercising its right to set terms of service — a form of compelled speech. The government is effectively demanding a private company remove contractual language it deems ideologically inconvenient. If this precedent holds, any company negotiating with the federal government can be blacklisted for refusing to agree to the government's terms.[6]

    HIGH PROBABILITY

    Exceeds Statutory Authority

    The "supply chain risk" designation under 10 USC §4819 was designed for foreign adversary-linked entities that pose security threats through their products — Huawei backdoors, Kaspersky data exfiltration. Anthropic is an American company that refused a contractual term. The designation has never been used against a domestic firm for a commercial dispute.[6]

    HIGH PROBABILITY

    Cascading Commercial Damage

    The designation doesn't just kill the $200M contract — it forces every Pentagon contractor to stop using Claude. Defense tech companies are already dropping Anthropic.[22] The total commercial impact could reach "hundreds of millions of dollars" beyond the original contract, per Anthropic's filing. The company is seeking an emergency stay while the case proceeds.[23]

    MEDIUM PROBABILITY

    The Palantir Paradox

    On March 12 — three days after Anthropic filed suit — Palantir CEO Alex Karp confirmed at AIPCON that Palantir is still using Claude in Maven Smart System, including for Iran operations.[24] The Pentagon's most important AI targeting platform is still running the blacklisted company's model. Karp said Palantir plans to add other LLMs, but the immediate operational reality is that the military designated Anthropic a supply chain risk while simultaneously depending on its technology to fight a war.

    CONTRACT
    $200M (Jul 2025)
    ULTIMATUM
    72 Hours (Feb 24)
    BLACKLIST
    Supply Chain Risk
    LAWSUIT
    Federal Court (Mar 9)

    FROM PARTNERSHIP TO WARFARE

    JUN 2018
    Google withdraws from Project Maven after 3,000+ employees protest. Publishes AI principles barring weapons and surveillance. Sets the template for "responsible AI" in defense.[15]
    NOV 2024
    Anthropic partners with Palantir to deploy Claude on classified government networks via AWS — making Claude the first frontier AI model brought into classified environments.[17]
    JUL 2025
    Anthropic awarded $200 million, two-year DoD prototype contract through CDAO. Press release touts "accelerated mission impact across U.S. defense workflows with partners like Palantir."[5]
    JAN 2026
    Operation Absolute Resolve captures Maduro. WSJ reports Claude was used during the classified operation — first confirmed military combat use of a commercial AI model.[18]
    MID-FEB 2026
    Pentagon begins demanding Anthropic remove all use restrictions for "all lawful purposes." Negotiations intensify over the two red lines: mass surveillance and autonomous weapons.[1]
    24 FEB 2026
    Hegseth meets Amodei. Delivers ultimatum: comply by 5:01 PM Friday or face "supply chain risk" designation. 72-hour clock starts.[1]
    26 FEB 2026
    Amodei publishes statement: "We cannot in good conscience agree." Confirms red lines on mass surveillance and autonomous weapons. States Anthropic "continued good-faith conversations" but will not budge.[3][4]
    28 FEB 2026
    Three events in rapid succession: (1) Deadline passes — Anthropic holds firm. (2) Hegseth designates Anthropic a supply chain risk, banning it from all Pentagon contractors. (3) OpenAI announces classified Pentagon deal the same evening.[6][7] Operation Epic Fury launches against Iran the same day — with Claude still integrated in Maven.
    4-5 MAR 2026
    Anthropic receives formal letter confirming supply chain risk designation. Trump tells Politico: "I fired [Anthropic] like dogs." Calls them a "radical woke company." Executive order extends ban to all federal agencies.[8][11]
    9 MAR 2026
    Anthropic files federal lawsuit alleging First Amendment violations and excess of statutory authority, and seeking an emergency stay.[6]
    10 MAR 2026
    Google announces Gemini AI agents for Pentagon — eight years after abandoning Maven. Full circle.[16]
    12 MAR 2026
    Palantir CEO Karp confirms at AIPCON: "Our products are integrated with Anthropic" and Claude is still running in Maven. Plans to add other LLMs.[24] Anthropic seeks appeals court stay.

    BOTTOM LINE

    The Anthropic-Pentagon schism is not a commercial dispute. It is a constitutional confrontation over who controls the most powerful technology ever created — and whether a private company can set moral limits on how the government uses it.

    The precedent being set is extraordinary. A company that developed Claude Gov with loosened restrictions, deployed to classified networks, supported the Maduro capture, and powered the Iran targeting campaign is being treated identically to Chinese telecommunications firms — because it would not agree to two specific terms: unlimited domestic surveillance capability and fully autonomous lethal decision-making. Every other AI company watching this dispute has received a clear message: comply fully or be destroyed. OpenAI got it immediately. Google got it within days.

    The deepest irony is that Claude is still running the war. As of March 13, 2026, Palantir's Maven Smart System — the AI engine behind 5,500+ strikes in Iran — continues to use Anthropic's model.[24] The Pentagon designated the company that powers its kill chain as a national security threat. It banned the very technology it depends on to fight its war. And it did so not because Claude failed to perform — but because Anthropic's CEO said there were two things he wouldn't let it do.

    Project Maven's full arc — from Google's 2018 walkout to Claude's 2026 kill chains — is an eight-year story of Silicon Valley learning that you can either set the terms of military AI or the military will set them for you. Anthropic is the latest company to learn this lesson. It is unlikely to be the last to pay for it.

    The main reason [OpenAI] accepted and we did not is that they cared about placating employees, and we actually cared about preventing abuses.

    — Dario Amodei, leaked internal message, reported by TechCrunch, 4 Mar 2026[14]

    References & Source Material

    [1] "Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards," Axios, 24 Feb 2026
    [2] "'Incoherent': Hegseth's Anthropic ultimatum confounds AI policymakers," Politico, 26 Feb 2026
    [3] "Anthropic CEO Amodei says Pentagon's threats 'do not change our position' on AI," CNBC, 26 Feb 2026
    [4] "Statement from Dario Amodei on our discussions with the Department of War," Anthropic, 26 Feb 2026
    [5] "Anthropic awarded $200M DOD agreement for AI capabilities," Anthropic, Jul 2025
    [6] "Anthropic sues Trump administration over Pentagon blacklist," CNBC, 9 Mar 2026
    [7] "OpenAI sweeps in to snag Pentagon contract after Anthropic labeled supply chain risk," Fortune, 28 Feb 2026
    [8] "Trump Rages at AI Giant He Fired 'Like Dogs' After They Rejected Pentagon Demands," Mediaite, 5 Mar 2026
    [9] "Anthropic-Pentagon battle shows how big tech has reversed course on AI and war," The Guardian, 13 Mar 2026
    [10] "It's official: The Pentagon has labeled Anthropic a supply-chain risk," TechCrunch, 5 Mar 2026
    [11] "Trump orders US government to cut ties with Anthropic; Hegseth declares supply chain risk," ABC News, 28 Feb 2026
    [12] "U.S. Government Bans Use of Anthropic Products: What This Means for Government Contractors," Taft Law, Mar 2026
    [13] "OpenAI's 'compromise' with the Pentagon is what Anthropic feared," MIT Technology Review, 2 Mar 2026
    [14] "Anthropic CEO Dario Amodei calls OpenAI's messaging around military deal 'straight up lies,'" TechCrunch, 4 Mar 2026
    [15] "Google workers protest Pentagon AI contract with open letter," New York Times, 4 Apr 2018
    [16] "Google to Provide Pentagon With AI Agents for Unclassified Work," Bloomberg, 10 Mar 2026
    [17] "Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations," Palantir, Nov 2024
    [18] "US used Anthropic's Claude during the Venezuela raid, WSJ reports," Reuters, 13 Feb 2026
    [19] "Centcom commander touts use of AI in fight against Iran during Operation Epic Fury," DefenseScoop, 11 Mar 2026
    [20] "Palmer Luckey to Silicon Valley: You cannot decide who you want to sell and not," Times of India, Mar 2026
    [21] "Anduril's Palmer Luckey makes an ethical case for using AI in war," Business Insider, Dec 2025
    [22] "Defense tech companies are dropping Claude after Pentagon's Anthropic blacklist," CNBC, 4 Mar 2026
    [23] "Anthropic seeks appeals court stay of Pentagon supply-chain risk designation," Reuters, 12 Mar 2026
    [24] "Palantir is still using Anthropic's Claude as Pentagon blacklist plays out, CEO Karp says," CNBC, 12 Mar 2026
    [25] "The Dissonance of Anthropic CEO Dario Amodei," The Atlantic, 12 Mar 2026
    [26] "Anthropic was the Pentagon's choice for AI. Now it's banned and experts are worried," CNBC, 9 Mar 2026
    [27] "Anthropic-Pentagon Feud Over AI Technology Is a Bad Sign," Foreign Policy, 25 Feb 2026
    [28] "Where things stand with the Department of War," Anthropic, 5 Mar 2026
    [29] "Anthropic just sued the Pentagon. The outcome could reshape the AI race with China," Fortune, 12 Mar 2026