A Private Equity Billionaire's Memo, a Model-Agnostic Kill Chain, and the Week That Made Maven Immortal
On March 9, 2026, Deputy Secretary of Defense Steve Feinberg sent a letter to senior Pentagon leaders and U.S. military commanders with a single directive: Palantir's Maven Smart System would become an official program of record — the bureaucratic designation that transforms a pilot project into permanent military infrastructure with its own budget line, Congressional oversight, and institutional inertia.[1]
The language was unambiguous. Embedding Maven would provide warfighters "with the latest tools necessary to detect, deter, and dominate our adversaries in all domains."[1] The memo ordered oversight of Maven transferred from the National Geospatial-Intelligence Agency to the Pentagon's Chief Digital and Artificial Intelligence Office within 30 days, with future contracting handled by the Army.[1] The decision takes effect by the close of FY2026 in September.
In Pentagon procurement language, "program of record" is the difference between a line item that can be cut and a system that requires Congressional action to kill. Maven just became the AI equivalent of the F-35 — too embedded to cancel, too expensive to replace, too politically connected to question.
"It is imperative that we invest now and with focus to deepen the integration of artificial intelligence across the Joint Force and establish AI-enabled decision-making as the cornerstone of our strategy," the memo declared.[1]
Steve Feinberg is not a career defense official. He is the co-founder and former CEO of Cerberus Capital Management, one of the largest private equity firms in the world. Cerberus has invested in defense for decades — most notably owning DynCorp, a major military contractor, from 2010 to 2020.[5] DynCorp paid millions to settle multiple suits alleging it had defrauded the U.S. government during that period.[6]
Senator Elizabeth Warren flagged the conflict directly: her staff identified at least seven Cerberus portfolio companies with $15.9 billion in Defense Department business.[4] Feinberg's Goldman Sachs-managed divestiture of his Cerberus stake was announced the same month as his nomination — a separation that critics call paper-thin given the revolving-door dynamics of defense PE.[7]
None of this prevented confirmation. Feinberg was sworn in as the 36th Deputy Secretary of Defense on March 17, 2025.[3] One year later, he signed the memo that locked Palantir — a company valued at $360 billion, whose stock has doubled in 12 months partly on the strength of its Pentagon relationship — into permanent military infrastructure.[1]
The question isn't whether Feinberg has a conflict of interest. The question is whether the distinction between private equity, defense contracting, and military AI governance still exists at all.
The timing of the Maven POR designation is inseparable from the Anthropic purge. On February 28, 2026, the Pentagon designated Anthropic a "supply chain risk" and set a 180-day deadline to remove Claude from all Defense systems.[8] By March 19, Pentagon CTO Emil Michael declared himself "pretty confident" the transition would succeed — because the kill chain was designed to survive the loss of any single AI model.[9]
"We've already deployed OpenAI in the last few weeks, and we're going to deploy the others. We have deployed Gemini," Michael said at the McAleese Defense Programs conference. "The workflows are very similar. So the disruption is, we think, minimal."[9] He then mapped the competitive landscape: Claude excels at coding. OpenAI's Codex is rising. xAI has real-time advantages. Google Gemini has strengths in robotics via YouTube and Nest Cam training data.[9]
Read that again. The Pentagon's chief technology officer just described frontier AI models as interchangeable components in a weapons system — differentiated by specialty but fundamentally swappable. This is the architectural insight that makes Maven's POR designation so consequential: Maven is not Claude. Maven is not Gemini. Maven is the platform through which any model becomes a military tool. The models are ammunition. Maven is the gun.
Industry experts disagree with the "minimal disruption" assessment. RunSafe Security CEO Joe Saunders warned that "each model requires validation, and in many cases, re-authorization before it can be used in operational settings" across classification levels.[9] But the strategic direction is clear: the Pentagon is building toward AI fungibility, where no single company's ethical stance can interrupt the kill chain.
On March 17, 2026 — eight days after Feinberg signed the Maven memo — Senator Elissa Slotkin (D-MI) introduced the AI Guardrails Act.[10] The bill targets three areas: banning autonomous lethal strikes without human authorization, prohibiting AI-powered domestic mass surveillance, and ensuring nuclear launch authority remains solely with the Commander-in-Chief.
The irony is structural. These are almost exactly the same three red lines that got Anthropic designated a supply chain risk. Slotkin's bill would codify into law the restrictions that the Pentagon punished Anthropic for insisting upon contractually.[10] The company that tried to enforce these limits through contract terms was blacklisted. A senator introducing the same limits through legislation is simply ignored.
The bill's prospects are grim. In a Republican-controlled Congress that has shown deference to the Trump administration's defense prerogatives, legislation restricting Pentagon AI use has near-zero chance of passage. But the bill serves a different purpose: it creates the legislative record. When the inevitable AI targeting failure occurs — a school misidentified as a military target, a surveillance overreach exposed — the record will show that Congress was warned and chose not to act.
Slotkin, a former CIA analyst and Pentagon official, framed it with characteristic precision: "Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon."[10] She is correct. And she is too late.
Program-of-record status changes four things at once.

Budget: Maven will appear as its own line item in the defense budget, visible to Congress and protected by the institutional momentum of any spending program that generates jobs and contracts across multiple states.[1]

Incumbency: Palantir moves from competing for renewals to holding a position that would require a formal competition to displace. The $1B+ contract ceiling becomes a floor.[2]

Mandate: Feinberg's memo explicitly orders Maven integration "across the Joint Force" — Army, Navy, Air Force, Marines, Space Force. Every service branch must adopt it.[1]

Permanence: Once a program of record is established, it cannot be quietly defunded by an incoming administration. Termination requires Congressional action — and Congress rarely kills programs that create contractor jobs in their districts.
MIT Technology Review revealed the Pentagon is now planning for AI companies to train models on classified military data — a capability that Maven's POR status makes inevitable. The AI doesn't just analyze intelligence. It learns from it.[11]
The Guardian's March 15 investigation laid bare the accountability vacuum that Maven's institutionalization creates. During Operation Epic Fury, U.S. forces struck over 1,000 targets in the first 24 hours — a volume that was only possible because AI generated, prioritized, and ranked the target list at speeds no human team could replicate.[12]
The investigation detailed the strike on the Shajareh Tayyebeh elementary school in Minab, Iran, which killed at least 168 people, mostly girls aged 7 to 12. Munitions experts described the targeting as "incredibly accurate" — each building individually struck. The problem was intelligence: the school had been separated from an adjacent Revolutionary Guard base and repurposed for civilian use nearly a decade earlier.[12]
The exact role of AI in the Minab strike has not been officially confirmed. What is known is that the targeting infrastructure has no reliable mechanism for flagging when underlying intelligence is a decade out of date.[12] Maven's POR designation doesn't fix this. It makes it permanent. The system that may have contributed to the deadliest targeting failure of the Iran campaign is now the system that cannot be removed.
UN expert panels have warned that AI weapons targeting without human intervention raises ethical, legal, and security risks.[1] Palantir says its software does not make lethal decisions and humans remain responsible for selecting targets.[1] But when AI generates 1,000 targets in 24 hours and a human must approve each one, the human is not deciding. The human is rubber-stamping.
In the span of one week, the Pentagon accomplished three things that make AI warfare structurally irreversible:
First: Maven became a program of record — a budgeted, Congressionally-protected, cross-service mandate that no future administration can quietly kill. The AI kill chain is now as institutionally permanent as aircraft carriers and nuclear submarines.
Second: the model-swap strategy proved the platform is model-agnostic. Anthropic drew ethical red lines and was purged. OpenAI, Google, and xAI stepped in within days. The lesson for any future AI company: your model is replaceable; the platform is not. Ethical objections are a supply chain risk — literally.
Third: the Guardrails Act demonstrated that legislative oversight is structurally outpaced. Congress is debating restrictions on capabilities that are already operational in an active war zone. By the time any bill could pass, Maven will have been a program of record for years, with contractors in every Congressional district.
The hidden story is the man who signed the memo. Steve Feinberg built his fortune at Cerberus Capital Management, a private equity firm whose portfolio companies held $15.9 billion in Defense Department business. He is now the official who locked a $360 billion defense AI company into permanent government infrastructure. The revolving door between private equity, defense contracting, and Pentagon leadership didn't just stay open. It was removed from its hinges.
Maven's arc — from Google drone-image experiment to classified AI kill chain to permanent program of record — is complete. The question was never whether AI warfare would become permanent. The question was who would profit.
As Pentagon leadership framed it: "One of the problems is that we had one primary provider on classified networks. That doesn't work for the Department of War."