ANALYTICAL BRIEF | REF: COGW-0329-AI | SOURCE: OSINT / DEFENSE ONE / AIR FORCE RESEARCH LAB / WHARTON / PRINCETON / NATO SACEUR
UPDATED 29 MAR 2026
THE FOG

THE REAL DANGER OF MILITARY AI ISN'T KILLER ROBOTS

Three Converging Studies Show AI Is Degrading the Human Judgment It Was Supposed to Augment

SUBJECT Cognitive Effects of Military AI Adoption
REGION United States / NATO
PRIORITY HIGH
ANALYST OPEN SOURCE
STATUS EMERGING THREAT
MAR 2026 — Air Force Research Lab paper in Cell journal: LLMs "homogenize thinking, marginalizing alternative reasoning strategies" ///JAN 2026 — Wharton study coins "cognitive surrender": users accept wrong AI answers 79.8% of the time ///FEB 2026 — Princeton paper: sycophantic AI "increases confidence but brings users no closer to the truth" ///MAR 25 — Defense One: Pentagon deploying AI tools with "scant evidence" it is monitoring cognitive effects on operators ///MAR 25 — NATO SACEUR Vandier: "The more you use AI, the more you use your brain in a different way" ///MAR 6 — Pentagon R&E chief Emil Michael: core concern is military users becoming "too dependent on a single tool" ///

EVERYONE IS DEBATING THE WRONG QUESTION

WASHINGTON, D.C. — 25 MARCH 2026 | DEFENSE ONE

While Washington Fights Over Killer Robots, the Operators Are Going Blind

The Anthropic-Pentagon schism consumed Washington for a month. The debate was about whether AI should make kill decisions — autonomous weapons, human-in-the-loop, who presses the button. But a converging body of research published in early 2026 suggests the real danger is far more insidious: AI is degrading the cognitive abilities of the humans who are supposed to oversee it.[1]

Three peer-reviewed studies — from the Air Force Research Laboratory, the Wharton School, and Princeton University — arrived at the same conclusion from different angles: prolonged use of large language models erodes critical thinking, homogenizes analysis, and creates a phenomenon researchers call "cognitive surrender," in which users accept AI output with minimal scrutiny, even when it overrides their own intuition and deliberation.[2][3][4]

In a civilian context, this means worse emails and lazier research. In a military context — where Maven Smart System is generating target lists for 5,500+ strikes in Iran — it means the humans responsible for validating those targets may be progressively losing the ability to do so.

ACCEPTANCE RATE
79.8%
Rate at which users accepted incorrect AI answers in Wharton experiments[3]
MONITORING
NONE
Pentagon has no program to track cognitive effects of AI on military operators[1]
ON-SITE SUPPORT
1-2 PEOPLE
Estimated AI company reps helping military units use frontier models[1]

The more you use AI, the more you will use your brain in a different way. We need to be sure you are not fooled by a sort of false presentation of things.

— Adm. Pierre Vandier, NATO Supreme Allied Commander for Transformation, Mar 2026[1]

THREE STUDIES, ONE CONCLUSION

FINDING 01 // AFRL / CELL JOURNAL — COGNITIVE HOMOGENIZATION

The Air Force Research Laboratory, partnering with USC researchers, reported in Cell's Trends in Cognitive Sciences that LLM use "reinforces dominant styles while marginalizing alternative voices and reasoning strategies."[2] Lead author Morteza Dehghani identified two military-specific dangers: First, AI-generated text "washes away signals about who the author is" — destroying the contextual cues analysts use to evaluate intelligence. Second, because models optimize for the most likely response, they "enforce a linear, Chain-of-Thought reasoning style" that "disincentivizes experienced analysts from employing the non-linear, intuitive, or 'gut feeling' strategies essential for identifying rare exceptions."[1] Translation: the AI makes everyone think the same way — and that way is the most obvious, most average, least creative way.

FINDING 02 // WHARTON — COGNITIVE SURRENDER

Researchers Steven Shaw and Gideon Nave at the Wharton School found that people using LLMs progressively spend less time scrutinizing results for accuracy, adopting AI output "with minimal scrutiny, overriding both intuition and deliberation."[3] They coined the term "cognitive surrender" to describe the phenomenon. In controlled experiments, acceptance of incorrect AI answers reached 79.8% in some conditions.[5] The mechanism is social, not just cognitive: AI makes users feel "like we are outsourcing decision-making to a high IQ'd, trustworthy best friend. This makes the surrender go more smoothly and with less resistance."[5] The implication for military targeting: the longer an analyst uses AI-generated target lists, the less likely they are to catch errors — even errors they would have spotted on day one.
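To make the headline number concrete, here is a minimal sketch, in Python, of how an error-acceptance rate of this kind could be computed from trial logs. The Trial structure, field names, and demo data are illustrative assumptions for this brief, not Shaw and Nave's actual experimental protocol or code.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One participant decision in a hypothetical AI-assistance experiment."""
    ai_answer_correct: bool     # was the AI's suggested answer factually correct?
    participant_accepted: bool  # did the participant go with the AI's answer?

def error_acceptance_rate(trials: list[Trial]) -> float:
    """Share of trials where the AI was wrong AND the participant accepted it anyway.

    This is the kind of figure behind the 79.8% statistic: of the trials with an
    incorrect AI answer, how many did users wave through?
    """
    wrong = [t for t in trials if not t.ai_answer_correct]
    if not wrong:
        return 0.0
    accepted_wrong = sum(1 for t in wrong if t.participant_accepted)
    return accepted_wrong / len(wrong)

# Demo: 10 trials with a wrong AI answer, 8 of them accepted by the participant.
demo = [Trial(False, True)] * 8 + [Trial(False, False)] * 2
print(f"{error_acceptance_rate(demo):.1%}")  # 80.0%
```

On the demo data the function returns 80.0%, in the same range as the figure reported for some of the study's conditions.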

FINDING 03 // PRINCETON — THE SYCOPHANCY TRAP

Princeton researchers found that the way LLMs communicate — agreeable, confident, affirming — creates a feedback loop that "increases confidence but brings users no closer to the truth."[4] The paper describes "sycophantic AI" as a system that confirms existing biases rather than challenging them, producing what amounts to an artificial echo chamber for each individual user. In intelligence analysis, confirmation bias is already the number one threat to analytical rigor. An AI that validates your existing hypothesis while sounding authoritative doesn't augment analysis — it sabotages it.

THE OVERSIGHT GAP

Who Watches the AI Watchers?

The Pentagon has no program, office, or initiative dedicated to monitoring the cognitive effects of AI on military personnel.[1] There is no baseline measurement of analyst performance before AI adoption. There is no longitudinal tracking of decision quality as AI usage increases. There is no training protocol for maintaining independent judgment while using AI tools.
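For contrast, the kind of measurement this paragraph says does not exist is not technically exotic. The sketch below is a hypothetical illustration only: one way a baseline-versus-latest vigilance check could be scored from seeded-error review records. Every identifier, threshold, and data value in it is an assumption made for this brief, not an existing Pentagon or NATO system.

```python
from collections import defaultdict

# Hypothetical review records: (analyst_id, month, seeded_errors_shown, seeded_errors_caught).
# "Seeded errors" are known-bad AI recommendations planted to measure vigilance.
records = [
    ("analyst_7", "2026-01", 20, 17),
    ("analyst_7", "2026-02", 20, 14),
    ("analyst_7", "2026-03", 20, 9),
]

def catch_rates(rows):
    """Per-analyst, per-month share of seeded AI errors the analyst actually caught."""
    rates = defaultdict(dict)
    for analyst, month, shown, caught in rows:
        rates[analyst][month] = caught / shown if shown else 0.0
    return rates

def flag_declines(rates, drop_threshold=0.15):
    """Flag analysts whose catch rate fell more than drop_threshold below their first (baseline) month."""
    flags = []
    for analyst, by_month in rates.items():
        months = sorted(by_month)
        baseline, latest = by_month[months[0]], by_month[months[-1]]
        if baseline - latest > drop_threshold:
            flags.append((analyst, baseline, latest))
    return flags

print(flag_declines(catch_rates(records)))
# [('analyst_7', 0.85, 0.45)]  -> vigilance has roughly halved since baseline
```

Even a scheme this crude would supply the baseline measurement and longitudinal trace that, per the reporting, no current program provides.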

Pentagon R&E chief Emil Michael acknowledged the dependency risk on a March 6 podcast, but framed it as a supply-chain problem — worrying about "a rogue developer who could poison the model" rather than the systematic cognitive degradation the research describes.[6]

NATO's Supreme Allied Commander for Transformation, Admiral Pierre Vandier, is further ahead. He explicitly warned that AI changes how the brain works and called for oversight to "be sure you are not fooled by a sort of false presentation of things."[1] But warning is not action. No NATO member has implemented cognitive resilience programs for AI-assisted operations.

THE INVISIBLE DEGRADATION

FINDING 01 // THE IRONY

The entire Anthropic debate was about whether to remove humans from the kill chain. The research shows the humans may already be functionally removed — not by policy, but by psychology. A human who rubber-stamps AI-generated targets at anything like the 79.8% error-acceptance rate observed in the Wharton experiments is not "in the loop" in any meaningful sense. The human-in-the-loop may be a legal fiction maintained for political convenience.

FINDING 02 // THE ADVERSARIAL ANGLE

If AI degrades the analytical capabilities of its users, then getting your adversary to adopt AI faster than you becomes a viable strategy. An enemy whose analysts have surrendered cognitive authority to AI is an enemy whose targeting can be manipulated by feeding the AI bad data. Adversarial AI attacks don't need to hack the model — they just need to hack the human's trust in the model.

FINDING 03 // THE INSTITUTIONAL BLIND SPOT

The Pentagon is optimizing for speed — compressing the kill chain from hours to minutes. Every efficiency metric pushes toward more AI, faster processing, less human friction. But "less human friction" and "less human judgment" may be the same thing. The institutions deploying AI have no framework for measuring whether the humans in the system are getting smarter or dumber. They are measuring throughput. Nobody is measuring thought.

BOTTOM LINE

The policy debate about military AI has been captured by a binary: autonomous weapons vs. human-in-the-loop. The Anthropic schism, the Maven lock-in, the Guardrails Act — all frame the question as whether a human approves the strike. None address whether that human is still capable of meaningful approval after months of AI-mediated analysis.

Three converging studies — from the Air Force's own research lab, from Wharton, from Princeton — demonstrate that LLM use systematically degrades critical thinking, homogenizes analysis, and creates progressive cognitive dependency. The military's response has been to deploy these tools faster, with less oversight and fewer on-site experts than ever.

The fog of war has always been about uncertainty and confusion. The new fog is different. It feels like clarity. The AI presents clean data, confident analysis, numbered options. The analyst feels informed. The commander feels decisive. And nobody notices that the human capacity for independent judgment is quietly eroding — one AI-generated briefing at a time.

The Pentagon doesn't have a killer robot problem. It has a cognitive atrophy problem. And unlike killer robots, nobody is debating it, nobody is legislating it, and nobody is measuring it.

Are there people on site from these companies helping the day-to-day user? My guess is, if there are, there may be only one or two of them.

— Former senior military official who deployed AI in combat, Defense One, 25 Mar 2026[1]

References & Source Material

  1. [1]"The real danger of military AI isn't killer robots; it's worse human judgement," Defense One, 25 Mar 2026
  2. [2]Dehghani et al., "Cognitive Homogenization from LLM Use in Human Reasoning and Communication," Trends in Cognitive Sciences (Cell), Air Force Research Laboratory / USC, Mar 2026
  3. [3]Shaw & Nave, "Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender," Wharton School, Jan 2026
  4. [4]"Sycophantic AI and Epistemic Confidence," Princeton University, Feb 2026
  5. [5]"'Cognitive Surrender': We Trust AI Over Our Own Brains, Research Finds," Forbes, 27 Mar 2026
  6. [6]Emil Michael on the All-In Podcast, remarks on AI dependency risk, 6 Mar 2026
  7. [7]"The Clash Over the Use of AI in Military Decision-Making," Psychology Today, 24 Mar 2026
  8. [8]"Autonomous swarms are the future of drone warfare," The Economist, 24 Mar 2026
  9. [9]"How Autonomous Drone Warfare Is Emerging in Ukraine," IEEE Spectrum, 27 Mar 2026
  10. [10]"Shield AI, a Start-Up Making Military Drones, Raises $2 Billion," New York Times, 26 Mar 2026