
The Fog of AI War

In Ukraine, Gaza, and Iran, AI warfare has come to dominate, with barely any oversight or accountability. Europe must lead the charge on the responsible use of new military technologies.

By: Carnegie - Strategic Europe - Posted: Thursday, April 16, 2026


by Raluca Csernatoni

An irreducible uncertainty haunts every battlefield: the fog of war. And for two centuries, military innovation has promised to lift that fog. Artificial Intelligence (AI) was supposed to be the technology that finally did so, replacing human guesswork with machine precision and processing oceans of data at speeds that would render uncertainty obsolete.

There are genuine advantages. AI can protect soldiers by deploying machines into kill zones first. It can process terabytes of sensor data faster than any human and can coordinate multilayered air defense systems at the speed required to stop ballistic missiles. Any serious European policy on military AI must begin by acknowledging that these capabilities save lives and that adversaries will field them regardless of what democracies decide.

But acknowledging the advantages is not the same as ignoring what happens when speed, attrition, and scale become organizing principles of warfare. U.S. President Donald Trump’s dispute with Anthropic, which insisted on guardrails preventing its models from being used for fully autonomous weapons and mass domestic surveillance, ended with the Pentagon designating the company a supply chain risk.

The message from the world’s largest military power is that normative constraints on military AI are obstacles to innovation rather than preconditions for lawful use.

This vacuum creates both a responsibility and an opportunity for Europe. The EU has begun to build the institutional architecture for a defense technological base through BraveTech EU, a roadmap for EU defense industry transformation, and a 2026 action plan on drone and counterdrone security. Yet these initiatives must do more than replicate the logic of speed maximization in AI-powered defense innovation. Europe’s comparative advantage lies in its capacity to embed legal accountability, meaningful human judgment, and deliberative processes into systems before they are fielded—not after harm has occurred.

In practice, this demands a willingness to accept that some targeting cycles must remain slow. It demands a doctrine that treats the deliberative pause—the time required for a human to genuinely evaluate and override an algorithm’s recommendation—as a strategic asset rather than an operational liability. And it demands enforceable red lines at the European level, embedded across the entire AI lifecycle: from innovation and development, through defense procurement and export controls, to operational doctrine. The formal shell of human judgment must not become a legal alibi for algorithmic killing.

But European decisionmakers should do this without stifling the experimentation needed to sustain innovation and secure marginal advantage on the battlefield. Three theaters of war offer the EU vital lessons: Ukraine, Gaza, and Iran.

First, AI retrained on classified battlefield data has increased Ukrainian drone engagement rates, turning a defensive war of attrition into something more survivable. Second, Israel’s integrated command systems now coordinate interceptions in real time across American Terminal High-Altitude Area Defense (THAAD) batteries, Aegis ships, and Israeli platforms, determining which system will intercept which incoming missile and preventing the waste of interceptors during Iranian barrages. And third, the American Low-cost Uncrewed Combat Attack System (LUCAS) drone, deployed for the first time during Operation Epic Fury in February 2026, uses vision-based object recognition rather than static satellite coordinates. This improves precision and reduces civilian harm compared with the Iranian Shahed drone from which it was reverse-engineered.

But on these battlefields, AI has also produced something more dangerous than classical uncertainty: a fog generated by information rather than by its absence. Whereas the classical fog of war blinded commanders with too little knowledge, the fog of AI war blinds them with too much. Algorithmic scores, probabilistic targeting lists, and recommendations arrive faster than anyone can evaluate them. The result is manufactured clarity that masks a deeper opacity of the AI black box.

The Israel Defense Forces’ use of Lavender, an AI system that assigned probabilistic scores to tens of thousands of Palestinian men based on aggregated surveillance data, laid bare the core of this problem. Lavender reportedly struggled to distinguish legitimate military targets from civilians, and the review process was too thin to filter out wrongful targets reliably. Reports have noted that human analysts spent an average of twenty seconds reviewing each recommendation, largely to confirm that a target was male. A parallel system, Gospel, generated two hundred infrastructure-targeting recommendations in under two weeks, whereas human officers might previously have produced fifty in a year. In practice, speed overwhelmed scrutiny.

The 2026 Iran conflict has deepened this pattern. It marked the United States’ first combat deployment of LUCAS attack drones, alongside large language models reportedly used to process satellite imagery, assess signals intelligence, and run battle simulations. The problem was not only the drone itself. It was the wider AI-enabled compression of sensing, analysis, and targeting. Humans had less time to question machine-generated outputs.

Across all these theaters, the same dynamic recurs: AI accelerates the targeting cycle to a tempo at which meaningful human oversight is too often procedurally present but substantively empty. Meaningful judgment would mean reviewing target identification, assessing proportionality, and deciding whether to strike. Now, the human remains present—but without real time to contest the machine.

This creates an accountability problem that classical uncertainty never posed. The old fog of war frustrated commanders but left the chain of responsibility intact. AI fragments agency among developers, data engineers, procurement officials, operators, and commanding officers, until responsibility disappears. The human remains in the loop, like a signature on a document: present and legally traceable but functionally irrelevant to the content of the decision.

International governance efforts are lagging and lack enforcement teeth. AI systems are already radically transforming warfare, and not all of those changes are harmful. But the fog of AI war—the manufactured certainty that substitutes machine output for human judgment—will not clear itself. It must be governed. Europe, given the current abdication of American leadership on responsible military AI, may be uniquely positioned to lead that effort. The question is whether it will act before algorithmic targeting becomes the unquestioned norm of armed conflict.


*Published first on Carnegie - Strategic Europe
