
In the rapidly shifting landscape of 21st-century conflict, the traditional “fog of war” is being replaced by the cold, calculated precision of the algorithm. The ongoing hostilities involving American and Israeli forces—stretching from the decimated urban corridors of Gaza to the high-stakes theater of the 2026 war on Iran—mark a definitive paradigm shift. We have entered the era of “decision compression,” where the interval between detecting a target and pulling the trigger is no longer measured in hours, but in seconds.
The Factory of Targets: Gaza as a Proving Ground
For the past three years, Gaza has served as a grim laboratory for AI-assisted targeting. Central to the Israel Defense Forces' (IDF) strategy are two systems: Habsora (The Gospel) and Lavender. While military officials initially touted these as tools for “surgical precision,” investigative reports and human rights data paint a picture of an “automated target factory.”
Habsora is designed to identify buildings and infrastructure linked to militant activity. By processing vast amounts of surveillance data, it generates targeting recommendations at a rate far exceeding human capacity: at times around 100 targets per day, where a human analyst might identify roughly 50 in an entire year. Annualized, that is an output roughly 700 times greater than a single analyst's.
Complementing this is Lavender, an AI-powered database that, at its peak, reportedly listed 37,000 Palestinian men suspected of links to Hamas or Palestinian Islamic Jihad. According to intelligence sources, the system was given broad leeway during the initial weeks of the conflict. The “human in the loop” often acted as little more than a “rubber stamp,” spending as little as 20 seconds to authorize a strike. This shift from human-led intelligence to machine-driven “kill lists” has been a primary driver of the widespread destruction of Gaza's residential blocks.
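What a more substantive review gate could look like is easy to state in code. The Python sketch below is a conceptual illustration only, not a description of any fielded system: it assumes a hypothetical policy requiring a minimum human deliberation time and a minimum model confidence before a recommendation clears review. Every name and threshold is invented.

```python
from dataclasses import dataclass

MIN_REVIEW_SECONDS = 300   # hypothetical policy: at least five minutes of human review
MIN_CONFIDENCE = 0.95      # hypothetical floor on the model's own confidence score

@dataclass
class StrikeRecommendation:
    target_id: str
    model_confidence: float  # score the targeting model assigns to its own match
    review_seconds: float    # time the human analyst actually spent on the file

def authorise(rec: StrikeRecommendation) -> bool:
    """Clear a recommendation only if human review was substantive
    (not a 20-second rubber stamp) and the model is highly confident."""
    return (rec.review_seconds >= MIN_REVIEW_SECONDS
            and rec.model_confidence >= MIN_CONFIDENCE)

# A 20-second review fails regardless of the model's score:
print(authorise(StrikeRecommendation("T-1", 0.99, 20.0)))  # False
```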
Hunting Commanders: The “Where’s Daddy?” Protocol
The use of AI has extended beyond infrastructure to the granular tracking of individuals. To target Hamas commanders, the IDF utilized a system colloquially known as “Where’s Daddy?”.
This software cross-references telecommunications data, social media connections, and real-time movement tracking to identify when a target enters a specific location—frequently their family home. The ethical controversy of this system lies in its timing: rather than striking commanders while they were in active military tunnels or command centres, the AI was often programmed to alert the military when the target returned home to sleep or visit family.
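Technical details of “Where's Daddy?” have not been published. Purely as a conceptual illustration of the geofencing logic such a system implies, the Python sketch below flags when a tracked device enters a watched location. Every identifier, coordinate, and radius in it is hypothetical.

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class LocationPing:
    device_id: str  # hypothetical identifier for a tracked handset
    lat: float
    lon: float

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two coordinates."""
    r = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Hypothetical watchlist: device id -> (latitude, longitude, alert radius in metres)
WATCHLIST = {"device-001": (31.500, 34.470, 150.0)}

def check_ping(ping: LocationPing) -> bool:
    """Return True when a watched device enters its geofenced location."""
    entry = WATCHLIST.get(ping.device_id)
    if entry is None:
        return False
    lat, lon, radius = entry
    return haversine_m(ping.lat, ping.lon, lat, lon) <= radius

print(check_ping(LocationPing("device-001", 31.5001, 34.4701)))  # True
```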
In these instances, the military reportedly accepted “collateral damage” ratios of dozens of civilians for a single mid-level commander. The AI’s ability to process facial recognition and “pattern of life” data allowed for a relentless pursuit, but at the cost of erasing the distinction between combatants and non-combatants in densely populated areas.
The Minab Tragedy: The Ghost of Outdated Data
As the conflict expanded into Iran on February 28, 2026, the risks of automated warfare were laid bare in the southern city of Minab. A US-led strike, intended for the Islamic Revolutionary Guard Corps (IRGC) Naval Headquarters, instead destroyed the Shajareh Tayyebeh girls' primary school.
The tragedy, which claimed the lives of over 165 schoolgirls aged 7 to 12, has been traced to a catastrophic failure in intelligence synchronization. Preliminary investigations suggest the strike relied on an outdated satellite image from 2013. In that year, the school building was indeed part of the IRGC campus. By 2016, however, the school had been relocated outside the headquarters' perimeter.
While the US military's Maven Smart System and other AI tools are designed to streamline targeting, they remain only as good as the data fed into them. In Minab, the “precision” of a Tomahawk missile was rendered lethal by an obsolete coordinate. The school, decorated with colourful murals and hosting hundreds of children during the morning session, was “triple-tapped” by missiles that followed an outdated digital ghost, proving that even the most advanced AI cannot compensate for human negligence in data verification.
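The failure mode here is mundane: no gate rejected imagery more than a decade old. As a minimal sketch of the kind of freshness check whose absence the Minab investigation points to, the Python below refuses to release coordinates derived from stale source data. The one-year policy, the function name, and the exact 2013 capture date are all assumptions for illustration.

```python
from datetime import date

MAX_IMAGERY_AGE_DAYS = 365  # hypothetical policy: reject source imagery older than a year

def validate_imagery(imagery_date: date, strike_date: date) -> None:
    """Refuse to release coordinates derived from stale imagery."""
    age_days = (strike_date - imagery_date).days
    if age_days > MAX_IMAGERY_AGE_DAYS:
        raise ValueError(
            f"Source imagery is {age_days} days old; re-collection and "
            "human review required before any coordinate is released."
        )

# The 2013 image behind the Minab strike (capture date assumed here)
# would have failed this gate by more than a decade:
try:
    validate_imagery(date(2013, 6, 1), date(2026, 2, 28))
except ValueError as err:
    print(err)
```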
The Most Lethal Weapon: Why AI Dominates the War on Iran
The war on Iran has solidified AI as the most lethal weapon in the modern arsenal, not because it creates better bombs, but because it enables scale and speed previously thought impossible.
* Swarm Intelligence: In March 2026, Israel deployed AI-driven drone swarms over Tehran. Launched from “mother platforms,” these micro-drones use facial recognition to identify and neutralize specific security personnel in real time.
* Predictive Threat Analysis: US Cyber Command and Israeli Unit 8200 have utilized AI to paralyze Iranian command-and-control networks, timing kinetic strikes to coincide with the precise moment Iranian air defenses are digitally blinded.
* Logistical Superiority: Beyond the frontline, AI manages the “kill chain” by autonomously rerouting supplies and selecting the most efficient munitions for 1,000 targets in a single night, a task that would take a human staff weeks to plan (a toy version of this matching problem is sketched after this list).
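To make that matching problem concrete, here is a deliberately toy greedy matcher in Python. The munition names, costs, and target classes are illustrative inventions; real weaponeering weighs far more constraints (range, fuzing, collateral estimates) than a cost lookup.

```python
# Toy greedy munition-to-target matcher. All figures are illustrative inventions.
MUNITIONS = {
    "small_diameter_bomb": {"cost": 40_000,    "classes": {"vehicle", "building"}},
    "cruise_missile":      {"cost": 1_400_000, "classes": {"bunker", "building"}},
}

def cheapest_munition(target_class: str) -> str | None:
    """Pick the lowest-cost munition able to service a target class."""
    options = [(spec["cost"], name)
               for name, spec in MUNITIONS.items()
               if target_class in spec["classes"]]
    return min(options)[1] if options else None

# Assign munitions to a night's worth of 1,000 hypothetical building targets.
plan = {f"target-{i:04d}": cheapest_munition("building") for i in range(1000)}
```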
The Moral Calculus
The integration of AI into US-Israeli military operations has created a new reality: a war that moves at the “speed of thought.” However, as the ruins of Minab and the levelled neighbourhoods of Gaza show, speed is not a substitute for judgment. When we offload the moral responsibility of life and death to an algorithm, we risk creating a world where accountability vanishes into the code.
The challenge for the international community is no longer just about controlling the proliferation of missiles, but about governing the data that directs them. Without transparent oversight and a firm “human-in-the-loop” requirement, the “precision” of AI will continue to be a hollow promise for the civilians caught in the crosshairs.
~Hasnain Naqvi is a former member of the history faculty at St. Xavier’s College, Mumbai