The artificial intelligence (AI) did not start the war. It did not author the decades of sanctions, covert operations, and proxy conflicts that preceded it. It did not sign the executive order. It did not authorise the Tomahawk. And yet, in the first hours of Operation Epic Fury on February 28th 2026, an AI system woven into the Pentagon’s targeting infrastructure helped process and prioritise approximately 1,000 strike coordinates — more than double the air power deployed across the entire opening phase of the 2003 invasion of Iraq. One of those coordinates pointed at a girls’ primary school in Minab, southern Iran, where between 168 and 180 people were killed, most of them children aged seven to 12. The role of AI in warfare has been the subject of fierce controversy ever since.
This is the moment Trump dropped a missile on an elementary school in Minab, Iran, obliterating it in an instant — murdering 168 little girls and 24 teachers.
You can hear the screams in the background…
Horrifying pic.twitter.com/SvpE09jKMz
— sarah (@sahouraxo) March 8, 2026
The world asked whether the machine was responsible. It was asking the wrong question.
The rollout of AI in warfare was supposed to be a carefully managed, ethically supervised process with human oversight at every stage. Instead, the Pentagon deployed an AI-assisted targeting system at machine speed, across thousands of targets, drawing on target databases that had not been updated since at least 2016, with the human review window compressed to seconds per decision. The oversight framework was always subordinate to the speed imperative. The children of Shajareh Tayyebeh Elementary School paid the price for that order of priorities.
Kill chain architecture
For years, Project Maven existed in the institutional shadows of the Pentagon — a pilot programme, perpetually being tested and refined rather than operationally deployed. That changed on February 28th 2026.
On that morning, the Maven Smart System — built by Palantir Technologies under a $1.3bn Pentagon contract — processed satellite imagery, drone feeds, radar data, and signals intelligence in near real time, collapsing what previously required a staff of roughly 2,000 analysts during the Iraq War into approximately 20 military personnel operating a single interface. Running inside that system was Claude, a large language model developed by Anthropic, a San Francisco-based AI safety company. Claude’s role, according to multiple sources familiar with the system, was not to select targets. It was to synthesise and summarise vast quantities of unstructured intelligence data — essentially performing the analytical labour of hundreds of junior intelligence officers in seconds.
That distinction — between “decision support” and “targeting” — has become the definitional battleground of the entire debate. NBC News separately cited two sources with direct knowledge of the system, who said the Palantir AI platform identifies potential targets in Iran while Claude helps analysts process and sort that intelligence. A source familiar with Anthropic’s government work told NBC that Claude “does not directly offer targeting recommendations,” framing it as a decision support system. Whether that distinction holds in practice when a system produces prioritised strike coordinates at machine speed is a question neither Palantir nor the Pentagon has publicly answered.
What is not disputed is the outcome. The Maven Smart System, using Claude, semi-autonomously ranked targets by strategic importance and drafted automated legal justifications for each strike, according to reporting by Military Times. In the first 24 hours of Operation Epic Fury, it generated hundreds of strike coordinates — a battlefield tempo without precedent in the history of modern warfare.
The building that was struck in Minab — the Shajareh Tayyebeh primary school — had been catalogued in a Defence Intelligence Agency database as a military facility associated with an adjacent Islamic Revolutionary Guard Corps (IRGC) naval compound. The wall separating the school from that compound, visible in satellite imagery, was constructed between 2013 and 2016. The database had not been updated. Within days of the strike, news organisations such as The New York Times verified that the school had been a functioning civilian institution: visible in satellite imagery, active on social media, and with its own website.
The machine did not create that database entry. Humans did, and humans failed to update it for at least a decade.
Speed doctrine’s consequences
The intellectual architecture behind Maven predates the war in Iran by several years, and its logic has always been the same: compress the kill chain.
The benchmark that drove Maven’s development was the 2003 invasion of Iraq, where roughly 2,000 people worked the targeting process for the entire war. During Scarlet Dragon — the multi-year live-fire exercise used to test and refine the system — 20 soldiers using Maven handled the same volume of work. By 2024, the system’s stated operational goal was 1,000 targeting decisions per hour, or one decision every 3.6 seconds.
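The arithmetic behind those figures takes only a few lines to lay out. The sketch below (a few lines of Python, using nothing beyond the numbers cited in this article) is purely illustrative; it is not drawn from any Pentagon document or reporting.

iraq_2003_targeting_staff = 2_000   # analysts cited for the 2003 targeting process
scarlet_dragon_operators = 20       # soldiers reported to have run Maven in the exercise
staff_reduction = iraq_2003_targeting_staff / scarlet_dragon_operators

decisions_per_hour = 1_000          # Maven's stated operational goal by 2024
seconds_per_decision = 3_600 / decisions_per_hour

print(f"Staffing compression: {staff_reduction:.0f}x fewer people")                     # 100x
print(f"Review window at the stated tempo: {seconds_per_decision:.1f} s per decision")  # 3.6 s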
That figure is not a boast. It is a confession. No human cognitive process — no legal review, no ethical weighing, no cross-referencing against a no-strike list — operates reliably at 3.6 seconds per cycle. The speed doctrine does not augment human judgement. It structurally undermines the conditions under which human judgement is possible.
This is not a theoretical concern. The Department of War (the erstwhile Department of Defense) has its own data indicating that Maven correctly identifies objects roughly 60% of the time, compared with 84% for human analysts. Yet the operational tempo it enables has been embraced precisely because it is faster, cheaper, and requires fewer people than conventional intelligence work. The accuracy trade-off was an acceptable cost — until 168 children were killed when their school was attacked.
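What that accuracy gap implies at the scale of the opening strikes can be estimated in the same back-of-envelope way. The short Python extrapolation below assumes the roughly 1,000 coordinates cited for the first hours and the accuracy rates quoted above; the resulting error counts are illustrative, not reported figures.

targets_processed = 1_000      # approximate strike coordinates in the opening hours
maven_accuracy = 0.60          # reported object-identification accuracy for Maven
human_accuracy = 0.84          # reported accuracy for human analysts on the same task

maven_errors = targets_processed * (1 - maven_accuracy)
human_errors = targets_processed * (1 - human_accuracy)

print(f"Expected misidentifications at machine tempo: ~{maven_errors:.0f}")    # ~400
print(f"Expected misidentifications by human analysts: ~{human_errors:.0f}")   # ~160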
The more immediate worry, according to Semafor’s reporting on the aftermath, is whether human reviewers can keep pace with an AI that compiles targets faster than ever before, or whether they will succumb to pressure to approve decisions without the time needed to vet them for potential danger to civilians. That is not speculation. It is a structural feature of the system as deployed.
The Anthropic dispute adds a further dimension. The company — which developed Claude explicitly under an “AI safety” mission and built ethical constraints into the model — reportedly refused Pentagon demands to remove guardrails against autonomous weapons and mass domestic surveillance. The Pentagon blacklisted Anthropic as a “supply chain risk”. Just hours before the Iran bombing campaign began, Donald Trump announced that US government agencies would be barred from further using Anthropic’s technology. Defence officials continued to use it regardless, reportedly because Claude was the only frontier-scale AI model then operating within certain classified Pentagon networks.
"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
— The White House (@WhiteHouse) February 27, 2026
The Leftwing nut jobs at Anthropic… pic.twitter.com/aIEx92nnyx
The company that tried to maintain ethical limits on its technology was penalised for doing so. The war proceeded without interruption.
The Pentagon has labeled AI company Anthropic a supply chain risk after a clash with the Trump administration over the military's use of artificial intelligence. pic.twitter.com/8Kl0IcIwqJ
— The Associated Press (AP) (@apnews633) March 27, 2026
Accountability gap: Who answers for the algorithm?
The question of legal accountability in AI-assisted warfare is not new. It has been debated in academic papers, UN committees, and NGO briefings since at least 2013. Operation Epic Fury has, finally, forced it into the political mainstream.
A group of 121 House Democrats, led by Representatives Sara Jacobs, Jason Crow, and Yassamin Ansari, sent a formal letter to Defence Secretary Pete Hegseth demanding answers, including whether artificial intelligence was involved in selecting the Shajareh Tayyebeh school as a target and, if so, whether a human verified the accuracy of that target. Mr Hegseth has not publicly responded to the letter’s substantive questions. The Pentagon has refused to confirm or deny the AI system’s role in the specific strike.
That silence is itself significant. In any conventional strike involving a documented civilian casualty at this scale, a chain of accountability would — in principle — extend from the weapon to the operator to the commander to the legal review to the political authorisation. Maven’s architecture compresses that chain into a software interface. Selecting targets, in the words of one source cited by Semafor, “functions like the life-and-death version of placing a takeout order.” When something goes wrong in a takeout order, the app bears no liability. The analogy, intended to reassure, instead reveals the problem precisely.
Craig Jones, an expert on modern warfare, has said that AI technology helps militaries speed up the “kill chain” — reducing a massive human workload of tens of thousands of hours into seconds and minutes — and automating human-made targeting decisions in ways that open up profound legal, ethical, and political questions. Those questions have no credible answers in the current international legal framework. The Geneva Conventions were written for a world in which a human being could be held responsible for each decision to fire. That world may already be obsolete.
The West, which authored those conventions and has long claimed their mantle, is the party presently operating furthest outside their spirit.
Geopolitics: The asymmetry that matters
The framing that has dominated Western coverage of AI in warfare treats it as a bilateral or multilateral problem — a race between the United States, China, and Russia in which the risks are shared equally. That framing is false.
No comparable operational deployment of AI-assisted targeting systems by Russia in Ukraine, by China in any conflict, or by Iran in any theatre has been documented with the same weight of public evidence. That is not to say such systems do not exist, or are not under development. Russia has deployed autonomous drone systems in Ukraine, and China’s defence AI investment is substantial. But the specific architecture of Maven — the integration of a commercial large language model into a live, operational kill chain generating 1,000 targets in 24 hours — has no confirmed equivalent outside the US military and its allies.
The Western narrative frequently invokes the spectre of Chinese or Russian AI warfare to justify its own escalation. Those allegations remain largely unsubstantiated in operational terms. What is substantiated, in granular detail, by Pentagon documents, congressional testimony, court proceedings, and investigative reporting by Bloomberg, Wired, the Washington Post, and NPR, is the American system and its consequences.
The strategic asymmetry is not only technological. It is financial, doctrinal, and institutional. Palantir took over and built Maven into a full targeting infrastructure after Google declined in 2018 to renew the original Project Maven contract, following protests in which more than 4,000 employees signed a petition opposing the building of AI for the Pentagon’s targeting systems. The moral objection was registered. The contract was transferred. The system was built anyway. This is not a story about a technology that escaped human control. It is a story about a technology that was built, funded, tested over years, deployed at scale, and defended by officials who understood precisely what it was for.
Following the Minab school strike, Deputy Secretary of Defence Steve Feinberg signed a memo formalising AI’s role in military decision-making, designating Maven an official programme of record and pushing adoption across all US military branches by September 2026. The massacre of 168 children in a primary school did not delay the programme. It did not produce a pause, a review, or a suspension. It produced a formalisation.
The machine isn’t the author. The author is still in office
The debate about AI in warfare has been structured, partly by design, as a debate about the machine. Was Claude responsible? Was Maven reliable? Did the algorithm make an error? These are not unimportant questions. But they perform a critical misdirection: they place the moral and political weight of an illegal strike on a software package, rather than on the human institutions that authorised, funded, and deployed the system, and that have refused to pause it after it contributed to mass civilian death.
The AI did not start the war. It did not build the database that classified a primary school as a military target for a decade. It did not ignore satellite imagery showing 170 girls being dropped off by their parents on a weekday morning. It did not sign the memo formalising the programme five days after the bodies were counted. Humans did all of those things. Humans who have names. Humans who hold offices. Humans who, in a functioning system of international law, would be required to answer for them.
What AI has done is remove the friction — the time, the people, the cognitive labour — that once made mass targeted killing operationally difficult. It has made industrialised war cheaper, faster, and more legible to the bureaucratic mind. And it has done so in the hands of the same power that lectures the world on rules-based order, humanitarian law, and the sanctity of civilian life.
The machine did not pull the trigger. But it made pulling the trigger easier than ever before. And the people who built it, funded it, deployed it, and refused to pause it when children died — they are not algorithms. They are accountable. They should be treated as such.
East Post is an independent geopolitical analysis portal covering South Asia and global power dynamics for international audiences. Views expressed are analytical and do not constitute endorsement of any state or non-state actor.
