In January 2020, a US drone strike killed Iranian General Qasem Soleimani on a road outside Baghdad International Airport. The decision to fire was made by humans. The authorization came from the President of the United States. The accountability, however uncomfortable, was clear.
Now imagine the same strike — same road, same target — but the decision to fire was made by an algorithm. The system identified the target, assessed the threat level, calculated collateral damage probability, and executed the strike within milliseconds. No human pulled the trigger. No human had time to.
Who do you hold responsible?
This is not a hypothetical designed for philosophy classrooms. It is a question that military engineers, legal scholars, and weapons developers are actively avoiding answering — because the answer, honestly given, would complicate the development of systems they are racing to build.
The Machines Already Making Decisions
The debate over autonomous weapons is often framed as a future problem. It is not. Autonomous and semi-autonomous weapons systems are already deployed, already operational, and already making decisions that result in death.
Israel’s Harpy drone — in service since the 1990s — is designed to autonomously detect, track, and destroy radar systems without human intervention. South Korea has deployed autonomous gun systems along the DMZ capable of identifying and engaging targets without a human operator. The Phalanx close-in weapon system, used by the US Navy and dozens of allied navies, can autonomously engage incoming missiles and aircraft when operating in automatic mode.
These systems exist on a spectrum. Some require human authorization for each engagement. Others operate autonomously within defined parameters. The line between “human in the loop” and “human out of the loop” is not sharp — it is a gradient that military developers are quietly moving along in one direction.
The Legal Black Hole
International humanitarian law — the body of rules governing armed conflict — was built around a fundamental assumption: that a human being makes the decision to use lethal force and can therefore be held accountable for that decision.
The Geneva Conventions, the Rome Statute establishing the International Criminal Court, the laws of war developed over centuries — all of them assume a chain of human command and human responsibility that autonomous weapons systems structurally disrupt.
When an autonomous system kills a civilian it was not supposed to kill, the accountability question produces what legal scholars call a “responsibility gap.” The programmer who wrote the targeting algorithm did not intend to kill that specific person. The commander who deployed the system did not make the specific decision to engage. The manufacturer who built the weapon did not authorize its use in that context. And the algorithm itself cannot be prosecuted.
The result is that an act that would constitute a war crime if committed by a human soldier — the deliberate or reckless killing of a civilian — may produce no accountability at all when committed by an autonomous system. This is not a bug in the legal framework. It is an absence where a framework has not yet been built.
The Gaza Precedent
In 2023 and 2024, investigative reporting by +972 Magazine and Local Call revealed that the Israeli military had deployed AI-assisted targeting systems — referred to internally as “Lavender” and “Where’s Daddy?” — that generated target lists for airstrikes at a scale and speed that human review could not meaningfully process.
The Lavender system reportedly generated a list of approximately 37,000 Palestinians identified as potential targets based on algorithmic analysis of surveillance data. Human operators were described as spending as little as 20 seconds reviewing each target before authorizing a strike. The system was reportedly calibrated to accept a significant number of civilian casualties per target killed.
The Israeli military disputed aspects of the reporting. But the core revelation — that AI was being used to generate target lists at industrial scale, with human oversight reduced to a rubber stamp — was not effectively refuted.
This is what the accountability gap looks like in practice. Not a robot deciding to start a war. A system that processes targeting decisions faster than human conscience can keep up with, and an institutional structure that found this acceptable.
The Meaningful Human Control Standard
The international community’s best attempt at a standard for autonomous weapons is the concept of “meaningful human control” — the requirement that a human being exercise genuine oversight over decisions to use lethal force.
The problem is that “meaningful” is undefined. Different states interpret it differently, and the states most actively developing autonomous weapons systems have a strong incentive to interpret it as permissively as possible.
A human who approves 300 AI-generated targeting decisions per hour, roughly twelve seconds per decision, is technically “in the loop.” Whether that constitutes meaningful control is a different question, one that the militaries operating these systems prefer not to ask too loudly.
The Campaign to Stop Killer Robots, a coalition of NGOs and experts, has been pushing for a legally binding international treaty on autonomous weapons since 2013. Twelve years later, no such treaty exists. Negotiations at the UN have produced discussion papers and expressions of concern. They have not produced binding commitments from the states that matter most.
The Competitive Trap
Why are states reluctant to agree to restrictions on autonomous weapons? The answer is the same logic that drives every arms race: the fear that restraint will be unilateral.
If the United States agrees to keep humans meaningfully in the loop on all lethal decisions, and China does not, then in a future conflict where decisions need to be made at machine speed, the United States loses the engagement before its human operators have finished reviewing the targeting data.
This logic is compelling in a narrow military sense and catastrophic in a broader human sense. It produces a race to the bottom on human oversight — each state reducing its accountability standards to match the least accountable competitor — with the end point being autonomous systems making lethal decisions at scale with no meaningful human control anywhere in the chain.
The competitive trap is real. But it is worth noting that it has been escaped before. Chemical weapons, biological weapons, blinding laser weapons — all represent categories where the international community agreed that certain capabilities were too dangerous to deploy regardless of competitive pressure. The agreements are imperfect and not universally observed. But they exist, and they have constrained behavior.
The question for autonomous weapons is whether the political will to build equivalent constraints can be assembled before the technology is so deeply embedded in military doctrine that removing it becomes unthinkable.
The Manufacturer’s Silence
The companies building autonomous weapons systems — defense contractors, AI developers, robotics firms — have largely avoided the accountability question by framing their products as tools whose use is determined by their customers.
This position is legally defensible and morally insufficient. A manufacturer that builds a system specifically designed to make lethal decisions autonomously, markets it on that basis, and sells it to militaries that will deploy it in conflict zones is not a neutral tool provider. It is a participant in the decisions the system makes.
Some technology companies have pushed back. Google employees protested the company’s involvement in Project Maven, a Pentagon AI program for analyzing drone footage, and the pressure led Google to announce in 2018 that it would not renew the contract. Microsoft employees raised similar concerns about a military contract for augmented reality headsets. These internal pressures have produced some constraints on some companies in some contexts.
They have not produced industry-wide standards or legal accountability frameworks. The development continues, largely in the space between what is technically possible and what has been legally prohibited — which is to say, with almost no constraints at all.
The Question That Needs an Answer
When an autonomous weapon kills the wrong person — and it will, because all weapons systems produce errors, and autonomous systems produce errors at scale — someone needs to be accountable.
Not to satisfy an abstract legal principle. But because accountability is the mechanism through which militaries learn from mistakes, through which victims receive acknowledgment, and through which the incentives to build more careful systems are created. Without accountability, the incentive structure points in one direction: faster, more autonomous, less oversight.
The answer to “who is responsible when AI kills” is not technically complicated. It requires political decisions about where in the chain — programmer, commander, manufacturer, deploying state — accountability is assigned. Those decisions have not been made because making them would constrain systems that powerful actors want to build without constraint.
The machines are not waiting for the answer. They are already in the field.