Time Is Running Out to Stop Killer Robots

Picture a swarm of drones entering a village, programmed to fly without a human operator and instructed to shoot or immobilize anyone it deems to be holding a weapon. While this might sound like a scene from Lana Wachowski’s latest Matrix film, the technology to build these killer robots is already here. Unless states act urgently to ban this weaponry through a new international treaty, widespread use of killer robots could be just around the corner.

Eighty-five states have been debating what to do about this technology since 2013 through the Convention on Certain Conventional Weapons (CCW) in Geneva. Over the last five years, states have been tasked with clarifying, considering, and developing specific frameworks to deal with killer robots, also known as lethal autonomous weapons systems, as part of the CCW’s Group of Governmental Experts (GGE), which is now convening ahead of the Sixth CCW Review Conference beginning December 13. As a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic illustrates, existing law is insufficient to govern killer robots. If states cannot reach consensus this month to begin negotiating a binding treaty through the CCW, they should quickly select a new forum to begin negotiations.

Most states in the CCW, including individual countries from Africa, Asia-Pacific, Europe, Latin America, and the Middle East, in addition to the Non-Aligned Movement, have argued that a new treaty is necessary because existing international law cannot adequately address the legal, ethical, accountability, and security concerns posed by killer robots. A new treaty, as proposed by human rights experts, should preserve meaningful human control over the use of force, ban any autonomous weapon system that targets people, and include positive obligations to ensure meaningful human control is maintained in the use of any other system that uses sensor technology to select and engage targets. 

From a legal perspective, killer robots likely cannot be used in compliance with the principle of distinction—the requirement under international law that attackers distinguish between combatants and non-combatants. Combatants today frequently forgo uniforms in order to blend in with civilian populations. Even those wearing uniforms may be unlawful targets if they are wounded or have surrendered. The principle of distinction therefore requires the ability to assess conduct, not just appearance—a fundamentally qualitative process that often requires interpreting subtle cues, such as body language, and cannot be programmed into a robot.

Similarly, killer robots would face even greater challenges in complying with the principle of proportionality—the requirement under international law that prohibits attacks for which the expected civilian harm would be excessive compared to the anticipated military advantage. This balancing requires “common sense and good faith,” forms of moral and ethical reasoning that cannot be programmed into a robot. Moreover, no two circumstances are identical, and it would be impossible to pre-program a robot for the infinite scenarios it might face in armed conflict. A new treaty could address these concerns by requiring meaningful human control over the use of force, preventing machines from assuming responsibility for the legality of an attack.

As for accountability, the international system relies on the ability to attribute attacks to specific actors. This is a problem for killer robots, which would select and engage targets without a human making the critical decisions about whom and when to attack. Who, then, would stand trial if a killer robot mistook a child for a combatant after confusing a toy turtle with a rifle? Prosecuting the robot would be pointless. The military commander? Perhaps the manufacturer or the individual programmer? Holding a commander, manufacturer, or even the programmer accountable would prove difficult if that person was unable to control, or even predict, the robot’s actions. In addition, even a well-programmed robot is susceptible to hacking; who would be held accountable if a robot turned on its own soldiers or civilians after being hacked by some unknown actor, or worse, by another robot? Requiring meaningful human control would ensure that a human could ultimately be held accountable under international law for attacks gone awry.

Killer robots also raise fundamental ethical problems. Replacing humans with robots in the kill chain reduces human targets to mere data points, dehumanizes conflict, and removes the restraint of human compassion. As philosopher Peter Asaro has argued, this results in a deprivation of basic human dignity, because robots lack the uniquely human capacity to understand the value of life and to rationally conclude that killing is justified. These ethical considerations have always been part and parcel of international law but have never been explicitly applied to killer robots. This is because, until now, the prospect of weapons systems operating autonomously was mere science fiction. By requiring meaningful human control and prohibiting robots from targeting human beings, new law would preserve the ethical and humanitarian dimensions of international law.

Even setting aside the above concerns, killer robots would undermine global security by raising the specter of more armed conflict. They threaten to touch off an arms race as countries rush to keep pace with their adversaries, increasing the chance that these weapons would fall into the hands of state and non-state actors that hold little regard for civilian protection or the laws of war, or that have criminal intentions. Such unintended spillover is even more likely with robots than with other weapons systems because robots, unlike nuclear weapons, do not require rare materials. Moreover, while killer robots could reduce military casualties for the states that deploy them, that very advantage may make those states less hesitant to engage in armed conflict, at the expense of the soldiers and civilians of other countries. A new treaty banning killer robots would avert an arms race, prevent the unintentional spread of these weapons, and avoid lowering the barriers to armed conflict.

Despite these concerns, a minority of states in the CCW, most notably India, Russia, Israel, and the United States, claim that international humanitarian law is sufficient to regulate killer robots, such that no new treaty is needed. This dispute only underscores the need for new law: a dangerous and confusing weapons landscape would arise if some states continued developing this technology while others abstained.

Unfortunately, despite support for a new legally binding instrument from most CCW states, a preventive ban is unlikely to emerge from the body because it operates by consensus. If states are unable to reach consensus at the CCW’s Sixth Review Conference on a mandate to negotiate such an instrument, they should immediately switch to a new forum to negotiate a treaty. States could go to the U.N. General Assembly, which produced the Treaty on the Prohibition of Nuclear Weapons (TPNW) in 2017, or initiate an independent treaty process similar to those that banned landmines in 1997 and cluster munitions in 2008.

Unlike the CCW, neither the General Assembly nor an independent process requires consensus. In one of these alternative forums, states can aim high and pursue a ban on killer robots without fearing a veto. This is especially important for small and middle-sized states in the Global South, which have been disproportionately harmed by armed conflict and, as Palestine has pointed out, have been the primary targets for the use and testing of weapons systems. Consensus forums like the CCW, by contrast, tend to produce weak instruments because states must accommodate the lowest common denominator.

Treaties produced through independent forums or the General Assembly can have significant influence regardless of whether major military powers join them. For example, the United States opposed the independent process to ban cluster munitions and pressured allies to do the same, but it has not used cluster munitions since 2003. Similarly, although no state with nuclear weapons has ratified the TPNW, political scientists anticipate the treaty will slowly change nuclear weapons from a symbol of prestige to one of shame. In fact, campaigns like ‘Don’t Bank on the Bomb’ have already begun pressuring banks in nuclear-weapons-possessing states to divest from companies that produce these weapons. A treaty on killer robots could likewise set high international standards that would not only bind states parties but also influence states that are not parties, along with non-state actors. Furthermore, as Ousman Noor, government relations manager for Stop Killer Robots, explained to the authors in an interview last month, “The mere existence of an international treaty can help pressure banks to divest from companies producing these weapons and will help dissuade universities and programmers from assisting their governments develop these weapons, even if they have not signed the instrument.”

Time is running out for states to prevent killer robots from becoming mainstream. If states are unable to reach consensus to negotiate a treaty banning and regulating this dangerous weaponry at the CCW this month, they should move quickly to a new forum and begin treaty negotiations.


Theo Wilson is a 2L at Harvard Law School and holds a Master in Public Affairs from Princeton University.


Nick Fallah is a 2L at Harvard Law School with an undergraduate degree in political science from Tufts University.


David Hogan is a 3L at Harvard Law School and an advanced clinical student with the Harvard International Human Rights Clinic. He graduated from Middlebury College with a Bachelor of Arts in History.


Suggested citation: Theo Wilson, Nick Fallah and David Hogan, Time Is Running Out to Stop Killer Robots, JURIST – Student Commentary, December 7, 2021, https://www.jurist.org/commentary/2021/12/Theo-Wilson-Nick-Fallah-David-Hogan-killer-robots-artificial-intelligence-international-law/.


This article was prepared for publication by Sukrut Khandekar, a JURIST staff editor. Please direct any questions or comments to him at commentary@jurist.org.


Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.