The Human Rights Implications of “Killer Robots”

JURIST Guest Columnist Bonnie Docherty of Human Rights Watch discusses fully autonomous weapons and their human rights implications…

Fully autonomous weapons, which could select and fire on targets without meaningful human intervention, have the potential to revolutionize the nature of warfare, bringing greater speed and reach to military operations. In the process, though, this emerging technology could endanger both civilians and soldiers.

Nations have been considering the multiple challenges these weapons would pose to the laws of war, also called international humanitarian law. But little attention has been given to the implications for human rights law. If these weapons were developed and used for policing, for example, they would threaten the most basic of these rights, including the right to life, the right to a remedy and the principle of human dignity.

Fully autonomous weapons, also known as autonomous weapons systems or “killer robots,” do not yet exist, but research and technology in a number of countries are moving rapidly in that direction. Because these machines would have the power to determine when to kill, they raise a host of legal, ethical and scientific concerns. Human Rights Watch and Harvard Law School’s International Human Rights Clinic are advocating for a pre-emptive prohibition on fully autonomous weapons. The Campaign to Stop Killer Robots, a global coalition of 52 nongovernmental organizations coordinated by Human Rights Watch, is making the same call.

The prospect of fully autonomous weapons has generated widespread international debate, and on June 12, the UN Human Rights Council will receive an update on the subject from Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions. In 2013, when Heyns issued a longer report, the Human Rights Council became the first international body to discuss the humanitarian hazards of fully autonomous weapons. The council’s continued attention would be helpful. Its focus on the human rights angle would reinforce concerns that these weapons could unlawfully threaten humans in peace as well as in war. This year Heyns is urging the council to “remain engaged” with the issue and to “make its voice heard as the international debate unfolds.”

Despite the involvement of the Human Rights Council, most of the international discussion so far has taken place in the context of disarmament and international humanitarian law. Most notably, nations that are party to the Convention on Conventional Weapons (CCW) held a four-day experts meeting in May on what they call “lethal autonomous weapons systems.” This convention is an instrument of international humanitarian law that applies only to armed conflict. Even so, many countries emphasized at the meeting that the human rights implications of these weapons should also be addressed.

A new report by Human Rights Watch and the Harvard clinic, Shaking the Foundations, looks at these implications. It addresses the possibility that countries could adapt fully autonomous weapons for use in a range of law enforcement situations, including controlling public opposition and conducting counterterrorism operations. Such use would trigger human rights law, which applies in times of both peace and armed conflict and generally has stricter rules on the use of force than international humanitarian law.

The report concluded that fully autonomous weapons could compromise the foundations of international human rights law. It found that the weapons could jeopardize the right to life, which prohibits the arbitrary deprivation of life and is viewed as a prerequisite for all other rights. To be lawful, killing must be necessary, a last resort and proportionate to the threat involved. Robots would be ill-suited to establish whether a situation meets these criteria, which require qualitative determinations. The weapons could not be pre-programmed to handle every situation, and they would most likely lack human qualities, such as judgment, that human law enforcement officials rely on to comply with these three criteria in unexpected situations.

Fully autonomous weapons could also deny the right to a remedy. There are serious doubts that any meaningful accountability would be possible for the actions of such weapons. Punishing a robot is hardly an option. And absent the intent to commit a crime, a military commander or law enforcement officer who deploys such a weapon would probably not be found guilty under existing international criminal law, because the person would find it difficult to foresee how an autonomous robot would act. Programmers and manufacturers would also have ways to escape civil liability. The United States, for example, grants defense contractors immunity for harm caused by their weapons, and victims in any jurisdiction would often lack the resources and access to courts needed to bring a civil suit. This accountability gap would impair the law’s ability to deter harmful acts, and victims would be left without the satisfaction of knowing that anyone was punished for the suffering they experienced.

Finally, fully autonomous weapons could undermine the principle of dignity, which lies at the heart of international human rights law and declares that every human is worthy of respect. An inanimate machine could not truly respect the value of a human life or comprehend the significance of its loss. Allowing a machine to make determinations about when to take life away would vitiate the importance attached to such decisions and degrade human dignity.

Heyns, the UN expert, has shared the concerns expressed by Human Rights Watch and the Harvard clinic. In his new report for the Human Rights Council, available on the council’s website, he writes about the “far-reaching potential implications for human rights, notably the rights to life and human dignity.” In his 2013 report, he explored in detail these two issues as well as the dangers of a lack of accountability.

Heyns calls for the international community to examine fully autonomous weapons from all perspectives. “It will lessen the chances for the international community to find a sustainable and comprehensive solution to the matter of autonomous weapons systems if it were to be dealt with only in either the disarmament or the human rights context, with the one lacking the perspective of the other on this vital issue,” he writes in his 2014 report to the Human Rights Council. Heyns is right to oppose confining the discussion to a single forum when fully autonomous weapons raise such a wide range of issues.

The member countries of the Human Rights Council should embrace their role in this multifaceted effort and take advantage of this week’s meeting to voice their concerns about fully autonomous weapons, especially under human rights law. They should also start to articulate national positions on what to do about the weapons. Ideally, all countries should heed Heyns’s 2013 recommendation to adopt immediate national moratoria on the weapons. They should take such actions with an eye to ultimately adopting an international treaty that bans, under all circumstances, the development, production and use of fully autonomous weapons.

Bonnie Docherty is a senior researcher in the Arms Division at Human Rights Watch and a lecturer on law at Harvard Law School’s International Human Rights Clinic.

Suggested citation: Bonnie Docherty, The Human Rights Implications of “Killer Robots”, JURIST-Hotline, June 9, 2014, http://jurist.org/hotline/2014/june/bonnie-docherty-autonomous-weapons.php.


This article was prepared for publication by Jason Kellam, a Section Editor with JURIST’s commentary service. Please direct any questions or comments to him at professionalcommentary@jurist.org.


Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.