Fully autonomous weapons, which could select and fire on targets without meaningful human intervention, have the potential to revolutionize the nature of warfare, bringing greater speed and reach to military operations. In the process, though, this emerging technology could endanger both civilians and soldiers.
Nations have been considering the many challenges these weapons would pose to the laws of war, also called international humanitarian law. But little attention has been paid to the implications for human rights law. If these weapons were developed and used for policing, for example, they would threaten the most basic of these rights, including the right to life, the right to a remedy and the principle of human dignity.
Fully autonomous weapons, also known as autonomous weapons systems or "killer robots," do not yet exist, but research and technology in a number of countries are moving rapidly in that direction. Because these machines would have the power to determine when to kill, they raise a host of legal, ethical and scientific concerns. Human Rights Watch and Harvard Law School's International Human Rights Clinic are advocating for a pre-emptive prohibition on fully autonomous weapons. The Campaign to Stop Killer Robots, a global coalition of 52 nongovernmental organizations coordinated by Human Rights Watch, is making the same call.