Ethan Konschuh, MJLST Staffer
While technological progress has been a hallmark of the twenty-first century, the advances have been especially dramatic in weapons technology. As combatants in armed conflicts rely ever more heavily on automated systems in pursuit of safety, efficiency, and effectiveness on the battlefield, the international law governing the use of force in armed conflicts risks becoming outdated.
International law governing the application of force in conflicts is premised on notions of control. Humans have traditionally been the masters of their weapons: “A sword never kills anybody; it is a tool in a killer’s hand.” As automation in weapons increases, however, this relationship is becoming tenuous, so much so that some believe there is no longer enough control to assign responsibility to anyone for the consequences of these weapons’ use. These actors are calling for a preemptive ban on the technology to prevent moral responsibility for war crimes from being offloaded onto machines. Others believe that frameworks exist that can close this responsibility gap while still allowing the benefits of autonomous machines on the battlefield to be realized.
Three general categories of policies have been proposed for regulating the use of these machines. The first, advanced by Human Rights Watch (HRW), the International Committee for Robot Arms Control (ICRAC), the International Committee of the Red Cross (ICRC), and other NGOs and humanitarian organizations, calls for a preemptive ban on all autonomous weapons technology, on the view that human input should be a prerequisite for any targeting or attacking decision. The second regulatory regime has been espoused by, among others, the United Kingdom of Great Britain and Northern Ireland, which claims that there would be no military utility in employing autonomous weapon systems and agrees never to use them, effectively accepting a ban. However, the way these actors define autonomous weapon systems belies their conviction. The definition they put forth defines autonomous weapon systems in a way that effectively regulates nothing:
“The UK understands [an autonomous weapon system] to be one which is capable of understanding, interpreting and applying higher level intent and direction based on a precise understanding and appreciation of what a commander intends to do and why. From this understanding, as well as a sophisticated perception of its environment and the context in which it is operating, such a system would decide to take – or abort – appropriate actions to bring about a desired end state, without human oversight, although a human may still be present.”
This definition sets the threshold of autonomy so high that no technology currently exists, or is likely ever to exist, that would fall within its purview. The third policy framework was put forth by the United States Department of Defense. It addresses fully autonomous weapon systems (no human action connected to targeting or attacking decisions), semi-autonomous weapon systems (the weapon depends on humans to determine the type and category of targets to be engaged), and human-supervised autonomous weapon systems (the weapon can target and attack, but a human can intervene if necessary). The policy bans all fully autonomous weapon systems but allows weapons that can target and attack as long as there is human supervision and the ability to intervene if necessary.
The debate over how to regulate this type of weapons technology continues to gain traction as advances approach the threshold of autonomy. I believe the U.S. policy is the best available option for preventing the responsibility gap while preserving the benefits of automated weapons technology, though others disagree. Whichever policy is ultimately chosen, hopefully an international agreement will be reached before it is too late and your favorite sci-fi movies become all too realistic.