Last month, the United Kingdom’s Ministry of Defence announced that it would ban the development of fully autonomous weapons systems (FAWS). This announcement came after 116 technology experts, including Elon Musk, sent a letter to the United Nations warning nation-states of the dangers of artificial intelligence. The letter issued three main warnings: (1) the scale and speed of armed conflict will increase, (2) dictators and terrorists can use such weapons against innocents, and (3) the AI behind fully autonomous weapons systems can be hacked.

The letter to the UN was not the first call for nation-states to preemptively ban the development and use of FAWS. In late 2012, Human Rights Watch launched a campaign calling for a treaty to preemptively ban FAWS. FAWS are complex, lethal weapons coded with an artificial intelligence (AI) that allows them to target and attack without outside human intervention. One of the main arguments against the use of FAWS is that there is no evidence that these weapons will be able to discriminate properly between civilians and combatants in true battlefield conditions. This potential for unstable behavior, combined with a lowered threshold for entering war, would place an unacceptable share of the burden of armed conflict on civilians. Some nation-states have adopted this reasoning and have either called for a ban or supported a treaty to regulate FAWS.

Some legal experts have stated that such preemptive bans will not work for several reasons: (1) the nation-states and other actors who would abuse these systems will not comply with a treaty, (2) a bright-line ban will be difficult to design and implement as weapons become iteratively more autonomous, and (3) it is impossible to say prospectively whether FAWS will increase or decrease civilian suffering in times of armed conflict. Other legal experts argue that the law of armed conflict is designed to balance military necessity with minimizing hardship. Because some FAWS will fall on the right side of this balance, an absolute ban would not be appropriate.

The United States’ Department of Defense issued a directive on FAWS in 2012 that established a standard of constant review of these systems in order to “minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.” While this is not a ban on the development or use of FAWS, the directive takes a more flexible approach to the technology by requiring that there always be “appropriate levels of human judgment.” This flexibility allows autonomous technologies to be developed and then assessed for compliance with the laws of armed conflict.

FAWS also raise the question of who will be liable for mistakes made in combat scenarios—the operation commander? the technician who services the system? the person who coded the targeting AI?—and, if there is to be international criminal responsibility, under what mode of responsibility. It will be unclear who pulled the trigger and who had the requisite mens rea in these cases.

In the end, without a treaty, the MoD’s announcement and the DoD’s directive are merely instances of state practice that may or may not eventually evolve into customary international law. Whether this practice crystallizes into treaty or customary law before the technology required for full autonomy is developed remains to be seen. What is certain is that even as questions surround lethal FAWS, fully autonomous systems are being pursued for other roles on the battlefield. As the letter to the UN concluded, “Once this Pandora’s box is opened, it will be hard to close.”

J. Christopher Gracey
