Human Rights Watch is one of the driving forces behind the Campaign to Stop Killer Robots, which makes headlines from time to time in its quest to get “lethal autonomous weapon systems” banned under the Convention on Certain Conventional Weapons. I have no particular beef with HRW; it’s an admirable organisation. But since the newspapers are in full-on “reprint the press release” mode after the publication of an open letter, signed by a lot of scientists, calling for a ban on autonomous weapons, it’s probably worthwhile pointing out that HRW has a choice to make: between a world where states can make (usually imperfect) interventions to prevent mass atrocity, and a world where they can’t. The TL;DR version of this blog post: if you ban autonomous weapons, then aircraft carriers become floating junk, and the next time someone starts massacring people, don’t expect anyone to ride to the rescue. That’s not to say that “the West” has a particularly admirable track record in atrocity prevention, but most of the arguments that now happen presuppose that if Western political elites could be coerced or persuaded, then they would have the technical means to deliver military forces to whatever point on the planet where very bad things are happening to civilians.
The problem with the autonomous weapons debate as it currently stands is that, for the most part, it ignores the automatic and autonomous systems that are already part and parcel of everyday military life, like the Phalanx Close-In Weapon System (CIWS) and other bits of gear designed to shoot down incoming missiles. Shooting down missiles is something that humans are physically incapable of doing, outside of Hollywood. If you want the capability to shoot down a missile, you need a largely autonomous system to do the heavy lifting of identifying, tracking and targeting it, and the human being “in the loop” is largely reduced to something to whom the weapon system says: “Hey, meatbag, press the button so that I can save your life.”
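A toy sketch of that division of labour, purely illustrative (the threshold, field names and confirmation step are all invented here and have nothing to do with any real CIWS): the machine does the physically impossible parts, and the human is reduced to a final yes/no gate.

```python
# Purely illustrative: a toy detection-to-engagement loop. All names and
# the 600 m/s "missile-like" threshold are invented for this sketch.

def detect_and_track(sensor_contacts):
    """Stand-in for radar processing: keep only fast, missile-like contacts."""
    return [c for c in sensor_contacts if c["speed_ms"] > 600]

def engage(track, human_confirms) -> bool:
    """The human 'in the loop' contributes only the final button press."""
    return human_confirms(track)

contacts = [{"id": 1, "speed_ms": 900}, {"id": 2, "speed_ms": 80}]
tracks = detect_and_track(contacts)                   # the machine's work
fired = [engage(t, lambda t: True) for t in tracks]   # meatbag presses button
```

The point of the sketch is where the judgement sits: everything upstream of `engage` happens at machine speed, and the human contribution is a single boolean at the end.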
In theory, you don’t even need the human. Phalanx and similar systems are tied to command and control systems such as AEGIS, which can be set to an automatic mode in which user-defined “If… then…” routines do the work. Like: “If a missile is heading towards this ship, then please shoot it down as soon as possible.” This is necessary because you don’t want to entrust the protection of a ship to a person who is, well, liable to die when the anti-ship missiles start flying. A system that keeps working despite casualties is a sensible design for a military system. “But wait,” cry the detractors, “we’re not talking about missiles, we’re talking about machines that can make the decision to select and kill human beings (insert lengthy disclaimer about drones being controlled by human beings here).” That may be true, but from a machine’s point of view (and this is perhaps the core of the problem) the process of identifying an object as a missile is not too different from identifying a human being. If someone does conjure up a weapon system to run around killing human beings, the difference is likely to lie in the sensors designed to detect human beings (rather than, say, a supersonic missile) and the code that interprets the information derived from those sensors, not in the actual process of going from detection to destruction. The difference between “automatic” and “autonomous” comes down to which objects the system can sense, and what it does with them once it senses them. A system designed to identify humans and avoid them is one rule-change away from a system designed to identify humans and kill them.
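The “one rule-change” point can be made concrete with a sketch (entirely hypothetical; no real doctrine system is a Python function): the sensing and decision pipeline stays identical, and only the user-defined rule about which target classes to engage changes.

```python
# Entirely hypothetical sketch of a user-defined "If... then..." doctrine
# rule. Swapping the engage_kinds set is the "one rule-change" in the text:
# the rest of the pipeline is untouched.

from dataclasses import dataclass

@dataclass
class Track:
    kind: str      # classifier output, e.g. "missile" or "human"
    closing: bool  # is it heading towards the protected asset?

def doctrine(track: Track, engage_kinds: set) -> str:
    """If a track of an engageable kind is closing, then engage it."""
    if track.closing and track.kind in engage_kinds:
        return "engage"
    return "ignore"

# Automatic anti-missile mode:
doctrine(Track("missile", True), {"missile"})   # -> "engage"
doctrine(Track("human", True), {"missile"})     # -> "ignore"

# One rule-change later, same pipeline:
doctrine(Track("human", True), {"human"})       # -> "engage"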
Program an autonomous weapon system to shoot down missiles and it’ll carry that out to the best of its technical limits, just as it would if you programmed it to shoot readers of young adult fiction over the age of 29. That, I think, is why the ethicists (and adult Harry Potter fans) are correct when they point out that autonomous weapon systems are disturbing. So why not ban them? The problem, returning to the Phalanx CIWS, is that they’re here to help, and in certain situations autonomous systems are impossible to replace.
The problem with aircraft carriers is that they are quite expensive, relatively rare, and vulnerable to missiles designed to kill them. America has ten Nimitz-class aircraft carriers, and they are the cornerstone of American power projection worldwide. By way of comparison, Russia has one, and China has one. I’ll leave it to my colleagues in KCL’s Naval History Mafia (err, “Laughton Naval History Unit“) to debate how good any of these actually are. America’s carriers are so expensive that it takes over half a billion dollars to decommission one. Of course, the alternative route to decommissioning an aircraft carrier is to hit it with enough missiles to sink it. Logically enough, this is China’s answer to America’s 10:1 advantage in aircraft carriers. For this reason, anyone seeking to deter America needs some kind of long-range anti-ship missile capability. And if you’re America (or anyone else operating an aircraft carrier), you need defensive capabilities mounted on your carrier and support ships that stand a chance of shooting down said missiles, otherwise your carriers become a bit useless in contested areas.
Contested areas are important, partly because the kinds of regimes that carry out massacres usually have powerful friends. Consider Syria. Back in 2013, when meaningful international intervention was still a possibility, Russia transferred advanced anti-aircraft and anti-ship missile systems to the Syrian government in order to forestall that intervention. In effect, Russia escalated the likely cost of international intervention by providing Assad with an asymmetric capability. Perceived costs are important because politics matters. To return to HRW and autonomous weapons: there is a big difference between persuading America to intervene in a situation, and persuading America to intervene in a situation that puts one of its aircraft carriers at risk.
So here’s the issue as I see it: if you want to ban the military use of autonomous weapon systems, then you will also need to ban the kinds of autonomous systems that are currently in service, and any being developed to counter future anti-ship missiles. Ban those kinds of point-defence systems, and any kind of power projection becomes very, very risky and costly for the country involved; so even though America has a poor track record, don’t expect it to help in future when a brutal regime is killing its citizens. This ushers in a world where states like China and Russia can effectively prop up any regime they like, and, given the studied indifference to human rights in both countries, it reduces the capability of states that purportedly care about human rights to intervene in the world at large. That lack of capability will in turn reduce the incentive for would-be human rights abusers to adhere to even the vaguest interpretation of compliance with human rights standards. Given the makeup of the UN Security Council, this is a legal, political and technical issue, but at the moment Western states still have the technical means to intervene (if not the legal authority to do so, or the political will), and forcing them to abandon the autonomous systems that defend their prime military assets would deprive them of that. As disturbing as autonomous weapons are, is a world where dictators can massacre their populations without fear of reprisal better or worse?
Oh, and just to muddy the waters a bit: they’ve already figured out how to point Phalanx at small surface ships that would probably contain human beings.