Policy, War, and Technology: I, for one, welcome our robot overlords

Greetings, KOW readers, from sunny Aberdeen–the ‘Granite City’–where, while killing time before examining an about-to-be-born PhD, I turn to a little light blogging. A few articles caught my eye this morning which I think are noteworthy and between which there is a thread of connecting logic (maybe, or perhaps a too-early rise has scrambled my brain).

The first is by the Strategic Studies Institute’s Antulio Echevarria II, a guy with whom I think most readers of this blog will be familiar, who has a short piece in the latest issue of Prism asking the simple question ‘What is Wrong With the American Way of War?’ The consensus view, he notes, is that there is something wrong with it–whatever that may be. His answer is contrarian and, to me, rather persuasive: from a systemic point of view there is actually nothing wrong with it that is not also more or less wrong with other national ‘ways of war’–British, French, German, and so on. In a nutshell, the problem is policy. Here’s what I think is the key snippet:

A review of the literature regarding the key political decisions concerning the war in Iraq shows that it was largely because of politics that American policy initially tried too hard to keep the war it wanted rather than winning the war it had. History, in fact, suggests that the American way of war has never been apolitical. One may disagree with what American policy has been over the years, or what it was at the beginning of the millennium, but it clearly influenced the conduct of operations in Iraq and Afghanistan throughout every stage. What American policy wanted to achieve initially in Iraq and Afghanistan was simply too much to expect solely from any way of war, particularly one that was in many respects still evolving from a way of battle.

Further on, in the last line, he writes, ‘By the next conflict there will be a newer American way of war, but the need to align, and realign, policy aims and real capabilities is the one continuity that will require constant attention.’ You should read the whole thing, but once you’ve done that, have a look at this even shorter paper by Steve Metz, also of the Strategic Studies Institute, who puts a finger on a likely candidate for the ‘newer American way of war’ in The Future of Roboticized Warfare. Metz reaches a number of similar conclusions, in particular the idea that we really ought to be rather more cautious about how we use the word ‘revolutionary’. Says he:

The new weapons that sprouted on the battlefields of World War I ultimately revolutionized warfare. At the time of their appearance, however, most of them were used in a very traditional way, making old-fashioned infantry and artillery more effective rather than ushering in new ways of fighting. Airplanes spotted targets for artillery batteries, scouted for the infantry and provided close air support. There were some attempts at strategic bombing, but due to the limited payload and range of the aircraft of the time, it had little effect. Tanks, which first appeared in 1917, operated with infantry units as moveable machine gun nests or bunkers. In other words, the appearance of these new weapons initially represented innovation but not revolution. It wasn’t until after the war that military theorists recognized the revolutionary potential of tanks and planes if they were used properly. 

Robotics has the potential to do to warfare in the 21st century what these other weapons did to it in the 20th. I like, however, that again he doesn’t get all breathless about the new technology and how ‘it’s going to change everything!’, as is too often the case. On the contrary, says Metz:

It is easy to become enraptured by new technology and lose sight of reality. In 1921, for instance, Italian Gen. Giulio Douhet predicted that airplanes would make other forms of military power obsolete. Those who today make similar predictions about robots fully replacing humans in armed conflict are likely to be proven wrong as well, at least for the foreseeable future. Nonetheless, we are now at a point where a revolution in military robotics is technologically feasible. 

War changes all the time. It’s a ‘true chameleon’, said Clausewitz. And yet, he also says, it really doesn’t change in its essence as an act of force aimed at the realisation of this or that desire of policy. It seems to me that in different ways both Metz and Echevarria are reinforcing this point. Metz also points out that though the United States leads the field at the moment, it’s not necessarily the case that the ‘new way of war’ need be a ‘new American way of war’. It might be someone else’s. He asks some good questions:

Would a U.S. military heavily based on robots be politically easier for a future president to deploy? And if so, would that be a good thing? Do Americans truly want a military that is easier to use? Do they want to delegate life-and-death decisions to machines? In the broadest sense, can the robot revolution even be stopped, or is it inevitable?

One thing which occurs to me about the current debate is that we have the luxury of debating these things now because we are currently using this technology in an ‘asymmetric’ way against opponents who have no (or very markedly fewer) means of responding in kind. But the question ‘do we want to delegate…’ is not one we would dwell on for long if we were envisaging conflict against an equally well-equipped and capable enemy. In that case, there would be no debate–I should imagine–because the choice would effectively be delegate or lose. Look at the automation of naval detection, targeting and firing systems for an example.

That said, much of the debate at present is about the current use of drone technology (not exactly robotics, I know, but part, I think, of the same package of developments) for targeted killing in an ‘asymmetric’ context. On this issue I think this piece by Joshua Foust, ‘Targeted Killing, Pro and Con: What to Make of US Drone Strikes in Pakistan‘, makes a good contribution. The essay is a reaction to a report, ‘Living Under Drones‘, which is also worth reading–it provides the ‘con’ in this pro et contra. Foust offers the ‘pro’ which, tepid as it may be, seems to me unassailably logical when you get down to it: what’s the alternative?

Which brings me to the last of Metz’s questions: can the robot revolution be stopped? To which the answer is no. Technology points that way. Practice points that way. And policy does too.

UPDATE: Oops. Forgot to put in the link to Foust’s article. Fixed now.


22 thoughts on “Policy, War, and Technology: I, for one, welcome our robot overlords”

  1. Paul T. Mitchell says:

    David; I think you are correct about wars of survival, and your example of naval defence systems is another good one. I would argue that any form of perimeter defence is ripe for automation. But that is about it. Perimeter defence is a rather simple target environment: on land one could simply post warning signs that anyone beyond a specific area is subject to lethal response. At sea and in the air, the target environment is also limited in terms of the actors present, and their identities are “relatively” non-ambiguous. However, these are both rather limited contingencies. As Echevarria pointed out in his essay, the full-up, toe-to-toe-with-the-Russkies conflicts represent only a small fraction of the conflicts the US has engaged in, and the same could be said for many militaries. At a recent conference in Singapore, Christopher Coker argued about the increasing “moral capabilities” of robots. Here is a story on that: http://chronicle.com/article/Moral-Robots-the-Future-of/134240/ However, your colleague Christopher Dandeker immediately followed him with a presentation in which he discussed the practical difficulties of educating soldiers on their ethical responsibilities on the battlefield. My question to Coker was: if it is so difficult to educate soldiers on ethics, how can we operationalise ethical decision-making into an algorithm that can be programmed into a machine? I didn’t get a good answer to that. So, if the war “problem set” includes far more cases than simply wars of survival and perimeter defence, what does that say for the future of roboticization?
    Second, war remains a very human phenomenon. The interest in roboticization is essentially an effort to reduce human resource constraints. There is much in this story that echoes the development of the machine gun, as John Ellis has argued in his Social History of the Machine Gun. In effect, roboticization is one more attempt to instrumentalise war. But by fully dehumanising conflict (especially against opponents who remain fully humanised), would we not be committing the same errors of judgement that von C criticised von Bulow and Lloyd for – the algebrae-zation of conflict? I can think of how opponents might “hack” the moral algorithms in terms of their own behaviour on the battlefield (much as non-state actors have hacked our own legalistic approaches to conflict). I can also imagine the information campaigns that would accompany such operations (you think the present outrage against drones is bad?). In sum, the roboticization of warfare is one more techno-illusion dreamed up by romantics wishing the world were more perfect.
    BTW: there must be something in the water. I told my web editor at WLU I would do something on robots last week. Steve beat me to the punch and you stole my title line!

    • On the issue of the practical difficulties of educating soldiers on their ethical responsibilities on the battlefield, these are not easy to resolve but they have to be confronted. This is especially because of the ‘compression in the levels of command’ produced by media technologies: the fact that small acts at very junior levels of authority can rapidly have strategic impact on the reputation of military units, on the mission, and even on the military institution involved, as well as the state politically responsible for the mission. Last week this point came up at the excellent talk given by David Fisher on the paperback publication of his Morality and War: Can War Be Just in the Twenty-First Century? (Oxford University Press). Here there is an important exercise to be done in applying just war principles [and military virtues] to the practical training of soldiers at every level of command. Indeed, Fisher pointed out that such training is relevant to the profession of politics: those political leaders who task the armed forces to go to war and who monitor its conduct need the same kinds of moral guidance. Ethical training may not be easy, but it can be done. As David pointed out during the discussion at the event, the key objective is to ensure that soldiers on operations – even when these are new experiences for them – feel their training ‘kicking in’. This is not just about professional fighting skills [on which see the insightful work of Anthony King from Exeter] but about the ethical skills too that will guide soldiers into doing the right thing – not just technically but ethically as well.

  2. Ed (the real one) says:

    [Drones]: what’s the alternative?

    In the asymmetric “war on a tactic”: to drain the swamp. The major Muslim grievance is Palestine. Force both sides to negotiate reasonably. That may be more politically expensive with regards to one side than the other, but it is certainly possible. The positive pay-off would change the world for the better.

    • David Betz says:

      What would be a ‘reasonable’ negotiation, in your view? Which side would find this more ‘politically expensive’? And why are you so sure of the ‘positive pay-off’ of whatever it is you are suggesting?

    • Ed (the real one) says:

      Pushing the Israelis would be politically expensive in America. A reasonable negotiation would be closely based on the Geneva Accord.

      Are you seriously disputing that resolving the Palestine “issue” to the reasonable satisfaction of the Palestinian Arabs would take away the major recruiting tool and talking-point for Islamist nutters?

      Or, we could just carry on bombing shepherds and hoping something turns up.

    • David Betz says:

      Did Mohamed Bouazizi light himself on fire for Palestine? How about Khaled Said–he was beaten to death by the police for being a Zionist, no? The Bahraini uprising, to pick one at random, that’s about Israel? Yes, I am disputing that point. Show your work. Begin by explaining what the Palestine issue is. Thanks.

      These ‘Islamist nutters’ you talk about, they’re the problem, yes? You’d be happy with the Geneva Accord, apparently. Peace in our time. Would they?

    • Ed (the real one) says:

      None of those people were Islamist nutters. Perhaps you didn’t read what I actually wrote. Try again. Show your reading comprehension.

    • David Betz says:

      The problem is your brain, not my reading comprehension. Your words were ‘major Muslim grievance is Palestine’. Try again. Or go away. Whatever.

    • Ed (the real one) says:

      I will try to spark some comprehension, per your request. Let me know if you don’t understand any of the words used here.

      First I wrote “The major Muslim grievance is Palestine.” This implies the number-one grievance of Muslims around the world, qua Muslims. If there were a poll of Muslims all around the world asking what grievances they have, do you dispute that number one for the global Muslim population would be Palestine? If not, what do you think it would be?

      Given that I had already specified in my follow-up that I was referring to the grievances of Islamist nutters, it actually escapes me what else I could specify to assist you.

      To clarify one point: resolving Palestine might not de-recruit the already-committed, partly thanks to the miracle of cognitive dissonance (action deepens commitment). However, it would seriously damage jihadist recruitment of the not-yet-committed, and also damage anti-American/Western jihadist rhetoric.

      Oh, and to specify one other thing. You asked “what the Palestine issue is”. Bit disturbed that someone who purports to be an authority on international relations / war studies needs to ask, but I’m here to help. The issue is the narrative for Islamists provided both by the perceived and actual continuing military occupation / humiliation of Arab Muslim people in the West Bank and the Gaza Strip. These are located on the eastern edge of the Mediterranean, a sea to the south of Europe.

    • David Betz says:

      OK, Ed. How about we both cut out the snark and see if there’s anything left to discuss? If not, then we quit.

      There are such surveys. The Pew Global Attitudes Project is one. Have a look:

      http://www.pewglobal.org/2012/07/10/most-muslims-want-democracy-personal-freedoms-and-islam-in-political-life/

      If you do, I think you will find that the grievances of Muslims run deep on the quotidian and prosaic (they’d like to have control over their lives and be materially better off) and not so deep on the international and political. Look at chapter 5 on ‘views of extremism’.

      As for your reasoning, sorry, but I think this is a perfectly fair reading of what you wrote:

      First cut, you talk about ‘major Muslim grievance’ — i.e., all Muslims, or ‘Muslims qua Muslims’ as you later put it.

      Second cut, you talk about ‘Islamist nutters’ — i.e., a subset of Muslims.

      Third cut, you talk about ‘not yet committed jihadists’ — i.e., a subset of ‘Islamist nutters’.

      It’s hard to argue with you sensibly when you shift levels of analysis like an F1 driver shifts gears.

      Thank you. I know where the place in question is. I do not purport to be an expert on it though. The post above is about drones and robots, after all. You, however, to judge from your assertions, know an awful lot. I am merely seeking enlightenment. Because from the books I’ve read ‘resolving Palestine’ is made out as a really, really difficult thing whereas you appear to have got it solved.

      It bears noting too–maybe this is just a stylistic point, but there it is, I’m drawn by trivialities. Normally when people put a word in inverted commas that is obviously not a quote–as in your words, the ‘Palestinian “issue”‘–it signals that they mean that word in an imprecise or euphemistic manner. That being the case, my question about what you think the ‘Palestinian “issue”‘ is seems really quite apposite.

    • Ed (the real one) says:

      OK, Ed. How about we both cut out the snark and see if there’s anything left to discuss? If not, then we quit… It’s hard to argue with you sensibly when you shift levels of analysis like an F1 driver shifts gears.

      Apparently you’re about as good at cutting out the snark… as you are at snark. Pick one or the other, eh.

      You have selected several statements by me about different parts of a situation, and brilliantly pointed out that each of them says slightly different things. Good work. Less facetiously, can you with a straight face claim that you are assuming good faith here?

      You are right to point out that Muslims are more concerned with local than with global issues. I was not suggesting otherwise. The Pew survey did not in fact ask questions about Palestine, but I am sure you knew that. Obviously, I believe that Palestine is not the first waking thought of every Muslim in the world. However, its power as a rallying call for “the ummah” is evident from the fact that it is literally used as a rallying call by AQ (cf “Jews and Crusaders”). Despite their recent dip in popularity, those guys do know their audience.

      Re your “second” vs “third cut” apparent confusion: you are putting the brackets in the wrong place. I was referring to (not yet) (committed jihadists): those who are open to being persuaded, but have not yet committed. I am sorry that you were confused.

  3. Mike Wheatley says:

    Curious here:
    “If there were a poll of Muslims all around the world asking what grievances they have…”
    This seems testable. Has this already been done? Ideally by country? (I’d guess it wouldn’t be for Mindanao, for example, not that they are a significant % of the global Muslim population.)

    I suppose an additional variable would be: X is the major grievance in nation A, but nation A is not aggrieved enough to be militant about it.

    And for added complication: what if they think X is the grievance, when it is actually just the catalyst for a subconscious grievance? Are there good techniques to get that sort of data out of a survey?

  4. Paul T. Mitchell says:

    How did a discussion on robots and warfare devolve into this? Actually, the debate raises real questions about the feasibility of autonomous systems. David concludes that technology, practice and policy all point the way to an unstoppable force. That all sounds far too materialistic to me, and the above debate points to the ideals which shape our very human struggles and naturally define the context in which operations take place. Sadly, the example of two articulate and rational people who cannot agree on the nature of a social phenomenon raises real questions as to how to program a robot to make sense of it all. Again, within limited operational parameters, autonomy makes sense. Perimeter defence, navigation and collision avoidance? Check. Shoot that person over there? Hmmm.

  5. Pingback: Canadian Military History – “Three Laws Safe?” Autonomous Robots and Warfare by Dr. Paul T. Mitchell

  6. John says:

    Perimeter defence, navigation and collision avoidance? Check. Shoot that person over there? Hmmm.

    One of the advantages of autonomous systems is their speed of reaction. An autonomous system doesn’t necessarily need to recognise hostile intent (who is the threat?) if it can react quickly enough to hostile action. The potential accuracy, speed, robustness and lethality of an autonomous system, combined with a lack of (inherent) survival instinct, mean that it could usefully let the potential aggressor fire first and still win the engagement.

  7. Pingback: What we’ve seen so far: The Year 2012 in Review | Kings of War


  9. Peter T says:

    The problem with any kind of remote warfare (strategic bombing, drones, robots…) is that they divorce violence from its political aims on the ground. As such, they invite a fairly simple reflexive response – the subjects hate you more and, even though they might avoid overt action against you (for fear of immediate violent retaliation) they look for ways to avoid cooperation, or to hinder you politically. How, for instance, would robots have helped in the situation described in the classic War Comes to Long An? The US/ARVN had overwhelming military preponderance. The VC response was to simply keep reminding the population that they were there, that they shared many of their aims (eg on land distribution), and that they could and would punish those who actively opposed them. Result – apparent pacification, no real peace. As in, to take another example, many areas in Afghanistan.

    It could help to put this as a tactical problem: the US invades and occupies Iran. Some hundred thousand Iranians sit down in a central square in protest. What do the robots do? If they kill a lot of people, and another hundred thousand sit down the next day, do they do it again? And again the day after?

    • Mike Wheatley says:

      The whole “remote warfare causes more hatred” meme is commonly presented as “fact”, and used to rebut “modern warfare” as “hopeless”, but when you look at it, it doesn’t have supporting evidence.
      Compare the British vs. Germans, on one hand, to the Tutsis vs. Hutus, at the other extreme. The former hate each other much less.

      In your later examples, the outcomes are equally true whether the killing is done by robots with guns, or human soldiers with swords. In both cases, the failing is in the strategy.
      What are you trying to achieve?
      No, really, think very carefully: exactly what are you really trying to achieve?

      Why the f*@% would America want to occupy Iran? To live there? There is plenty of real estate in the US.

      To control their oil? By control, do you mean “increase” or “decrease” their oil exports? Or do you mean change the destination of their exports? Or to sell it to the US at a discount? (Ironically, the way the old European empires did, before the US ended it.) (…In the latter case, the cheapest way is to hijack & steal their tankers on the open ocean.)

      Or to prevent them from attacking Israel and/or Arabs? (In which case, hit their military, money, and most xenophobic demagogues. Occupation is not desirable.)

  10. Peter T says:

    Mike

    Read it again. I did not say “drones cause hatred” – I said they (and similar means) tend to divorce violence from politics. The aim of war, as Montgomery noted, is not victory but peace. Sometimes – as with the wars against the Native Americans – the peace is that of the grave, but that’s hard to do in this modern age. So if your violence is not of a kind that, in whatever particular circumstances prevail, leads to some kind of peace, then you have failed – you are locked into a silent war. Drones and bombing and so on can work – if complemented by suitable other means in suitable political circumstances. But too great a reliance on them tends to obscure the need for the other means and also obscure analysis of whether the politics are suitable. Robots would take these trends further. It’s not inevitable, but it would take a hitherto unexampled sophistication in US politics to resist the temptation.

    As for the Iran scenario: 1. It’s a scenario. 2. I was not under the impression that all the US/Israeli sabre-rattling (remember “Baghdad, then Damascus or Tehran”?) of the last decade or so was just empty noise.

    • Mike Wheatley says:

      I’m sorry I got fooled by this bit:
      “…remote warfare (strategic bombing, drones, robots…) is that they divorce violence from its political aims on the ground. As such, they invite a fairly simple reflexive response – the subjects hate you more”
      - Which I refute.

      Furthermore:
      “But too great a reliance on them tends to obscure the need for the other means and also obscure analysis of whether the politics are suitable. Robots would take these trends further.”
      - I disagree.

      Traditional man-centric warfare is based on training people to kill, and includes development of adrenalin and aggression. It is from these trained killers that the war leaders are then selected.
      Conversely, remote warfare emphasises careful design and development by engineers, and good engineering principles include peer reviews and testing. Adrenalin has no place here, nor does charisma, nor the heroic leadership of men in combat.
      An extreme future military would have no soldiers, just support engineers, and its leaders would necessarily be drawn from that pool.

      When an engineer is asked by the customer “I want to invade Iran”, the best response is “Hmm, that sounds like you are specifying a solution rather than a requirement, so let me ask: why do you want to do that? What are you really trying to do?” Because customers are forever making the mistake of specifying their requirements in terms of what they think the solution is, rather than saying what they really want. (Quite often because they haven’t really thought clearly about what they really want.)

