Thursday, April 2, 2009

Weapons Bans and Autonomous Battlefield Robots

As the resident lawyer of this august group, I suppose I should say something about the laws of war as applied to battlefield robots. Not that you couldn't find a gazillion lawyers who would say something different. However:

First, in an earlier post, I laid out some very different robot issues, driven largely by differences in technology - remote platform robots under real-time control versus genuinely autonomous firing systems, for example. The legal issues involved are so different that there is not a lot of point to discussing them in the same breath. As my earlier post was about remote platform targeted killing, let me make this one about autonomous systems.

Second, then, the most important legal issue is target discrimination. Sometimes that is presented as a second-level argument that an autonomous system is inherently illegal because it takes the human out of the loop. The basic argument is the same as that developed for landmines - inherently indiscriminate because they cannot be aimed across time. I'm pleased to say that Monica Schurtman and I were the first to offer that as a legal argument, back in the early 1990s. I also think it does not really work as applied to autonomous robots, because the question is not whether or not there is a person in the loop - the question is whether the weapons system is, in fact, in how it operates, indiscriminate, and so much so that it can be treated as 'inherently' indiscriminate, meaning that it cannot be aimed so as to distinguish between combatants and noncombatants. We do not know how the technology will turn out, and we cannot at this point judge the actual legal question, which is not 'human in the loop', but instead 'can it discriminate?' As I have said in other blogs, for all we know, a hundred years from now, it might be clinching evidence of malpractice for a doctor to ignore the advice of the diagnostic computer or to do the surgery as a human rather than turning on the machine and letting the robot do it. And it might be a war crime for a fallible, emotional human to decide whether or not to fire the weapon, rather than letting the machine make the decision. The point is that the legal issue is not actually human in the loop - it is discrimination in fact.

Third, we have been talking in this discussion as though banning weapons were a central issue in the laws of armed conflict. It is not. International humanitarian law is largely about the conduct of the actors involved, on the battlefield, and how they use weapons that fall into the category of being capable of being aimed - and capable of being not aimed. As a matter of historical development in the field, banning weapons was briefly in vogue at the very beginning of the IHL movement - the St Petersburg Declaration, etc. - but the field rapidly moved away from that and toward regulating the actual use of weapons, whatever they were. The reasons were, first, that it turned out to be nearly impossible to ban a weapon and have it stick if it had genuine military utility, particularly in wars where things were close enough that you might lose without every advantage on the margin. And beyond that, banning weapons depended upon a notion of 'inherently indiscriminate' - which itself depends upon shifting technology. Especially through the long period in which the fundamental concern of weapons designers was to increase firepower, destruction, and lethality - rather than discrimination - weapons systems were regarded as not inherently indiscriminate if there was some, pretty much any, possibility of being able to aim them. The weapons that were banned - chemicals and then biologicals, e.g. - were regarded as really not capable of being aimed, because wind or disease vectors could shift things around. It was possible to hook into that rationale to argue that landmines could not be aimed, not in a spatial sense, but in a temporal one. But you couldn't say that about a machine gun or a mortar or really big artillery. And that is leaving aside the special legal issues raised by strategic bombing in WWII or nuclear weapons. Banning weapons systems remains very much the exception in the law of war, not the rule. It would surprise me if it turned out to be different for robotics.

Okay, now I'm going to say something crude, and rude, and I apologize if I offend people.  But in these areas, I am not an intellectual but a practical lawyer, and I must say that the critiques in many of the posts and, for that matter, parts of Singer's book, so far as I understand them, quite astonish me. Possibly I don't understand the theory behind all this, but ... well, enough...  So:

Fourth, many of the posts - and a lot of Singer's book - talk about the dark sides of robots on the battlefield, whether in terms of autonomy or remote platforms, etc. From the military lawyer's standpoint, that is interesting, but only as a kind of counterculture to the main thing: viz., whether or not you think the technology will work as intended (and utterly unlike landmines), the purpose here is target discrimination. One can talk about ways in which it reduces disincentives to killing, and how it increases the anxiety of the person fearing death from the sky, and so on. But the fundamental point is that you develop these weapons because, for whatever reasons - humanitarian, political, military, whatever - they allow you to target more precisely, and to do so without putting your own people at risk and without the incentive to increase firepower to protect them. As a lawyer in these fields, I'm having trouble seeing the bad in this - or, to the extent that I understand the highly theoretical critiques sometimes being offered, they seem to me pale shadows of the main thing, which is discrimination. And okay, maybe the technology doesn't work out with respect to autonomy or other things, and you have, in fact, produced something that is indiscriminate.

But the development of the technology is what it is because you are looking for something that targets more narrowly, with less firepower and less collateral damage. Look: I was part of the NGO movement, running the arms division of Human Rights Watch, which was telling the US military that it not only had to stop using landmines - it had to develop sensor technologies to give its missiles more precision, and more precision, and more precision - and that if it didn't invest in the technologies to do that, eventually, not then but someday, we'd start accusing it of proto-human rights violations for negligence in not developing the most discriminating technology possible. A military doesn't develop this kind of technology if its concerns are merely firepower, destructiveness, and increased lethality. Its motives might or might not be humanitarian - but motives are not the point; the point is that the technology is geared along the axis of increased discrimination. A lot of the objections, so far as I can tell, in my quite unenlightened, unintellectual way, come down to saying that war should not be turned into assassination. But that's what perfect target discrimination is - and again, as a non-intellectual in all of this, it seems to me that perfect war is target selection perfected to the point of assassination. Because that is how you achieve the polar opposite of cannon fodder.

Fifth, then, if you ask why humanitarian law types feel conflicted about this, it is because, so far as I can tell, they have a residual concern about reducing disincentives for war; a certain sense that this is somehow 'unsporting'; a certain sense that it increases the use of illegal but quite rational responses such as human shields and sheltering among civilians by the un-teched side; a quite human tendency to root for the underdog, sometimes coupled with a residual resentment of the users of these technologies, the United States and Israel; a fear that the technology will not turn out as planned; but - running exactly the opposite direction - the understanding that, after all, building weapons systems with greater and greater target discrimination, down to the level of the specific individual combatant, is precisely what perfect humanitarian war technology is supposed to do. This is what, starting in the microchip revolution of the seventies and eighties, after all, they began to demand of militaries - that rich militaries invest in discrimination in weapons systems. Well, now they have, or anyway are in the process of doing so, or trying to do so, and so far as I understand objections here based upon theories that I don't know much about, some number of people want to complain that it might work too well?

Maybe it won't work in fact - maybe the results will really be indiscriminate. But many of the complaints being registered here and in Singer's book seem, to my not-super-theoretical grasp, not really to be about the fear that the technology will turn out to be indiscriminate. On the contrary. They are, rather, complaints that the discrimination is, or might become, so good - and that, naturally, the militaries doing this would like to couple it to a theory by which they can still win and, really, win more easily and effectively. But I have to say, it seems to me somewhat difficult to go back to those same militaries and say - well, we like discrimination in targeting, but please not so much that you are picking names off a computer and killing people who are entirely and completely identified. It no longer seems like ... war. René Char called war "this time of damned algebra" - but surely he didn't mean by that a time of damned algorithms running through facial recognition software.

But this comes perilously close to saying that war is aesthetically, if not morally, unattractive when it loses all its anonymity. Or, to be perfectly vulgar about it, it is as though we were saying that we prefer our dinner with pork, but only so long as we didn't personally know the pig.

Reader Comments (2)

Ken, well, I don't fully agree with your discussion of why AWS might or might not be discriminate, but I'll post further on this momentarily. Let me say, though, that you raise an interesting distinction between jus in bello arguments about whether robots can fight in accordance with the laws of war, on the one hand, and a completely different argument about whether we would want them to, on the other. Since I'm interested in humanitarian law developments, my frame of reference is definitely the former. It would be an interesting and ironic case of a norm effect if in fact these sorts of constitutive ideas about war held human security folks back from pursuing rulesets in the context of their current regulative regime.

Apr 2, 2009 at 5:50 | Unregistered Commenter Charli Carpenter

Ken, if we are concerned with the Laws of War, then we have already lost the fight. Debates over discrimination and proportionality are so easily twisted to fit one's needs as to be laughable. Justifying actions based on Western concepts of what is just is quaint and dangerously naive. If we are defending discrimination and proportion with lawyers, we have already lost the fight, because global audiences will not take the time to deliberate the finer points of the engagement and weigh the evidence.

Previous incidents of “technical failure,” or even “out of control” proxies (i.e. contractors) in Iraq, reflect onto the United States whether the blame is accepted or not. Deniable accountability, whether or not shrouded in law, is a myth in a world where influence matters and perceptions can trump fact.

Let's get this straight. The law of proportionality applies only to offensive actions. In other words, international law will permit certain acts if they are justified as a defensive action that may cause significant blowback in the war of information and influence. Let's face it, the Laws of War or International Humanitarian Law are "enforced" by normative behavior and public opinion anyways.

Let me be clear, going with jus in bello to justify actions or permit actions actually creates an environment that's too permissive. Moreover, the jus in bello debates are Western debates anchored in Western traditions.

Somebody, I think it was Charli, commented on the pre-emptive outlawing of technology in a rebuttal to Singer's "robots are too far ahead of the curve" argument. It's useful to remember that chemical warfare was outlawed prior to wide military deployment of gas. More important is to remember that the prohibition applied only to use against other signatories of the agreement. The implication being that all civilized nations signed, so therefore "non-civilized" nations were fair game. Fair use of the law? What about the firebombing of Dresden? It was justified as a defensive act (a justification we had to push hard, because American air crews needed a bit of convincing). The Laws of War are malleable, and today that malleability works only in the West, if it works at all.

Apr 2, 2009 at 6:19 | Unregistered Commenter Matt Armstrong
