The Three Laws

Seventy years ago, in his short story “Runaround”, science fiction writer Isaac Asimov introduced the “Three Laws of Robotics”. They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In an op-ed today in the Wall Street Journal, Robert H. Latiff and Patrick J. McCloskey lay out the case for restraint in developing autonomous robots with the ability to kill:

These machines will bring many benefits, greatly increasing battle reach and efficiency while eliminating the risk to human soldiers. If a drone gets shot down, there’s no grieving family to console back home. Politicians will appreciate the waning of antiwar protests, too.

The problem is that robotic weapons eventually will make kill decisions on the battlefield with no more than a veneer of human control. Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.

When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster—which at least can be used for spare parts. In civilized societies, even our enemies possess inherent worth and are considered persons, a recognition that forms the basis of the Geneva Conventions and rules of military engagement.

I would go further than the authors of the op-ed do. I would contend that developing robotic weapons with “full lethal autonomy” is inherently a war crime. Gen. Latiff and Mr. McCloskey do not mention the “Three Laws” in their op-ed, but I think it’s time to bring them back into the discussion.

War is death and destruction. The trend of the last forty years in the United States has been to lower the transaction costs of war through air warfare, guided missiles, the volunteer army, and, now, unmanned drones. There should always be a presumption against war but, as the transaction costs of war grow lower, the internalized prohibitions in law, politics, and social conscience must become greater.

Human beings, unfortunately, do not come with the “Three Laws” built into them. We should strive to make our creations better than ourselves in that respect.

20 comments
  • michael reynolds

    I don’t see how it’s a war crime.

    But I do see how these machines will need to be made independent, and that, as you say, is a very fraught moment in human development. We would want to think very hard about that. But I doubt we’ll get the chance. I’m sure weapons planners are already looking ten years ahead to a world with better facial recognition and smaller machines with much longer flying times, and thinking hmmm. . .

  • Same reason that using chemical or biological weapons is a war crime: too hard to control; the killing is too indiscriminate.

    The wisecrack from The Mythical Man-Month may be relevant: if builders built buildings the way programmers write programs, the first woodpecker to come along would destroy civilization.

    I might add that I think you’re overestimating the state of the art in facial recognition. While the sensing equipment has improved a lot, the recognition hasn’t improved nearly as fast. I’d be very surprised if ten years was enough for the kind of thing you’re talking about.

  • PD Shaw

    I’ve not been convinced that drones change much about government’s use of violence, whether military or domestic, but if there is a change, it is in the concern about the “right of surrender.” The more remote the weapon is in distance or time (in terms of programming), the fewer opportunities the target will have to surrender. When the Administration called out an American in Yemen that it intended to kill, he ostensibly did have an opportunity to go to the American embassy and turn himself in, but will that always be the case? One can assume that the robots can be trained to recognize surrender, but I strongly suspect their capacity to terminate will always exceed their capacity to comprehend human behavior.

  • michael reynolds

    Probably true; I didn’t mean the 10 years to be specific.

    I think, though, that drone warfare will end up being the exact opposite of indiscriminate. The great advantage it could confer is specificity – the hunter-killer drone let loose to find a specific person or group. That’s what current drones are doing, using precise munitions to kill specific people. If we need to kill large bodies of people, we have all sorts of fun toys for that. I imagine the drone of the future as a small object able to fly around in cities or difficult terrain, searching relentlessly for its target, and then killing that target with a very small charge or even by impact.

    I suspect the argument then will be that drones are more humane.

    Of course you could also posit a swarm of small drones let loose in an Afghan valley with instructions to kill anything recognizable as a human male.

  • jan

    Advanced technology is a double-edged sword. On one hand, it has produced a society that is faster, easier, and needs less human involvement to perform work and war functions. However, without a heart or soul governing so many of these transactions, daily lives are becoming more remotely controlled, restless, and, IMO, more susceptible to making decisions based on hard, cold calculations rather than on human emotions or a sense of conscience or moral standards.

  • michael reynolds

    In no particular order, Genghis, Attila, Alexander, Hitler, Stalin, Pol Pot, Charles Taylor, Mao, Idi Amin, Saddam, Kims 1, 2 and 3, Hirohito, Leopold II, the Aztecs, the Norsemen, the Assyrians, the Romans, the Spaniards, pretty much every Germanic tribe, the Cossacks, the Americans. The list goes on and on. And you’re worried about machines making decisions, Jan? A machine may kill, but I doubt it would think to tie you up, peel your skin off, kill your child before your eyes, then cut off your genitals and make you eat them. That takes a human.

  • TastyBits

    I think we are a long way from robots that can think independently. Until then, they are programmed. When they do not follow their programming, they are broken, or there is a software bug.

    If a robot is subject to the Three Laws, does that not imply a thinking being? If so, when does life begin? If a robot can think at a dog’s level, why should it not be treated the same?

    These are not parlor game questions.

  • steve

    michael is sort of right here. We are worrying about potential problems, and I think it is an issue, while we have been ignoring real problems. We have now legitimized and institutionalized torture. We have embraced pre-emptive war. The US has embraced bombing as its preferred method of dealing with troublesome countries. Bombing countries w/o an air defense is not really much different than using drones. It is just a cheaper and even safer, for us, way of doing the same thing.

    Steve

  • We have now legitimized and institutionalized torture. We have embraced pre-emptive war. The US has embraced bombing as its preferred method of dealing with troublesome countries. Bombing countries w/o an air defense is not really much different than using drones. It is just a cheaper and even safer, for us, way of doing the same thing.

    What do you mean “we”? I’ve been arguing against all of those tactics in this post, as long as I’ve had this blog, and long before.

  • Michael,

    For somebody who loves the idea of apps taking over the world, you don’t seem hesitant enough regarding artificial intelligence. Have you read Philip K. Dick’s story “Second Variety”? It goes to the very heart of the problem Dave is bringing up. Once we start making robots that can think for themselves, they might decide not to listen to us anymore.

    I think though that drone warfare will end up being the exact opposite of indiscriminate.

    That was the claim with current drone technology and, surprisingly, it hasn’t worked out that way. Granted, it is often humans making the decisions, and some of their “rules” are abominable – e.g., any male of “militia age” (basically teenager and above) is considered a hostile.

    You also seem to have little appreciation for programming oddities. I’ve done some work coming up with rules for automatic forecasting and it is pretty damn hard. Even with good rules you can get some really bizarre and highly erroneous forecasts.
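
    To illustrate the kind of oddity I mean, here is a minimal sketch (hypothetical rule and data, not the actual forecasting work): a perfectly reasonable-looking trend rule, fed a short, noisy sales history, happily projects negative sales a year out.

        # Hypothetical illustration of a rule-based forecast going haywire.
        def trend_rule_forecast(history, periods_ahead=12):
            """Extend the average period-over-period change out from the last value."""
            changes = [b - a for a, b in zip(history, history[1:])]
            avg_change = sum(changes) / len(changes)
            return history[-1] + avg_change * periods_ahead

        # One bad quarter followed by a partial recovery: the rule sees a net
        # downward trend and projects roughly -100 units of sales a year out.
        print(trend_rule_forecast([100, 40, 55, 60]))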

    steve,

    We are worrying about potential problems, and I think it is an issue, while we have been ignoring real problems.

    The problem with this type of thinking is that if you don’t think about it before you achieve such a breakthrough, it could very well be too late. Take, for example, nanotechnology. There are two schools of thought:

    1. It will be a tremendous boon that solves many scarcity issues.
    2. It solves all scarcity issues by turning the planet into a giant ball of grey goo.

    Current thinking is that 2 is not that likely (at least “by accident”). But it might be a good idea to keep thinking about it.

  • jan

    In no particular order, Genghis, Attila, Alexander, Hitler, Stalin, Pol Pot, Charles Taylor, Mao, Idi Amin, Saddam, Kims 1, 2 and 3, Hirohito, Leopold II, the Aztecs, the Norsemen, the Assyrians, the Romans, the Spaniards, pretty much every Germanic tribe, the Cossacks, the Americans.

    Pretty much those are various regimes/eras, headed by dictators/installed leaders without moral compasses, who were followed enthusiastically, with a kind of robotic human allegiance, much like our uninformed voters act today. Such a populace is one that is not governed by any pre-set moral standards of its own.

    It is oftentimes the cultivation of humility, versus that of self-importance, and yielding to a Higher Power, that has been the moderating counterpoint overriding harsh mandates set by evilly-inclined mortals in power. And among the insane acts of men towards one another, there have been singular stand-out examples of heroic, compassionate acts, in the midst of wholesale cruelty, where people have gone against the ‘law of the land’ in order to heed the law of principled behavior first.

  • sam

    “Same reason that using chemical or biological weapons is a war crime”

    That occurred to me too, and for the reasons you gave. Let’s try a little thought experiment. Let us suppose we have developed autonomous drones (of the kind alluded to) that will, we expect (!), home in on people of a certain description and kill them. Now let us suppose that 1) we’ve found a way to produce these drones in megaquantities, and 2) we’ve found a way to shrink them down to nano-size without loss of lethality. This makes delivery very much easier.

    In what sense would releasing a cloud of these nano-sized drones over a target area differ from releasing a cloud of biological agents genetically tailored to kill only certain people over the same area?

  • Steve

    “What do you mean “we”?”

    Sorry kemosabi. (Sp?)

    Steve

  • Fran Striker spelled it “kee-mo sah-bee” although “kemosabe” is probably the most common spelling. It’s hard to know how to spell a made-up word.

  • michael reynolds

    Steve V:

    I would not only expect sentient machines to defy us at some point, I think it’s inevitable. If they don’t, then they aren’t really sentient. But as Tasty points out, we aren’t really talking in those terms yet; we’re talking about machines given hunter-killer missions — programmed to find a face or some other identifier, and then “deciding” whether to kill or not kill.
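
    To make it concrete why “deciding” belongs in scare quotes, here is a minimal, purely hypothetical sketch. The machine’s “choice” is just a hard-coded threshold on a match score; the real decision was made in advance by whoever picked the identifier and the threshold.

        # Hypothetical sketch: the machine's "decision" is a preset comparison,
        # not deliberation.
        def engage(match_score: float, threshold: float = 0.95) -> bool:
            """Return True when the identifier match clears a fixed confidence bar."""
            return match_score >= threshold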

    I do think we’ll get to artificial intelligence at some point, and when we get to that point we’ll do a lot of soul-searching, and then we’ll go right ahead and do it.

    As for drones being indiscriminate, compared to what? Compared to Hiroshima? Compared to giving smallpox-infected blankets to Indians? What’s your baseline? Smart weapons, along with our rules of engagement, are the most discriminating weapons we’ve ever deployed. There are no berserker drones, or drones that panic and start shooting everything in sight.

  • PD Shaw

    @sam, I guess the only real difference is in the targeting. Biological agents almost immediately suggest DNA triggers that could be used to kill off a race or ethnicity, and such targeting may not be limited to the “battlefield” if carriers can spread it across the world until the genocide is complete. I’m not sure how drones would target, but I assume it would be based upon a variety of data, including appearance, whether armed, and location. Also, I assume even nano-sized drones will exhaust whatever energy and resources they need to survive and kill, while biological agents can use human bodies to replenish and spread.

    So count me as more freaked out by biological weapons.

  • michael reynolds

    PD:

    They’ll die off unless they become capable of self-replication, the so-called Grey Goo Scenario. http://en.wikipedia.org/wiki/Grey_goo
    There’s an excellent series of YA novels on this called BZRK, written by someone whose name, um, escapes me.

  • Product placement, eh?

  • Andy

    Lots of problems with that op-ed. Questionable assumptions presented as fact, just for starters. It’s not surprising that the retired Maj. Gen. is a career program manager with no operational experience, which was pretty obvious to me after reading this:

    The next technological steps will put soldiers “out of the loop,” since the human mind cannot function rapidly enough to process the data streams that computers digest instantaneously to provide tactical recommendations and coordinate with related systems.

    and:

    It will be far more difficult for human operators to communicate reliably with remote unmanned weapons in war’s chaos. The unmanned weapons will be impossible to protect unless they are made autonomous.

    Uhh, no. War and warfare are much more than systems engineering. For example, robot ground vehicles are currently barely able to navigate a predetermined route without crashing or breaking down. That is a fundamental function that robots have yet to master. Assuming they can eventually do that and reach an area where the enemy might be located, they must be able to reliably identify friendly, enemy, and civilian personnel, vehicles, structures, etc., and then engage appropriate targets with appropriate weaponry consistent with tactical objectives, the commander’s intent, applicable ROE, and the Law of Armed Conflict (LOAC). Harder yet are real, adaptive tactics, the integration of tactics with operations, and all the other higher-order tasks involved in warfare. We are a long, long, long way from the future described, and frankly I doubt we’ll ever get there.

    And this gets to Dave’s point on legality. Until autonomous weapons systems can independently apply the principles of LOAC in actual combat situations, they would be, IMO, illegal under existing law.

    Frankly, the drone/robot hysteria is getting a bit out of hand (and why people equate drones with robots is something I still don’t understand). I think I might have to resurrect my blog to do some mythbusting this weekend.
