Regulation

The Rise of Robots

Technocrats from many developed countries, especially Japan and South Korea, are preparing for the human–robot co-existence society that they believe will emerge by 2030. (Weng, Chen, and Sun)

As the case studies of robotic nurses and army robots demonstrate, we are heading toward a future in which greater dependence on robots is necessary, or at least highly beneficial. The example of self-driving cars also highlights how difficult it is to make practical progress in robotics while keeping that progress in step with appropriate regulation.

It may not be possible to make such robots completely safe under all conditions, so it is important to consider what can be done to ensure robots that are both sufficiently advanced and sufficiently safe.

How do you regulate robots?

While the precise definition of “robot” may be a matter of contention (e.g. how much of its “own” intent a robot should exhibit), certain applications of technology clearly fall under the category, and our case studies provide specific examples. Such robots are generally designed to surpass human capabilities at certain tasks, and if a robot is to be given the power to make “decisions” (which it must, if it is to do anything effective), how do we assign responsibility for ensuring that it acts correspondingly “safely”? It is a matter of regulation to specify what these robots should and should not do while carrying out their actions.

There are two important, related questions with regard to designing safe robots: How do we regulate the responsibilities of the robots themselves, and how do we regulate the responsibilities of those who design, manufacture, and use them? While the first can be addressed in the context of Asimov’s laws, the second is more complicated.

Asimov’s Laws of Robotics

Even in early anticipation of modern developments, the speculative world of science fiction offered many examples in which advanced forms of artificial intelligence are designed to follow fundamental rules of safety, yet end up doing so at the cost of the true safety of humans. This comes up in such classic examples as HAL, but perhaps no author has had as much influence on the exposition of the subject as Isaac Asimov, who created the now-famous “Three Laws of Robotics,” which are intended to guide the safe behavior of advanced intelligent robots.

Asimov’s Three Laws of Robotics (1940):

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey orders given it by human beings, except when such orders conflict with the First Law.
  3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules seem practical and prescriptive, giving a basic idea of what it is “right” for robots to do, but they are aimed more at ensuring “safe” robots than fully “moral” or “ethical” ones. Roger Clarke’s 1993/1994 article in IEEE’s Computer magazine notes that these rules have become entrenched in the development of robotics because they have “attained such a currency… among practicing roboticists and software developers.” Thus, whether or not these rules are really a good basis, they have provided the framework for discussing rules for such purposes, and their continued relevance gives weight to their sensibility and permanence.
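Read structurally, the three laws amount to a strict priority ordering over constraints: the First Law dominates the Second, and the Second dominates the Third. The sketch below is a minimal, purely illustrative encoding of that ordering; the Action model and its boolean flags are hypothetical stand-ins for judgements a real system would somehow have to make, and those judgements, not the ordering itself, are where the genuine difficulty lies.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical action model: each flag stands in for a judgement a real system
# would have to make before the laws could even be applied.
@dataclass
class Action:
    name: str
    injures_human: bool      # acting would injure a human (First Law)
    allows_human_harm: bool  # not acting would allow a human to come to harm (First Law, inaction clause)
    disobeys_order: bool     # acting conflicts with an order from a human (Second Law)
    endangers_robot: bool    # acting puts the robot itself at risk (Third Law)

def violation_profile(a: Action) -> Tuple[bool, bool, bool]:
    """Violations ordered by law priority: First, then Second, then Third.
    Comparing these tuples lexicographically enforces Asimov's ordering."""
    first = a.injures_human or a.allows_human_harm
    second = a.disobeys_order
    third = a.endangers_robot
    return (first, second, third)

def choose_action(candidates: List[Action]) -> Action:
    """Pick the candidate whose worst violation sits lowest in the hierarchy."""
    return min(candidates, key=violation_profile)

if __name__ == "__main__":
    options = [
        Action("stand by", injures_human=False, allows_human_harm=True,
               disobeys_order=False, endangers_robot=False),
        Action("enter the fire", injures_human=False, allows_human_harm=False,
               disobeys_order=True, endangers_robot=True),
    ]
    # The ordering makes the robot disobey an order and risk itself rather
    # than allow a human to come to harm.
    print(choose_action(options).name)  # -> "enter the fire"
```

Even in this toy form, the flags hide the real problem: deciding whether an action injures a human, or whether inaction “allows” harm, is exactly the judgement the laws take for granted.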

Although Asimov sought to imagine possible futures that contained reasonable and beneficial robots, many of his stories show how the rules lead to complications and contradictions, including “Runaround,” the story in which he introduced the laws. As Clarke notes, Asimov’s own fiction demonstrates that these rules are insufficient, and that at least one more important law should be introduced: the Zeroth Law.

  • Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Introducing “humanity” might be desirable, but it adds a complicated ethical dimension to the laws without providing full protection against undesirable loopholes. Clarke goes on to discuss how one might codify rules that describe what Asimov’s laws are presumably intended to ensure.
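Continuing the hypothetical sketch above, the Zeroth Law would simply be prepended as the most significant element of the ordering. The new harms_humanity flag is again an assumed stand-in; deciding what counts as harm to “humanity” is precisely the complicated ethical judgement, and potential loophole, described above.

```python
# Extending the earlier hypothetical profile: harm to humanity now outranks harm
# to an individual, so a robot could justify injuring a person "for humanity".
def violation_profile_with_zeroth(a) -> tuple:
    return (
        a.harms_humanity,                         # Zeroth Law (hypothetical flag)
        a.injures_human or a.allows_human_harm,   # First Law
        a.disobeys_order,                         # Second Law
        a.endangers_robot,                        # Third Law
    )
```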

But can Asimov’s laws even be extended enough to provide a full foundation for safely operating robots?

The answer seems to be at most “maybe”. It is easy to poke holes in any particular formulation, but the three laws offer a reasonable basis, and they can be taken as a starting point for a system capable of reflecting on these issues adequately. This might not entail very much for a self-driving car, but for a general AI robot it could very well entail the ability to reason about the rules themselves, in the manner of the meta-ethics suggested for automated decisions.