Our world is becoming increasingly automated, with many tasks handled by computers and robots. Although this brings many benefits, it is unclear who is responsible for the associated risks. We believe that automation is necessary for the world, and that an automated future can be safe. However, all of society needs to treat safety as a fundamental concern in the creation of robots. This site covers case studies of contemporary robotic applications:
In addition, we examine the problem of regulation: how can the creation and operation of safe robots be guided by rules?
What emerges is not a simple, prescriptive answer. The world is changing, and robots are going to play important roles in our lives. For this to happen responsibly, we must all be aware of the risks associated with advanced machines.
Recent advances in technology have brought many benefits and changes to our society. As the software behind this technology becomes increasingly complex, humans are able to rely on the automation of tasks that would otherwise require much time and effort. However, this complexity also creates problems whose solutions depend ever more on the power of computers. Moreover, technology's benefits themselves raise new issues that require ever greater technical creativity to solve.
For example, technological leaps in medicine have led to an increase in average life expectancy, but have left the ever-growing elderly population without adequate resources to support it. This is a particularly pressing problem in Japan, where nursing homes are increasingly turning to robots to relieve a strained labor force. Similarly, advances in business, the military, and research have led to systems that are either too large or too dangerous for humans to operate; algorithmically advanced robots may offer a solution.
Today, robots already have quite a presence, but the technology is still young, and the dangers of giving machines too much responsibility are not yet evident. However, as machines become more intelligent, they will be placed in situations where they must make decisions for humans. It is important to continue development with a strong understanding of the risks. We cannot guarantee ultimate safety, but we can guard against these risks. Our research focuses on how to draw the line between power and safety when assigning responsibility to robots. We will also cover the extensive changes that engineers, manufacturers, and users of this technology will have to implement to ensure that intelligent machines do not pose a serious risk to human users.
Who are we?
This site is a project by Kseniya Charova, Cameron Schaeffer, and Lucas Garron for the CS181 course at Stanford in Spring 2011.