"Hasta la vista, baby!" These memorable words, delivered by an advanced cyborg in Terminator II, conjure up images of near-unstoppable killing machines from some Luddite's vision of a hellish future. In this future, mankind has been undone by its own creation: a sentient defense computer, Skynet. However, the human factor--ingenuity, perseverance, and sheer will to live--eventually triumphs over the ruthless logic of Skynet. This dark and chilling future is averted and humanity is restored, thankfully left to kill itself on its own terms and without the aid of advanced, self-thinking machines. In this project, our group will seek to illuminate some of the ethical issues that closely shadow the development of autonomous weapons. Is the fiction of Terminator II anything the common citizen should be losing sleep over? And is autonomous weapons development anywhere near the point of being able to replace the human component on the battlefield?
In today's reality, we are far from achieving the mass production of autonomous killing machines constructed in the guise of a European body-builder. Indeed, we are still in the formative stages of perfecting experimental robots that can act independently to pick up a glass of water without spilling it. The United States military, largely through the Defense Advanced Research Projects Agency (DARPA) and various other Navy, Army, and Air Force research offices, is the single largest funder of work on robots and artificial intelligence. The benefits of implementing autonomous weapons on the battlefield have been made clear by election-year politicians and fund-hungry military officers alike: less loss of human life and greater chances of more precise, coordinated, reliable, and successful warfare. But the negative possibilities have been largely ignored or poorly addressed. Can we afford to continue to research such advanced and potentially powerful technology without closely examining the related social issues?
We intend to present a brief history of autonomous weapons development and describe the current state of autonomous weapons capabilities. We will also attempt to explore some of the ethical and logistical issues raised in this field: Do we really have the capability to develop effective autonomous weapons? Can we trust autonomous machines that have been programmed to kill humans? And finally, can we afford to continue developing autonomous weapons, both economically and ethically, if the first two questions cannot be answered satisfactorily?