Conclusion
In this discussion, we have highlighted the ethical issues relating to the history and development of autonomous weapons, citing both positive and negative instances of their use and deployment.

The argument that autonomous weapons can, in some instances, reduce or even eliminate risks to soldiers by removing them from the battlefield is difficult to counter. Who would oppose making American soldiers less likely to suffer physical harm? With technology advancing rapidly even in second- and third-world countries, the hazards of war are increasing as well: NBC (nuclear, biological, and chemical) weapons are a real threat. Autonomous robots that could fight in these conditions, or aid in detection and decontamination, must be seriously considered.

Autonomous weapons development has primarily been funded and directed by the military. The Defense Advanced Research Projects Agency (DARPA) and the research offices of the Navy, Army, and Air Force (ONR, ARO, AFOSR) represent the major funders of basic and applied research in robotics and "smart weapons" development (CPSR Newsletter, Roth). The Pentagon spends more than $300 million a year on autonomous weapons development (CPSR Newsletter, Chapman).

The Gulf War provides a striking example of the overwhelmingly successful use of semi- and fully autonomous weapons. Desert Storm lasted a relatively short time, and casualties among American soldiers were minimal. The use of autonomous machines was a key factor in minimizing both the duration of the operation and its casualties.

On the other hand, there are uses of autonomous weapons that will always be very risky and subject to open debate. The July 1988 downing of Iran Air Flight 655 by the U.S.S. Vincennes with the aid of the Aegis SPY-1 radar system, as already mentioned, is one example of the failure of an autonomous weapon. The potential for, if not tendency toward, errors and failures in autonomous weapons must be seriously considered. In addition, the complexity of such machines, and the consequent difficulty of building them, will always limit their usefulness in war. Soldiers will always have to be ready to do the real fighting, especially in ground war.

Some analysts question the wisdom of allowing the military to fund, and therefore direct, the bulk of artificial intelligence and weapons development. The questions are obvious: Should "smart," autonomously acting machines be applied solely to the task of killing human beings? And is the military field the best arena for these developments? Autonomous machines could, for instance, be used in the workplace to replace human labor. But this possibility arouses the concern of labor unions, which fear the eventual displacement of human workers by increasingly sophisticated machines. These questions will remain at the forefront of the debate over the development and use of autonomous weapons.