History & Development of Autonomous Weapons


Definition of "autonomous weapon":

In order to discuss autonomous weapons adequately, a working definition must first be established. Roth (CPSR) places computerized weaponry on a continuum of autonomy. First, there is Direct Operator Control (DOC): DOC weapons are controlled by human operators who are in physical contact with the machine. Next is Teleoperator Control, in which the unit is still controlled by a human, but at a distance; drone aircraft (RPVs: Remotely Piloted Vehicles) are an example of teleoperated weaponry. The next level comprises weapons under Preprogrammed Operation: these units are programmed to perform a specific task and follow a predetermined set of orders on their own on the battlefield (CPSR Newsletter, Roth). Approaching truly autonomous technology is Structured Control, in which units work in conjunction with artificial vision or sensor systems to respond in a rudimentary way to environmental stimuli.
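Roth's continuum can be thought of as an ordered classification. The sketch below is purely illustrative and not from Roth: the enum names and the `requires_human_in_loop` helper are our own assumptions about how one might encode the four levels, with increasing values denoting increasing autonomy.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical encoding of Roth's continuum; the level names follow
    # the text above, but the numeric ordering is illustrative only.
    DIRECT_OPERATOR_CONTROL = 1   # human in physical contact with the machine
    TELEOPERATOR_CONTROL = 2      # human control at a distance (e.g. RPVs)
    PREPROGRAMMED_OPERATION = 3   # follows a predetermined set of orders
    STRUCTURED_CONTROL = 4        # responds to sensed environmental stimuli

def requires_human_in_loop(level: AutonomyLevel) -> bool:
    # Assumption for illustration: levels below Preprogrammed Operation
    # need a human operator during engagement; higher levels do not.
    return level < AutonomyLevel.PREPROGRAMMED_OPERATION
```

Modeling the levels as an `IntEnum` makes the "continuum" aspect explicit: levels can be compared directly, so a policy check like `requires_human_in_loop` reduces to an ordering test.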

Based on our research, we have chosen to define autonomous weapons as those weapons which have the capability of functioning at some level without human input or supervision. Furthermore, such weapons must be able to:


"Smart" weapons?

A history of self-guided weapons from World War I to the present can be found here. This link also discusses the artificial intelligence at work in these increasingly autonomous weapons and weapons systems. Modern autonomous weapons technology is the result of continuing efforts to provide electronic aids and enhancements to military pilots. These developments have offloaded an ever-larger share of navigation, target acquisition, and attack functions from the pilot. Modern advances in processor, sensor, and algorithm technologies now make the development of truly autonomous weapons imminent (Haystead).

For a more in-depth history of autonomous weapons development, please link to our Time-line page.

What are some implications of having smart, perhaps autonomous, weapons available for use in warfare?

Development of "smart" weapons (precursors to truly autonomous weapons) was first undertaken on the assumption that they could minimize collateral or unintended damage to both property and people. The concept of high technology fighting in the place of human beings was very attractive, encouraging early use of these weapons in conflict, or possibly even the initiation of conflict.

However, with increasing development has come a growing concern over ethical and philosophical issues. The qualification "autonomous" implies that a weapon bears responsibility for its own actions: that any failure is the computer's fault. This attitude could obviously lead to a denial of human responsibility in warfare involving autonomous weapons. Beusmans et al. argue that responsibility for using autonomous weapons must remain with the commanding officer who orders the use of a weapon and the operator who decides to use it. Furthermore, this responsibility must also be acknowledged by the weapon designers who encapsulate knowledge of real targets and how to locate them (Beusmans et al.). For a more in-depth review of the problematic issues related to autonomous weapons, please go to our "Arguments For" and "Conclusions" links.

Gateway to the sites referenced above and other related sites.