Arguments Against


Synopsis

  1. ERRORS: Bugs, interface problems, and similar failures have large consequences (the Fail-Safe/WarGames scenario).
  2. COMPLEXITY: Autonomous weapons are more complicated in design than current weapons and may be difficult, if not impossible, to build (cf. SDI).
  3. MORAL QUESTIONS: Dehumanizing war; placing the power to kill in the hands of machines.
  4. DISCERNING SURRENDER: How will these weapons be able to hold fire or take prisoners?

What issues complicate the development and use of autonomous weapons?

One of the fundamental ethical problems associated with computer-aided autonomous weapons is the risk of software failure. In addition to making more decisions independent of human input, current autonomous weapons have grown increasingly complex and may be more difficult to employ than conventional human-guided weapons. The more complicated these systems become, the greater the risk of programming bugs or judgment errors. Such flaws could lead to critical mistakes in the midst of battle, such as mistaken identification or misinterpreted data, and the consequences of such errors are severe. In 1988, the Navy's highly sophisticated Aegis radar system (designed to determine whether nearby aircraft are friend or foe, military or civilian) led the USS Vincennes to shoot down a civilian Iranian Airbus, Iran Air Flight 655, killing all 290 people on board. The incident proved that, despite the increased identification capabilities of a system such as Aegis, more advanced weapons technology can still produce erroneous judgments.
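
To make this concern concrete, consider a minimal sketch in Python (the rules, names, and thresholds are entirely hypothetical; this is not the actual Aegis logic) of how a single default buried in an identification routine can turn ambiguous data into a lethal misclassification:

    # Illustrative only: a rule-based identification routine whose
    # handling of ambiguous data quietly encodes a life-or-death default.

    def classify_contact(transponder_code, altitude_ft, is_descending):
        """Label a radar contact as 'civilian', 'hostile', or 'unknown'."""
        if transponder_code == "civilian":
            return "civilian"
        if transponder_code == "military" and is_descending:
            # Descent toward the ship is read as attack geometry.
            return "hostile"
        # The bug-prone default: with no transponder reply, any descending
        # contact is treated as hostile, so a climbing airliner whose
        # signal is misread lands in the same bucket as an attacker.
        if is_descending or altitude_ft < 10_000:
            return "hostile"
        return "unknown"

    # One misread input flips the outcome.
    print(classify_contact(None, 12_000, True))   # -> hostile
    print(classify_contact(None, 12_000, False))  # -> unknown

The point is not these particular rules but that some default must be chosen for every ambiguous case, and each default silently encodes a judgment that, in battle, carries lethal consequences.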

In addition to the issue of software failure, the complex design of autonomous weapons raises the even more practical question of whether such advanced systems can be built at all. Current technology is "smart" at best, but not truly autonomous. Laser-guided bombs and cruise missiles may be able to find their own way to a predetermined target, but whether they will ever be able to decide for themselves when and where to fire is another question. Despite growing optimism about applying artificial intelligence to high-tech weapons systems, history gives reason for doubt. In the 1980s, many defense technologists initially embraced Ronald Reagan's proposal for the Strategic Defense Initiative (the "Star Wars" program) with wide-eyed hope, only to conclude after repeated investigations and reports that the program's sheer complexity made it impossible to design and implement. Testing such a defense system was equally problematic; nothing short of an actual nuclear war could adequately determine the technology's effectiveness. Similar difficulties surround the creation of autonomous battlefield weapons. Even if the technology were possible, how could designers find a proven means of testing its accuracy and effectiveness? Another practical question concerns the cost of building such systems: "We can make a weapon do essentially what we want, but can it be cheaply mass-produced? As of now, the military is not convinced. And because of the expense, the uniformed services cannot try them out enough to build up confidence in them" (DeMeis).

An even larger ethical argument against the use of autonomous weapons is the moral implication behind the very idea of a man-made sentient killing machine. This theme provides the all-too-familiar backdrop of the movie The Terminator and its sequel Terminator 2, in which futuristic human-designed robots are sent out to kill living targets. Although this kind of technology remains fictional, greater decision-making ability on the part of the weapon itself only creates greater potential for machines to possess the capacity to murder. In the words of one author, "These weapons will be the first killing machines that are actually predatory, that are designed to hunt human beings and destroy them" (Warner). Even in its current development of the Tomahawk missile, the Navy still hesitates to give its weapon system the independent ability to identify, decide, and fire: "The Tomahawk system would seem to be a natural fit for transition to complete autonomy through the addition of an automatic target recognition and attack capability, but in actuality, the Navy is looking at quite a different approach -- keeping the man in the loop longer" (Haystead). This decision suggests that equipping a weapon with the power to make aggressive decisions on its own is too morally problematic even for the military itself. According to Gerald O. Miller, Technical Director of the Navy Cruise Missile Program Office, war-related judgments on whether or not to attack are still best left in human hands: "The man makes the decision, the missile executes it."

Lastly, the question of mercy has been somewhat overlooked in the development and design of autonomous weapons. Although computer-guided weapons may be able to identify and fire at the right targets in the right locations, will they also be able to recognize signs of surrender? Given the many possible responses an opposing military might communicate to an attacking force, the "fire-and-forget" philosophy fails to account for surrender conditions that would lead human soldiers to abort an attack. No reasonable amount of technical design, it seems, could equip autonomous weapons with the ability to discern enemy surrender and subsequently hold fire. In the words of one writer, "Computers, of course, do as people tell them. The hard part is for people to foresee all circumstances and write instructions to handle all circumstances optimally" (Lemmons). Furthermore, with weapons designed to select and fire on their own, the option of taking captives alive is precluded.
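
Lemmons's observation can be made concrete with a minimal sketch in Python (the names and rules are entirely hypothetical): a hold-fire check that enumerates known signs of surrender can never enumerate them all, so anything the designers failed to foresee falls through to engagement.

    # Illustrative only: an engagement rule that tries to enumerate
    # surrender conditions in advance. The list can never be complete.

    SURRENDER_SIGNS = {"white_flag", "weapons_down", "hands_raised"}

    def should_engage(observed_signals):
        """Engage unless an explicitly listed surrender sign is observed."""
        if SURRENDER_SIGNS & set(observed_signals):
            return False  # hold fire
        # A radio call, an improvised gesture, a cultural signal the
        # designers never anticipated: all invisible to this rule.
        return True

    print(should_engage(["white_flag"]))         # -> False (holds fire)
    print(should_engage(["improvised_gesture"])) # -> True  (fires anyway)

However long the enumerated list grows, such a design reduces mercy to whatever its programmers managed to foresee in advance.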