PC Magazine, January 14, 1986; Vol. 5, No. 1, p. 97; ISSN 0888-8507
Copyright 1986 Ziff-Davis Publishing Company

Can Computers Be Fail-Safe?
By Stan Augarten

The Department of Defense, whose favorite pastime is throwing good money after bad, has devised another idiotic scheme to save us from the Communists. You've probably all heard of it by now: it's known as the Strategic Defense Initiative, aka the Star Wars defense system, and it calls for the design and construction of a vast network of computers, satellites, laser beams, antiballistic missiles, and other ridiculously expensive and unreliable electronic gadgets to protect us from a Soviet nuclear strike. It's one of the stupidest ideas the Pentagon ever had.

Star Wars, which the Pentagon estimates will set us back about $26 billion (but you can bet that it will end up costing much more; you know how expensive those custom-made toilet seats can be), is only one of several heavily computerized weapons systems that the military is pushing. For example, the Army is seeking to develop an "autonomous land vehicle," an unmanned tank that could fire weapons and conduct reconnaissance without human guidance; the Air Force is hoping to devise a "pilot's associate," a computerized copilot that would help fliers operate their electronic weapons and engage in deadlier dogfights; and the Navy is working on artificial intelligence systems for its warships, which, as the sinking of the British destroyer Sheffield in the Falklands war demonstrated, have become sitting ducks for any smart bomb or guided missile.

At the heart of all these projects is a major military research-and-development campaign known as the Strategic Computing Program (SCP). A 5-year, $600-million effort, it began in 1984 with an initial budget of $50 million. The SCP wants to develop a new generation of intelligent weapons, such as the Army's autonomous vehicle, that could fight a battle, perhaps even an entire war, with little or no human intervention. The SCP is run by the Defense Advanced Research Projects Agency (DARPA), sort of a military venture capital firm that finances state-of-the-art research-and-development projects. DARPA's armchair warriors have outlined their absurd ambitions for the SCP in a report entitled Strategic Computing, New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense. DARPA may be the death of us all:

    Instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. . . . In contrast with previous computers, the new generation will exhibit humanlike, "intelligent" capabilities for planning and reasoning. . . . Using this new technology, machines will perform complex tasks with little human intervention, or even with complete autonomy. . . . Our leaders will employ intelligent computers as active assistants in the management of complex enterprises.

Developing all this hardware isn't going to be easy, but writing the software will be much harder, perhaps impossible.
The SCP is planning to create the largest programs ever conceived; the Star Wars network alone will require about 10 million lines of computer code, while such lesser projects as the autonomous vehicle will need several million each. Not only will these programs dwarf the biggest programs in existence today, but they will have to transcend the rigid logicality of current programming and achieve a plastic versatility akin to human thought. Not known for its pessimism, the SCP hopes to achieve all this by the early 1990s.

Of course, the necessary software technology doesn't exist and won't for decades, if ever; today's computer scientists can't even get a wheeled robot to roll into a room and pick out a copy of War and Peace from a pile of comic books, let alone tell the difference between friend and foe. Anyone working in that pie-in-the-sky business called artificial intelligence knows how far down the road such abilities really are, but the military, afflicted by an incredibly bad case of technological hubris, thinks anything is possible. The Pentagon is quite literally banking on major technological breakthroughs appearing on schedule (for another view of artificial intelligence, see "Is There Intelligent Life in the PC?" elsewhere in this issue).

Our political and military leaders have a childlike faith in American technological prowess. I can't help wondering whether President Reagan, Cap Weinberger, and the other backers of Star Wars have ever used a computer. For if they had ever done something as commonplace as buying a personal computer, uncrating it, plugging it in, booting it up, and running a few canned programs on it, they'd realize just how complicated and fickle even the finest machines can be. Under the best of conditions, computers occasionally fail; in the heat of battle, when the physical safety of both silicon and men is pushed to the limit, even the most carefully built and programmed computers will break down.

Computers are prone to many different kinds of errors, from the breakdown of integrated circuits (and some of America's leading chip manufacturers have been fined for neglecting to test military components thoroughly) to unforeseen software bugs. The Pentagon knows this but for obvious reasons has decided to ignore the whole subject. Here's a short list of some of the most notorious computer fiascos:

In 1977, the Worldwide Military Command and Control System flunked a major test. It failed to send messages 62 percent of the time, and the Readiness Command, a crucial arm of the system, broke down 85 percent of the time.

In 1979, a computer operator inadvertently fed the wrong magnetic tape into the North American Aerospace Defense Command (NORAD) computer system. The tape, which contained a simulation of a Soviet attack, triggered every bell, whistle, and alarm at NORAD, but luckily the blunder was discovered before it was too late.

Also in 1979, the crew of an Air New Zealand airliner heading for Antarctica fed the wrong routing data into the jet's navigation computer. Its visibility obscured by bad weather, the plane drifted 26 miles off course and crashed into a mountain. More than 200 people were killed. Some investigators suspect that a similar error in the entry of routing data led Korean Air Lines Flight 007 astray, with equally deadly consequences for its passengers and crew.
And, in what is probably the most costly programming error of all time, the first American Venus probe was lost owing to the inadvertent substitution of a period for a comma in a FORTRAN program. A NASA programmer wrote

    DO 3 I = 1.3

when he should have entered

    DO 3 I = 1,3

That seemingly inconsequential period sent the rocket, launched from Cape Canaveral on July 22, 1962, careering toward populated areas, and the rocket had to be destroyed. Despite NASA's intensive checking procedures, which are probably at least as good as the military's, the mistake went undetected during many months of testing and retesting.

Professional programmers try to write error-free code, but they're only human, and bugs and omissions are inevitable. Such mistakes are insidious and invisible. An improperly programmed computer looks every bit as healthy as a properly programmed one; there are no loose screws dangling from the box or strange noises emanating from the chips. A software bug may lurk deep inside a program, utterly harmless until some computational event awakens it and causes it to wreak havoc. Even after a bug has cropped up, the goof may be excruciatingly difficult to track down and fix, and then the programmer's Band-Aid may interact with other parts of the system in unexpected ways, causing still more bugs, and so on. Often only the original programmers can fully decipher their own code; in large systems, few people, if any, are familiar with the entire program.

There are ways to minimize errors, of course. One answer is programming languages that make it impossible to commit simple syntactical errors, such as the one that killed the Venus mission; another is algorithms that test programs before they are put into operation. But there is no substitute for actual and repeated testing under real conditions. While such testing may be possible for the autonomous tank and other weapons of the electronic battlefield, it isn't feasible for Star Wars. Such a system, designed to foil a massive attack from ICBMs, submarines, planes, and satellites, can only be realistically evaluated in actual combat. Simulations and small-scale battle tests won't do the job, no matter how frequently or rigorously performed. It is precisely in a war, when the system is fully engaged and taxed to the limit, that programming bugs and omissions emerge.

Star Wars is bound to be a billion-dollar paper tiger. But there are two things about the system that are far more disturbing than the vast waste of money and effort: the lies and illusions fostered by the Pentagon and its supporters in the computer industry and the academic establishment. In order to foil an attack, Star Wars must swing into action within minutes of a Soviet onslaught, which means that the system's computers must possess a great deal of decision-making autonomy in evaluating the attack and in orchestrating our initial response. There will be little time for the generals to confirm the warning and approve a counterattack--maybe not even enough time to get the President's permission. And, given the impossibility of properly debugging the system, Star Wars may very well increase the chances of accidental nuclear war. We are on the verge of entrusting our future to computers. As I said, the whole thing is a stupid idea.
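
A footnote on that Venus-probe bug, for readers wondering how a single period could slip past a compiler: the fragment below is a reconstruction for illustration only, with a made-up program name, not NASA's actual guidance code. In fixed-form FORTRAN the compiler ignores blanks, so the mistyped statement is not a syntax error at all; it collapses into a legal assignment to a variable named DO3I that implicit typing quietly creates, and what was meant to be a loop body executes just once.

C     ILLUSTRATION ONLY, NOT THE ACTUAL GUIDANCE CODE.  IN FIXED-FORM
C     FORTRAN BLANKS ARE IGNORED, SO THE MISTYPED LINE BELOW IS READ
C     AS AN ASSIGNMENT TO A VARIABLE NAMED DO3I, NOT AS A LOOP.
      PROGRAM COMMA
C     WHAT THE PROGRAMMER MEANT: EXECUTE THE BODY THREE TIMES.
      DO 3 I = 1,3
         WRITE (*,*) 'INTENDED LOOP, PASS', I
    3 CONTINUE
C     WHAT HE TYPED: THE COMPILER SEES  DO3I=1.3  AND, THANKS TO
C     IMPLICIT TYPING, ACCEPTS IT, SO THE LINE AFTER IT RUNS ONCE.
      DO 3 I = 1.3
      WRITE (*,*) 'MISTYPED VERSION, BODY RUNS ONLY ONCE'
      END

A compiler that insisted on explicit declarations would have rejected DO3I as an unknown variable, which is precisely the sort of language-level safeguard mentioned above; no amount of syntactic help, though, changes the argument about testing the whole system under real conditions.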