Background and Issues
Safety-critical systems include transportation systems (e.g. planes,
automobiles), nuclear power plants, medical systems, chemical processing
plants, nuclear and conventional weapons, and many military systems,
just to name a few. All these systems have the potential to cause unintended
physical harm. Monitors and controls are necessary on these systems
to ensure that the energies they handle are not released in damaging ways. Before the age of computers, purely physical means (hardware) were used to provide these monitors and controls. Nowadays, computer systems are used in critical roles in all of the system types mentioned above.
Indeed, the use of software has allowed technological advances in many
fields.
There are a number of issues that arise when using software in a critical
system as opposed to only hardware. Because software is so flexible, designers tend to create feature-rich systems that are generally far more complex than a purely hardware solution would be, though also more powerful. This complexity makes it difficult to write software without errors. For one, the requirements
for the software are complex and may not have been captured correctly
or may be ambiguous. Designing to such specifications will likely cause
errors, and the design process itself may introduce errors or oversights.
The coding phase can introduce still more errors. Testing, including random testing, can find many errors, but because the possible inputs and states are so numerous, not every part of the software will be exercised in every possible manner (the rough calculation below illustrates the scale of the problem). Software
can be verified through formal mathematical techniques, but this is
generally very difficult and extremely time-consuming (and an error could always be made in the verification process itself). Other concerns, such as the human-computer interface, can cause further problems, as some of the case studies show. For reasons such as these, creating
trustworthy software is a challenging task.
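To make the testing problem concrete, here is a minimal back-of-the-envelope sketch (written in Python purely as an illustration; the function shape, input sizes, and throughput figure are assumptions, not details from any actual system). Even a routine that takes just two 32-bit integer inputs has far too many input combinations to test exhaustively:

    # Illustrative sketch: size of the input space of a hypothetical function
    # taking two 32-bit integers, and the time needed to test it exhaustively.
    values_per_argument = 2**32              # possible values of one 32-bit integer
    total_cases = values_per_argument**2     # every combination of the two inputs

    tests_per_second = 10**9                 # optimistic assumed test throughput
    seconds_per_year = 60 * 60 * 24 * 365
    years_needed = total_cases / (tests_per_second * seconds_per_year)

    print(f"{total_cases:.2e} cases, about {years_needed:.0f} years to run them all")
    # Prints roughly 1.84e+19 cases and about 585 years at a billion tests per second.

Real safety-critical software has state spaces vastly larger than this toy example, which is why testing alone cannot demonstrate the absence of errors.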
The commercial and military sectors have somewhat different approaches to developing reliable systems, arising from their different needs.
A general assumption is that the military sector holds higher standards for software testing and risk management because of the extreme nature of many of its systems, such as the nuclear arsenal. On the other hand, many of its designs must be very aggressive, such as a fighter jet that must outperform enemy aircraft. Indeed, today's fighter jets are aerodynamically unstable and cannot fly without computer control.
So, interesting questions arise. Is the military able to produce more reliable systems? How do military and commercial design approaches differ? How do their testing and verification practices differ? Where do errors generally arise? What can we learn from these two sectors? These are the types of questions this web site addresses.