Philosophical Interests in Program Verification
By: Paul Tenney

The philosophical issues surrounding program verification belong primarily to the theory of knowledge. Program verification is a very specific instance of that theory, so in order to create a frame of reference within the subject, one must first step back to a less specific example: mathematical proofs. Theorem proving, in the classical sense of the term, is a matter of constructing direct logical deductions to demonstrate the truth of a statement. The word truth is an ancient and controversial word in philosophy, and thus its use in the study of program verification begets controversy.

A Priori versus A Posteriori

In the philosophical sense, truth is something that is absolutely certain, without the possibility of incorrectness. While many philosophers cast doubt on the idea that such a thing exists, a more common conception of truth is that it must follow from a logical order of statements. In other words, there must be a means to verify a statement as true without utilizing any sort of experiential data. A priori refers to knowledge that can be known in this sense. A posteriori knowledge, on the other hand, is based upon precedents learned from experience. Between the two, only a priori knowledge can represent truth. A posteriori knowledge instead represents merely a degree of certainty. It is a belief held by one person or a body of people, but it is not necessarily true. For instance, if there were a tribe of people living on an island where there were only red apples, one member of the tribe might state that all apples are red. This assumption, based on experience, cannot be contradicted from the standpoint of the individual or the community, yet it is in fact false. When working with worldly sorts of information, nothing can be demonstrated as true; a claim can only become overwhelmingly certain. One of the few areas in which truth can be demonstrated is mathematics, because the subject is based strictly upon logical construction.


Classical mathematics is based upon certain formulas which are subject to demonstration. For any mathematical proof, there must be a series of true premises which result in a validation of the theorem being proved. However, there is a fundamental philosophical difference between the theorem and the proof that must be noted. One of the first articles to make this point was "Social Processes and Proofs of Theorems and Programs," written by Richard De Millo, Richard Lipton, and Alan Perlis in May 1979 for the Communications of the ACM journal. The authors suggest that while it is possible to determine the correctness of a theorem through a long chain of formal logic, mathematicians instead use a proof, "a very different animal," as the authors call it. A proof is intended to increase confidence in a theorem rather than to actually show its validity. In this sense, the word proof seems to be a bit of a misnomer: a proof, philosophically speaking, is a demonstration of a truthful statement. Because many proofs are shown by other mathematicians to be incorrect after a period of time, the subject can be seen as an almost empirical process. This method of "proving" theorems has led to two classifications of mathematicians. The classicists hold that for any theorem to be accepted as valid, one must believe that there is a correct, formal, checkable deduction behind it. The probabilists, on the other hand, take into account the possibility of error in any proof, and therefore state theorems probabilistically and give probabilistic proofs. The advantage of such an approach is that it is technically simpler than the classical method and may allow mathematicians to focus on critical ideas rather than extensive demonstrations. Because the probabilists do not focus strictly upon deductive arguments, their approach can be classified as a posteriori.
One example given by De Millo, Lipton, and Perlis of the probabilist approach is Michael Rabin's algorithm for testing probable primality. Essentially, the computational time needed to determine the primality of a very large number is immense, so if one is willing to settle for a good probability that the number is prime, a solution can be obtained in a very small amount of time, with a very small probability of error. Such an algorithm is able to become part of further proofs and theorems, but because there is even a very small probability of error, all theorems based upon such a proof cannot be considered true, but rather likely. De Millo, Lipton, and Perlis suggest that this character of mathematics leads to a social process which determines whether mathematicians feel confident about a theorem. James Fetzer, in a response to De Millo, Lipton, and Perlis, points out that a social process is neither necessary nor sufficient for a proof to be valid. This is the important philosophical distinction in the theory of knowledge for mathematics.
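The trade-off Rabin's test makes can be seen concretely. The following sketch (Python; the function name is mine, and this is the common Miller-Rabin form of the test, not code from the article) never declares a prime composite, but for a composite input each random round has at most a one-in-four chance of failing to expose it, so the error probability shrinks to at most 4 to the power of minus rounds:

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    Returns False if n is definitely composite, and True if n is
    probably prime; for composite n the chance of a wrong answer
    is at most 4**(-rounds).
    """
    if n < 2:
        return False
    # Trial division by small primes handles small cases exactly.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation
        if x in (1, n - 1):
            continue                # this round reveals nothing
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness that n is composite
    return True
```

A theorem built on `is_probably_prime(n)` inherits its probabilistic status: however small the residual error, the conclusion is likely rather than true, which is exactly the a posteriori character De Millo, Lipton, and Perlis describe.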

More Theory of Knowledge

As part of the theory of knowledge, it is important to define the terms inductive and deductive. In Fetzer's article, he offers the following definition:

The features that distinguish (good) deductive arguments are the following:

a) they are demonstrative, i.e., if their premises were true, their conclusions could not be false (without contradiction);

b) they are non-ampliative, i.e., there is no information or content in their conclusions that is not already contained in their premises; and,

c) they are additive, i.e., the addition of further information in the form of additional premises can neither strengthen nor weaken these arguments, which are already maximally strong.

On the other hand, a good inductive argument is non-demonstrative, ampliative, and non-additive. From a philosophical standpoint, deductive arguments are meant to produce truth, while inductive arguments are meant to expand the field of knowledge. One example of such an expansion is making a conjecture about a population from a sample. For a computer program to be verifiable, by contrast, it must be subject to deductive procedures: in order for an inference to be made about the execution of a program, there must be rules of inference concerning its performance.
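The inductive case can be illustrated with a small sketch (Python; the function name and the red-apple population are invented for illustration, echoing the island example above). The estimate drawn from the sample is ampliative, so it can be false even when every premise, that is, every observed apple, is reported truthfully:

```python
import random

def sample_estimate(population, k, seed=0):
    """Inductive inference sketch: estimate the fraction of red
    apples in a population from a random sample of size k.

    The conclusion says more than the premises contain (it is
    ampliative), so unlike a deductive conclusion it may be wrong.
    """
    rng = random.Random(seed)           # fixed seed for repeatability
    sample = rng.sample(population, k)
    return sum(1 for apple in sample if apple == "red") / k

# A population that is 90% red: a small sample may suggest, but
# can never prove, that all or most apples are red.
population = ["red"] * 90 + ["green"] * 10
```

A further observation (a larger sample, a green apple) can strengthen or weaken the conclusion, which is precisely the non-additive character Fetzer attributes to induction.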

Program Verification

When the correctness of a program, its compiler, and the hardware of the computer have all been established with mathematical certainty, it will be possible to place great reliance on the results of the program, and predict their properties with a confidence limited only by the reliability of the electronics.
C.A.R. Hoare

The most obvious analogy that could be drawn between mathematics and program verification would be to say that theorems are to proofs as programs are to verification. In each case there is a need to examine the first item and use deductive logic to determine its validity. In terms of programming, however, it is much more useful to see a program as a function from inputs to outputs. A better analogy to mathematics is then: premises are to rules of inference are to theorems as inputs are to programs are to outputs. Rules of inference resemble a program in that a derivation is a specific application of the premises, just as a program is a specific implementation of an algorithm. De Millo, Lipton, and Perlis demonstrate that complex programs have aspects such as error messages and user interfaces which are unverifiable by definition.
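This premises-rules-theorem analogy is what Hoare-style verification makes explicit: a precondition plays the role of the premises, the program text that of the rules of inference, and the postcondition that of the theorem. A minimal sketch (Python; the function and its specification are invented for illustration, with the specification expressed as runtime assertions rather than a formal proof):

```python
def integer_sqrt(n):
    """Compute the integer square root of n.

    Hoare-style specification:
      precondition  (premises):   n >= 0
      postcondition (theorem):    r*r <= n < (r + 1)*(r + 1)
    """
    assert n >= 0, "precondition violated"
    r = 0
    # Loop invariant: r*r <= n holds on every iteration.
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

Deductive verification would prove that the invariant and postcondition hold for every input satisfying the precondition; running assertions on particular inputs, by contrast, is an inductive check, which is exactly the distinction at issue in this debate.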

In this sense, much as mathematics may rely on probability rather than absolute proofs, program verification must be based upon a relative verifiability. A program is a particular implementation of an algorithm in a language that is understandable by the machine. Fetzer describes a program as "a causal model of a logical structure of which a specific algorithm may be a specific instance." In this interpretation, the algorithm is analogous to pure mathematics, while the program is analogous to applied mathematics. The term program must be further defined, as it can take any of the following meanings: (i) algorithms, (ii) encodings of algorithms, (iii) encodings of algorithms that can be compiled, or (iv) encodings of algorithms that can be compiled and executed by a machine. Each successive sense is less abstract than the one before it. If the word "program" were defined as simply an algorithm, it could not fail to be verifiable, because an algorithm, if effective, must follow deductive logic. The definitions in (i) and (ii) refer to abstract machines, not to anything run on a tangible machine, whereas (iii) and (iv) refer to physical machines and cannot be absolutely verifiable. Cases (iii) and (iv) can only be established by inductive means.


The philosophical importance behind the entire subject of program verification is that an a priori determination of the output of a program is impossible. However, using inductive means to examine the causal relationship between input and output for a specific implementation of a program can yield a great degree of certainty, just as in probabilist mathematics. Philosophically, there is still a controversy in using the term "verification," because, like "proof," it suggests an absolute certainty. The subject of program verification is thus an excellent example of the controversy between mathematics and philosophy.