The Genesis

The concept of DNA computing was introduced in 1994 by USC professor Leonard Adleman in the November 1994 Science article "Molecular Computation of Solutions to Combinatorial Problems." Adleman showed that DNA could be used to store data and even perform computations in a massively parallel fashion. Using the four bases of DNA (adenine, thymine, cytosine, and guanine), Adleman encoded an instance of a classic "hard" problem (one whose search space grows exponentially with each additional input parameter), the directed Hamiltonian Path Problem, a close relative of the Traveling Salesman Problem, into strands of DNA and exploited the biological properties of DNA to find the answer.

Adleman stumbled upon the idea of DNA computing when he noticed how remarkably similar DNA replication was to an early theoretical computer devised in the 1930s by Alan Turing. During replication, DNA polymerase slides along a single DNA strand, reading each base and writing its complement onto the new strand; in one version of the Turing machine, a mechanism moves along a pair of tapes, reading instructions from an "input tape" and writing out the result on an "output tape." Interestingly, Turing's simple machine was proven to have the same computing capability as any modern computer. Adleman began to wonder: if Turing's simple machine has such great computational ability, could DNA, which operates so similarly, also perform computations?

It could, as Adleman's first experiment proved. Although the experiment required large amounts of slow, manual labor to separate out the correct answers, carried a high chance of error, and did not scale to larger problems, DNA computing promised immensely high-density storage, unparalleled energy efficiency, and a level of parallelism unknown to digital computers. A new field was born.
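The logic of Adleman's protocol, generate every candidate path at once and then filter out the invalid ones, can be mimicked sequentially in a few lines. This is a minimal sketch of the generate-and-filter idea only; the graph below is a made-up four-node example, not Adleman's actual seven-node instance, and the function name is illustrative:

```python
from itertools import permutations

def hamiltonian_paths(edges, n, start, end):
    """Brute-force analogue of Adleman's generate-and-filter protocol:
    'synthesize' every candidate vertex ordering, then discard any that
    has the wrong endpoints or fails to follow an edge at some step."""
    edge_set = set(edges)
    paths = []
    for perm in permutations(range(n)):          # every possible "strand"
        if perm[0] != start or perm[-1] != end:  # wrong endpoints: filter out
            continue
        if all((perm[i], perm[i + 1]) in edge_set for i in range(n - 1)):
            paths.append(perm)                   # survives every filter
    return paths

# A small directed graph (hypothetical instance)
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(hamiltonian_paths(edges, 4, start=0, end=3))  # → [(0, 1, 2, 3)]
```

The crucial difference is that this loop examines the n! orderings one at a time, whereas in the test tube every candidate strand forms and is filtered simultaneously.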

First Steps

Adleman's experiment sparked huge interest in the community, and it wasn't long before other academics began exploring further applications of DNA computing. Richard Lipton, a Princeton professor, described a method for solving the famous NP-complete satisfiability (SAT) problem only months after Adleman's discovery. In a method similar to Adleman's, Lipton showed how all possible assignments of values in a SAT problem can be represented as paths in a graph (see Fig. 1), which could then be translated into DNA in the same manner as in Adleman's experiment. According to Lipton, one could theoretically extract the answer from this pool through a series of biological steps (pouring mixtures of specific strands between vessels) whose number was directly proportional to the number of clauses in the SAT formula. In other words, Lipton's algorithm needed only a linear number of laboratory steps, with the exponentially large pool of strands doing the rest of the work in parallel, a trade-off unavailable to conventional computers.
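Lipton's scheme can be sketched as a filtering loop: the pool starts with every truth assignment, and each clause corresponds to one "laboratory step" that discards the assignments violating it. This is a simplified sequential analogue, with an illustrative clause encoding (literal +i for x_i, -i for its negation) rather than anything from Lipton's paper:

```python
from itertools import product

def solve_sat(num_vars, clauses):
    """Analogue of Lipton's procedure: the pool holds all 2^n assignments,
    and each clause costs one filtering step, so the number of steps grows
    linearly with the formula while the pool does the exponential work.
    A clause is a list of literals: +i means x_i, -i means not x_i."""
    pool = list(product([False, True], repeat=num_vars))  # all 2^n "strands"
    for clause in clauses:                                # one step per clause
        pool = [a for a in pool
                if any(a[abs(lit) - 1] == (lit > 0) for lit in clause)]
    return pool                                           # satisfying assignments

# (x1 or not x2) and (x2 or x3): a hypothetical 3-variable instance
sat = solve_sat(3, [[1, -2], [2, 3]])
```

If the final pool is non-empty, the formula is satisfiable; in the wet-lab version this corresponds to detecting whether any strands remain after the last pour.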

Although Lipton never published actual experimental results during this time, he and his team contributed greatly to the field with their theoretical papers on molecular computing. Lipton and two Princeton graduate students, Dan Boneh (now at Stanford) and Christopher Dunworth, published a paper on more efficient algorithms for evaluating general Boolean circuits and other optimization problems, also describing how to find the best setting for Boolean networks for which no complete solution existed. The following year (1996), the three published a well-known paper describing a "molecular program" that could be used to break the Data Encryption Standard (DES), a widely used cipher system approved by the US government. In the paper, they claimed that, given one plain-text/cipher-text pair as input, their algorithm could determine the encryption key in approximately four months. Furthermore, if the attacker could specify the plain text to be encrypted, the paper claimed the key could theoretically be recovered in about one day's time.
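The attack model here is a known-plaintext exhaustive key search: try every key until one maps the known plain text to the observed cipher text. The toy sketch below uses a single-byte XOR "cipher" as a stand-in, not DES itself, whose 2^56 key space is exactly what made molecular parallelism attractive; the function names are illustrative:

```python
# Toy illustration of the known-plaintext attack model behind the DES paper.
# The 8-bit XOR "cipher" is a placeholder with only 256 keys.

def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

def recover_key(plaintext: bytes, ciphertext: bytes):
    for key in range(256):                      # exhaustively scan the key space
        if toy_encrypt(plaintext, key) == ciphertext:
            return key                          # first key consistent with the pair
    return None

pair = (b"attack at dawn", toy_encrypt(b"attack at dawn", 0x5A))
print(recover_key(*pair))  # → 90 (0x5A)
```

In the molecular version, each candidate key is carried by its own strand, so the entire key space is tested by a fixed number of laboratory operations rather than a loop.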
