Applications of Genetic Algorithms

Genetic algorithms offer great flexibility in the range of problems to which they can be applied. GA's have been instrumental in solving a number of practical problems and remain an active topic of research. The following discussion illustrates some of the ways in which sample problems can be converted into a form solvable by genetic algorithms.

Optimization

Traditionally, most genetic algorithm research has centered on their use in solving numerical function optimization problems. GA's are easily applied to this category of problems and can often handle "messy" optimization problems, finding solutions where conventional optimization techniques fail, including functions that are discontinuous or lack a derivative or gradient. The genetic algorithm approaches the problem with a population of potential solutions. The fitness function plugs each individual solution into the function and evaluates its fitness, favoring the solutions that come closest to the maximum (or minimum). These individuals are selected to breed, and the selection and crossover process continues through multiple generations until the individual solutions converge on a single solution. Mutation can be important for optimization problems that contain many local maxima (or minima), because a mutation can be instrumental in preventing premature convergence.
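
The scheme just described can be sketched in a few lines of Python. This is a minimal illustration rather than any particular published GA: the objective function, population size, and mutation rate are all invented for the example, and the objective is deliberately discontinuous so that gradient-based methods would not apply.

```python
import random

# A "messy" objective: discontinuous, with the global peak hidden
# inside a "bump". It has no derivative at the jumps, so gradient
# methods cannot be used on it.
def fitness(x):
    bump = 10.0 if 4.0 < x < 5.0 else 0.0   # discontinuity
    return x * (10.0 - x) + bump            # best values lie inside the bump

def evolve(pop_size=50, generations=100, mutation_rate=0.1):
    # Population of candidate solutions: real values in [0, 10].
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population breeds.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # Crossover: child is the midpoint of two random parents.
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0
            # Mutation: an occasional random jump helps the search
            # escape local maxima and avoid premature convergence.
            if random.random() < mutation_rate:
                child = random.uniform(0.0, 10.0)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

With these settings the population reliably converges into the discontinuous "bump" region, which a local hill-climber started outside that region would usually miss.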

Prisoner's Dilemma

GA's can be applied to classical mathematical problems, including the prisoner's dilemma. This problem has been studied extensively in the fields of game theory, economics, and political science because of its application to real-world situations such as the arms race. The prisoner's dilemma is a game between two players (Player A and Player B) presented with the same scenario. Both players have been accused of committing a crime together and are being held in separate jail cells, with no communication between them. Both players are offered the following deal: if Player A confesses to the crime and testifies against Player B, B will go to jail for 5 years, while A's sentence will be suspended. However, if B also confesses and testifies against A, each will get 4 years in jail for pleading guilty. If neither of them confesses to the crime, then both will be convicted on a lesser charge and spend only 2 years in jail. Each player must make this decision without knowing the other's choice.

Strategy figures into this problem because the game is repeated many times. Each game consists of one move by each player. Moves can be thought of in terms of "cooperation" with the other player or "defection" against him or her. The best strategy seems to be that if you suspect the other player will cooperate, then you should defect; if you suspect that the other player will defect, you should defect too. This creates the dilemma: both players score worse if both defect than if both cooperate. But if it is advantageous in most situations for a player to defect, how do you encourage reciprocal cooperation?

In his in-depth study of the prisoner's dilemma, Robert Axelrod of the University of Michigan set up two strategy tournaments. In Axelrod's tournaments, each player had a memory of the outcomes of the three previous games. Also, the game was altered slightly so that the goal was to gain as many points as possible rather than to avoid years in jail. The payoff matrix is shown below.

                       Player B cooperates    Player B defects
Player A cooperates          3, 3                   0, 5
Player A defects             5, 0                   1, 1

The two numbers in each box correspond to the points that Player A and Player B receive respectively for the given move. The numbers translate from the original problem as five minus the number of years in prison.

The strategy that proved to be the winner of both tournaments was known as TIT FOR TAT. This strategy cooperates in the first game, and then in each subsequent game, makes the move that the other player made in the last game. In essence, this strategy offers cooperation and reciprocates it, while it also punishes defection.
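
As a concrete illustration, here is a small simulation of TIT FOR TAT using the point values from Axelrod's payoff matrix (3 each for mutual cooperation, 1 each for mutual defection, and 5 versus 0 when one player defects against a cooperator). The function names and the ALWAYS DEFECT opponent are invented for this sketch.

```python
# Points awarded per game for (Player A's move, Player B's move),
# where 'C' = cooperate and 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate in the first game; afterwards copy the opponent's
    # last move (reciprocating cooperation, punishing defection).
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each player sees the opponent's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Over ten games, TIT FOR TAT playing itself earns 30 points per player; against ALWAYS DEFECT it gives up only the first game before settling into mutual defection.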

Axelrod wanted to see if a genetic algorithm would also come up with the TIT FOR TAT strategy. The following explanation shows how Axelrod encoded the problem into a string that could be used in a genetic algorithm.

If each player only had memory of the single game that had happened directly beforehand, then there are four possibilities to consider. These outcomes are represented with "C" for cooperate and "D" for defect, written as the move of Player A followed by the move of Player B.

They are as follows:

case 1: CC

case 2: CD

case 3: DC

case 4: DD

A strategy dictates what move is made in each of these cases. For example, the TIT FOR TAT strategy for Player A is as follows.

If CC (case 1), then C.

If CD (case 2), then D.

If DC (case 3), then C.

If DD (case 4), then D.

By ordering the cases, this strategy can be written as the string CDCD. To use this string, the player recalls the result of the last game (for example, DC) and finds it to be case 3 in the ordered list; that is, for DC, i = 3. The player then selects the letter in the i-th position of the string (here, C).
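
The lookup for this single-game-memory encoding can be sketched directly (the helper names here are illustrative, not Axelrod's):

```python
# The four possible outcomes of the previous game, in the order used
# to index the strategy string.
CASES = ['CC', 'CD', 'DC', 'DD']

def next_move(strategy, last_game):
    """Pick this player's move given the previous game's outcome."""
    i = CASES.index(last_game)   # e.g. 'DC' is case 3 (index 2)
    return strategy[i]

tit_for_tat = 'CDCD'             # Player A's TIT FOR TAT string
```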

Because in this situation there are four possible outcomes with two possible reactions to each, there are 2^4 = 16 total possible strategies. However, in Axelrod's tournaments, each player retained a memory of the outcomes of the three previous games. This means that there were 4^3 = 64 possible three-game histories and 2^64 possible strategies. Axelrod actually used a seventy-letter string, which included three hypothetical previous games that led to the actual first game, so 2^70 possible strategies existed. Obviously, the size of this space makes it virtually impossible to search exhaustively.
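
The counting works out as follows:

```python
# One C/D reaction per possible history determines a strategy.
one_game_histories = 2 ** 2                        # CC, CD, DC, DD
one_game_strategies = 2 ** one_game_histories      # 2^4 = 16
three_game_histories = 4 ** 3                      # 64 possible histories
three_game_strategies = 2 ** three_game_histories  # 2^64
with_premise = 2 ** 70    # 70-letter strings: 64 reactions plus
                          # 6 letters for the 3 hypothetical games
```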

Genetic algorithms are appropriate for this problem because they capitalize on a selected population rather than the entire set of possible solutions. In his first experiment, Axelrod began the GA with an initial population of twenty strategies. He tested their fitness by using an environment of eight human-generated strategies from his other tournaments (not including TIT FOR TAT). He chose these strategies as representative of how the strategy being tested would fare against all the other entries. The fitness of a strategy was established as its average score over all the games played. Axelrod performed forty runs of fifty generations each. Most of the strategies that evolved were similar to TIT FOR TAT in that they reciprocated cooperation and punished defection. The main difference was that the new strategies did not necessarily rely solely on the directly preceding move.

The GA's sometimes found strategies that scored higher than TIT FOR TAT. This fact is particularly impressive if you consider that, with an initial population of twenty strategies run for fifty generations, the GA evolved these strategies after testing only 1000 individuals out of 2^70 possibilities. However, the strategies that the GA found are not necessarily better than TIT FOR TAT. Since the environment in which the strategies were being tested was fixed, the strategies were able to exploit certain weaknesses of the eight fixed strategies. When the environment was changed, the strategies that had performed better than TIT FOR TAT did not necessarily continue to be superior. While these strategies were very specialized, TIT FOR TAT remains successful as a general strategy. From this experiment, Axelrod concluded that genetic algorithms share a strength of the evolutionary process: they can develop a species fit to survive the specific characteristics of an environment.

In his second experiment, Axelrod wanted to see the effects of a changing environment rather than a fixed one. He did so by determining fitness through matching the strategies against each other instead of against the eight fixed strategies. In this way, the environment changed with each generation because the opponents were changing at the same time. In this procedure, Axelrod observed that at first the strategies that the GA found did not tend to be cooperative. The few strategies that did attempt to cooperate did not find reciprocation and consequently died out rather quickly. However, after ten to twenty generations the trend began to change, and the GA evolved strategies that reciprocated cooperation and punished defection, similar to TIT FOR TAT. These strategies fared well against each other because they were not completely defeated by less cooperative strategies, as the initial failures had been, but were still able to benefit from cooperation.

The success of Axelrod's experiments helps illustrate the flexibility and versatility of genetic algorithms.

Traveling Salesperson Problem

GA's can also be applied to the traveling salesperson problem, an optimization problem of scheduling. It is worth studying in detail because it relates to many real-world scheduling problems, which are often very complex. The problem entails discovering the shortest tour through a collection of cities located in a plane, visiting each city exactly once.

The challenge in applying GA's to the traveling salesperson problem is creating a way to encode the sequence of cities that allows for meaningful recombination. One strategy for solving this problem involves the use of the "edge recombination operator." An "edge" can be considered a link between two cities. For example, in the tour [A B C D E F], examples of edges include "AB" and "DE". We can assume that the tour is circular, so after visiting city F, the salesperson returns to city A. It is important to note that the edge "FA" is no different from "AF." A good operator in the GA will create the offspring from the links present in the parent structures.

The edge recombination operator uses an "edge map" that keeps track of the connections that the parents have into and out of each city. Recording this information allows offspring to inherit as much genetic information as possible from the parents. Each city will have between two and four edge associations, since it inherits two edges from each parent, which may overlap. Below you can see a sample edge map for the tours [A B C D E F] and [B D C A E F].

A has edges to: B F C E
B has edges to: A C D F
C has edges to: B D A
D has edges to: C E B
E has edges to: D F A
F has edges to: A E B
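
The edge map can be produced mechanically. The sketch below assumes tours are written as strings of city letters; because a tour is circular, each city collects its predecessor and successor in every parent, with duplicate edges merged:

```python
# Build the edge map for any number of parent tours. Tours are
# circular, and edges are undirected ("FA" is the same edge as "AF").
def edge_map(*tours):
    edges = {}
    for tour in tours:
        n = len(tour)
        for i, city in enumerate(tour):
            links = edges.setdefault(city, set())
            links.add(tour[(i - 1) % n])   # city before (wraps around)
            links.add(tour[(i + 1) % n])   # city after (wraps around)
    return edges

emap = edge_map('ABCDEF', 'BDCAEF')
```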

From the edge map, the operator can put together a tour that uses edges from both parents. The operator must avoid choosing routes that are unable to use these edges. If a city has no connections left from the parents, then a new edge must be created, thereby introducing an unwanted mutation. Using the edge map, this situation can usually be prevented by choosing the city that has the fewest unused edges to be the next city on the route. In this way, the cities most likely to become isolated have the greatest priority for being chosen next. If cities tie on the number of unused edges, the algorithm randomly chooses one of them. After all the cities have been ordered, the connection from the final city back to the initial city does not come from the parent information. This portion of the path allows for mutation in the algorithm.

Following is an example of the recombination algorithm using the data from the example edge map.

1. The path begins by choosing one of the two initial cities from the parent strings. Since A and B both have four edges, B is chosen randomly.

2. B's edges include A, C, D, and F. C, D, and F all have two edges since they began with three and B has been used. A now has three edges so it is not considered. C is chosen randomly.

3. C connects to A and D. D is chosen since it has fewer edges.

4. D's only edge is E, so E comes next.

5. E has edges to A and F, which both have one remaining edge. A is chosen at random.

6. A must now connect to F to finish out the tour.

The sequence evolved from the parents [A B C D E F] and [B D C A E F] turns out to be [B C D E A F].
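
Steps 1 through 6 can be sketched as a tour-construction loop. The function below is an illustrative implementation, not a canonical one; it takes a prebuilt edge map, and since ties are broken randomly, reruns may produce other valid tours besides [B C D E A F].

```python
import random

def build_tour(edges, start_choices):
    # Work on a local copy so the caller's edge map is untouched.
    edges = {c: set(links) for c, links in edges.items()}
    current = random.choice(start_choices)
    tour = [current]
    while len(tour) < len(edges):
        # The current city is used: remove it from every edge list.
        for links in edges.values():
            links.discard(current)
        candidates = edges[current]
        if candidates:
            # Prefer the neighbour with the fewest unused edges, so
            # cities in danger of isolation are placed first.
            fewest = min(len(edges[c]) for c in candidates)
            current = random.choice(
                [c for c in candidates if len(edges[c]) == fewest])
        else:
            # No parental edge left: a new edge (a mutation) is forced.
            current = random.choice(
                [c for c in edges if c not in tour])
        tour.append(current)
    return tour

# Edge map for parents [A B C D E F] and [B D C A E F]; A and B both
# have four edges, so step 1 chooses between them.
EDGE_MAP = {'A': {'B', 'F', 'C', 'E'}, 'B': {'A', 'C', 'D', 'F'},
            'C': {'B', 'D', 'A'},      'D': {'C', 'E', 'B'},
            'E': {'D', 'F', 'A'},      'F': {'A', 'E', 'B'}}
child = build_tour(EDGE_MAP, ['A', 'B'])
```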

The final task of applying a genetic algorithm to the traveling salesperson problem involves encoding the string into binary. In this six-city tour, we can list all the possible edges:

ab ac ad ae af bc bd be bf cd ce cf de df ef

We can include the parents' tours [A B C D E F] and [B D C A E F], and the offspring tour [B C D E A F], in a chart as binary masks which show which connections occur:

               ab ac ad ae af bc bd be bf cd ce cf de df ef
[A B C D E F]   1  0  0  0  1  1  0  0  0  1  0  0  1  0  1
[B D C A E F]   0  1  0  1  0  0  1  0  1  1  0  0  0  0  1
[B C D E A F]   0  0  0  1  1  1  0  0  1  1  0  0  1  0  0

The mask can be simplified by removing the bit positions where neither parent has an edge, and denoting how recombination occurred. The bit donated by each parent has been indicated.

               ab ac ae af bc bd bf cd de ef
[A B C D E F]   1  0  0  1  1  0  0  1  1  1
[B D C A E F]   0  1  1  0  0  1  1  1  0  1
[B C D E A F]   0  0  1  1  1  0  1  1  1  0
donated by:     2  1  2  1  1  1  2  *  1  !
(* = both parents share this edge; ! = matches neither parent)

We can see that a mutation occurred in the final bit position: mutation by omission. Although both parents had the edge "ef," it was not passed on to the offspring.

Once the obstacles of encoding have been surpassed, the traveling salesperson problem becomes essentially the same as any other optimization problem. Because genetic algorithms find optimal answers without performing exhaustive searches, GA's can be applied to traveling salesperson problems with a greater number of cities. In experiments performed with 30-city and 105-city traveling salesperson problems, genetic algorithms consistently found optimal solutions.