Parallel Search Algorithms

In their article "Randomized Parallel Algorithms for Backtrack Search and Branch-and-Bound Computation," Richard Karp and Yanjun Zhang show that parallel search can be accelerated by using a randomized approach to disperse the workload among currently idle processors. Distributing the workload with a randomized algorithm makes the whole process simpler and more efficient because it eliminates the need for complex global data structures and communication protocols.

Karp and Zhang's study focuses in particular on the benefits of randomization in backtracking search programs. A backtracking search algorithm searches a tree (or a section of a tree) by expanding a node, determining whether it could lead to a solution, and then either recursively expanding the node's children or backtracking to a path it has not yet explored that could still yield an answer. To run the search in parallel, the algorithm must distribute the computation across the available processors: when one processor expands a node, it donates half of the resulting children to an idle processor. It is the method of donation that separates the deterministic algorithm from the randomized one.
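The per-processor search loop can be pictured roughly as follows. This is a minimal sketch, not Karp and Zhang's exact formulation: the helpers promising(), is_solution(), expand(), and donate_to_idle() are hypothetical placeholders standing in for the problem-specific tests and for the interprocessor donation mechanism.

    # A minimal sketch of one processor's backtrack-search loop, assuming the
    # unexplored nodes are kept on a local stack. promising(), is_solution(),
    # expand(), and donate_to_idle() are hypothetical placeholders.
    def backtrack_worker(local_stack, promising, is_solution, expand, donate_to_idle):
        while local_stack:
            node = local_stack.pop()
            if not promising(node):              # this branch cannot lead to a solution
                continue                         # backtrack to the next unexplored path
            if is_solution(node):
                return node
            children = expand(node)              # expand the node into its children
            half = len(children) // 2
            donate_to_idle(children[:half])      # give half of the children to an idle processor
            local_stack.extend(children[half:])  # keep the rest for this processor
        return None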

The deterministic algorithm follows a fixed set of rules for choosing the recipient processor, and these rules require a complex level of communication among the parallel processors. The randomized version is much simpler: when a processor needs to donate part of its load, it collects requests from the idle processors and chooses one at random. As it turns out, this method is significantly faster and less complicated. For more in-depth information on randomized parallel search algorithms, see the article by Karp and Zhang.
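The randomized donation step itself needs nothing beyond a uniform random choice. The sketch below assumes a hypothetical list of pending requests from idle processors, each exposing a receive() method; the names are illustrative rather than taken from Karp and Zhang's paper.

    import random

    # A minimal sketch of the randomized donation step: a busy processor answers
    # one of the idle processors currently requesting work, chosen uniformly at
    # random. pending_requests and receive() are hypothetical placeholders.
    def donate(pending_requests, children_to_donate):
        if not pending_requests or not children_to_donate:
            return
        recipient = random.choice(pending_requests)  # no global rules, just a random pick
        pending_requests.remove(recipient)
        recipient.receive(children_to_donate)        # hand the donated subtrees over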
