While still an uninformed algorithm, the iterative deepening search combines the strengths of breadth-first and depth-first searching to create an algorithm that is often an improvement over either method individually.

An iterative deepening search operates like a depth-first search, except slightly more constrained--there is a maximum depth that defines how many levels deep the algorithm can look for solutions. A node at the maximum depth is treated as terminal, even if it would ordinarily have successor nodes. If a search "fails," the maximum depth is increased by one and the process repeats. The maximum depth is initially set to 0 (i.e., only the initial node).
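The procedure above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: it assumes the search space is given as a successor function (mapping a node to its children) and a goal test, and the names `depth_limited_search` and `iterative_deepening_search` are ours.

```python
def depth_limited_search(node, successors, is_goal, limit):
    """Depth-first search that treats nodes at depth `limit` as terminal."""
    if is_goal(node):
        return node
    if limit == 0:
        return None  # at the maximum depth: treat as terminal, "fail"
    for child in successors(node):
        found = depth_limited_search(child, successors, is_goal, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening_search(root, successors, is_goal, max_limit=50):
    """Run depth-limited searches with limit 0, 1, 2, ... until one succeeds."""
    for limit in range(max_limit + 1):
        found = depth_limited_search(root, successors, is_goal, limit)
        if found is not None:
            return found
    return None  # no solution within max_limit levels

# Example on a tiny hand-built tree:
tree = {'A': ['B', 'C'], 'B': ['D', 'E']}
found = iterative_deepening_search(
    'A', lambda n: tree.get(n, []), lambda n: n == 'E')
print(found)  # 'E'
```

Note that the wrapper keeps no memory between iterations; each deeper search starts from scratch, exactly as described above.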

[Figure: search-tree diagrams accompanying the walkthrough below; legend: visited nodes, current node]

The initial node is checked for a goal state; then, since the search cannot go any deeper, it "fails."
The maximum depth is increased to 1, and the search restarts--the search (in its most basic implementation) does not remember having already tested the initial node. This time, since the initial node is not at the maximum depth, it can be expanded.
Its successors, however, cannot be expanded; they are checked, and if none contains a goal state, they are treated as terminal nodes and deleted. The search "fails," and once again restarts, this time with maximum depth 2.
This continues until a solution is found.
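The walkthrough can be traced programmatically. Below is an illustrative script (the tree shape, node names, and goal are made up for the demonstration) that prints which nodes each depth-limited iteration checks before failing or succeeding:

```python
# A made-up example tree and goal, purely for tracing the walkthrough.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
goal = 'F'

def checked_at_limit(node, limit, order):
    """Depth-limited DFS that records every node it checks; True if goal found."""
    order.append(node)
    if node == goal:
        return True
    if limit == 0:
        return False  # treated as terminal at the maximum depth
    return any(checked_at_limit(child, limit - 1, order)
               for child in tree.get(node, []))

limit = 0
while True:
    order = []
    found = checked_at_limit('A', limit, order)
    print(f"limit {limit}: checked {order}, {'success' if found else 'fail'}")
    if found:
        break
    limit += 1

# limit 0: checked ['A'], fail
# limit 1: checked ['A', 'B', 'C'], fail
# limit 2: checked ['A', 'B', 'D', 'E', 'C', 'F'], success
```

The repeated prefixes in each line make the redundancy discussed below easy to see: every iteration rechecks everything the previous one checked.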

An interesting observation is that the nodes in this search are first checked in the same order they would be checked in a breadth-first search; however, since nodes are deleted as the search progresses, much less memory is used at any given time.
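This observation is easy to verify on a small example. The sketch below (the tree is again made up for the demonstration) collects the order in which iterative deepening first checks each node and compares it with a plain breadth-first traversal:

```python
from collections import deque

# A made-up example tree for the comparison.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}

def bfs_order(root):
    """Standard breadth-first traversal order."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

def iddfs_first_checks(root, max_limit):
    """Order in which iterative deepening checks each node for the FIRST time."""
    first = []
    def dls(node, limit):
        if node not in first:   # record only the first visit
            first.append(node)
        if limit > 0:
            for child in tree.get(node, []):
                dls(child, limit - 1)
    for limit in range(max_limit + 1):
        dls(root, limit)        # restart with a deeper limit each time
    return first

print(bfs_order('A'))              # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
print(iddfs_first_checks('A', 2))  # the same order: each iteration first
                                   # reaches exactly one new level of nodes
```

Each deeper iteration re-treads the old levels and then reaches exactly one new level, which is why the first-visit order coincides with breadth-first order.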

The drawback to the iterative deepening search is clear from the walkthrough--it can be painfully redundant, rechecking every node it has already checked on each new iteration. The algorithm can be enhanced to remember which nodes it has already seen, but this sacrifices most of the memory efficiency that made it worthwhile in the first place, and nodes at the maximum depth of one iteration will still need to be re-accessed and expanded in the following iteration. Still, iterative deepening is preferable to a breadth-first search when memory is at a premium, and preferable to a plain depth-first search when there is a danger of looping or when the shallowest solution is desired.