For the past half century, few areas of Computer Science have received as much attention from the public as the study of Artificial Intelligence. Thinkers ranging from Alan Turing to Steven Spielberg have shared their visions of how computers may someday match or exceed the abilities of humans, while an IBM computer shocked the world by defeating the world's greatest human chess player. While Deep Blue's human-level ability extends only to chess, and humanoid robots like those in books and movies are probably still hundreds of years away, society is nonetheless captivated by the concept that machines can think.

While "Artificial Intelligence" is an umbrella term for many concepts in Computer Science, one area of particular interest is the search strategies employed by computers. Searches of various kinds are the backbone behind programs ranging from flight plan designers to many computer games. Our hope is that by understanding the rationale behind some commonly used search algorithms (especially so-called "intelligent" algorithms) we can begin to learn some ways in which computers are taught to think.


Unintelligent or blind algorithm: An algorithm that does not use a heuristic to aid its search--that is, one that cannot detect how "close" it is to a solution.

Heuristic: An estimate of how close a given state is to the goal state. Examples include straight-line distance on maps, or assigning point values to pieces in chess. The better a heuristic's estimates, the better the search that uses it performs. Every heuristic must evaluate to 0 at a goal state.
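As a concrete sketch of the straight-line-distance example above, the heuristic might be computed like this (the coordinates and state names here are invented purely for illustration):

```python
import math

# Hypothetical map coordinates for a few states; "Goal" is the goal state.
coords = {
    "A": (0.0, 0.0),
    "B": (3.0, 4.0),
    "Goal": (6.0, 8.0),
}

def straight_line_distance(state, goal="Goal"):
    """Heuristic: Euclidean distance from a state to the goal state."""
    (x1, y1), (x2, y2) = coords[state], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)
```

Note that the heuristic evaluates to 0 when applied to the goal state itself, as the definition requires.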

Initial State: the state that the agent starts in.

Path: A sequence of states connected by a sequence of actions.

Predecessor: Any node that is higher up on the tree than the one that is being considered.

Successor: Any node that is a child of a node, a child of a child of that node, and so on.

Closed List: A structure that holds every expanded node.

Open List: A structure that holds a list of unexpanded nodes.
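To show how the open and closed lists work together, here is a minimal sketch of a blind (breadth-first) graph search; the graph, start state, and goal below are invented for illustration:

```python
from collections import deque

# A tiny hand-built graph: each key maps to its successors.
graph = {
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}

def breadth_first_search(start, goal):
    open_list = deque([[start]])   # paths whose last node is not yet expanded
    closed_list = set()            # every node that has been expanded
    while open_list:
        path = open_list.popleft()
        node = path[-1]
        if node == goal:           # goal test
            return path            # a solution: path from initial to goal state
        if node in closed_list:
            continue
        closed_list.add(node)      # expand the node
        for successor in graph[node]:
            open_list.append(path + [successor])
    return None                    # no solution exists
```

The closed list prevents the search from expanding the same node twice, which matters in graphs where several paths reach the same state.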

Pruning: Ignoring portions of a tree that make no difference to the final choice.

Minimax Value: The utility value of being in a certain state assuming that both players play optimally.

Goal Test: A test that determines whether a given state is a goal state.

Path Cost: A function that assigns a numeric value to each path.

Solution: A path from the initial to the goal state.

Optimal Solution: One that has the lowest path cost among the solutions.

Goal state: The desired resulting condition in a given problem, and what search algorithms are looking for. Some problems may have a unique goal state (arrive at a given location, for example) whereas others can have multiple goal states, all satisfying a set of conditions (checkmate in chess, for instance).

Terminal state: A condition in which the problem "ends", either due to the rules of the problem or because of a lack of options from that point. These states are the source of minimax evaluation; in a depth-first search, if terminal states are not goal states, they are deleted.

Parent: The node from which the node in question was expanded. Generally, a node has only one parent (though identical nodes can often be produced by moves from two different states).

Agent: Something that perceives and acts in an environment.

Agent Function: A function that specifies the action taken by an agent in response to an environment.

Utility Function: A function that gives a numeric value for terminal states.

Transposition Table: A hash table of previously seen positions.

Locally Finite: A graph in which no node has an infinite branching factor.

Admissible Heuristic: A heuristic that never overestimates the true cost of reaching the goal.

Optimistic Heuristic: A heuristic that estimates the cost of solving a problem to be less than it actually is.

Monotonicity: A property of logical systems that states that the set of entailed sentences can only increase as information is added to the knowledge base.

Pathmax: An equation that compares the estimated path cost of a node with the estimated path cost of its parent node.
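One common form of the pathmax equation, sketched here under the usual notation (g is the path cost so far, h the heuristic, f = g + h, and n' a successor of n), forces f to be nondecreasing along a path:

```latex
f(n') = \max\bigl(f(n),\; g(n') + h(n')\bigr)
```

If a child's own estimate g(n') + h(n') dips below its parent's f(n), the parent's value is used instead.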

Branching Factor: The maximum number of successors of any node.