“A* Search”. Report Online. 9 Sept. 2003. <>.

This website is part of the Virtual Lecture Project on Artificial Intelligence. It briefly describes the algorithm's method, comparing it to other search algorithms. A Shockwave graphical demonstration is presented along with a stepwise breakdown of the algorithm. The approach is evaluated on the basis of its completeness, search cost, and optimality.
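For context, the stepwise approach the site demonstrates can be sketched roughly as follows. The grid world, wall positions, and Manhattan-distance heuristic are our own illustrative choices, not taken from the cited page.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: repeatedly expand the frontier node with the lowest
    f = g + h, where g is the cost so far and h estimates the cost to go.

    neighbors(n) yields (next_node, step_cost); h must never
    overestimate (be admissible) for the result to be optimal.
    """
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy 4x4 grid with two wall cells; Manhattan distance is admissible here.
WALLS = {(1, 1), (1, 2)}

def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in WALLS:
            yield (nx, ny), 1

path, cost = a_star((0, 0), (3, 3), grid_neighbors,
                    lambda p: abs(p[0] - 3) + abs(p[1] - 3))
```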

“Best First Search”. Report Online. 9 Sept. 2003. <>.

This website is also part of the Virtual Lecture Project on Artificial Intelligence and follows the same format as the entry above. A short description of the algorithm is followed by a graphical demonstration. The algorithm is broken down into three main steps and is further evaluated in terms of search cost and optimality.

Elithorn, Alick and Banerji, Ranan.  Artificial and Human Intelligence. Amsterdam, Netherlands: Elsevier Science Publishers B.V., 1984.

In addition to dealing with the present and future of intelligent searching, we also hope to understand more about the history of the subject and the questions it has posed.  Artificial and Human Intelligence is a collection of papers presented at the International NATO Symposium on Artificial and Human Intelligence, held in France in October of 1981.  Among papers covering nearly every facet of Artificial Intelligence studied at the time, one dealing with the use of heuristics and another comparing the benefits of search-based versus knowledge-based approaches are of particular relevance to our study.  Now that the study of intelligent search algorithms has become so focused, such broad questions are rarely given the attention they receive in these papers, making them important resources for comparison with modern-day study.

Fürnkranz, Johannes and Kubat, Miroslav, eds.  Machines That Learn To Play Games. Huntington, New York: Nova Science Publishers, 2001.

As the title implies, Machines That Learn To Play Games deals with the motivation behind and advances in teaching computers how to play classic, mathematically based games.  The book gives an extensive discussion of standbys such as chess and Go, but also deals with more unusual games such as poker.  Of particular interest to our project is the discussion of searches used in chess strategies, including Minimax and the TD(λ) algorithm.  In addition, the book gives a short but relevant discussion of the limitations of search strategies for game playing in general.

“Iterative Deepening”. Hyperdictionary. 9 Sept. 2003. <>.

Presents a brief dictionary definition of the method.

“Iterative Deepening”. Report Online. 9 Sept. 2003. <>.

This website was put together by a faculty member at Lancaster University in the United Kingdom. The technique of iterative deepening is described and compared to similar techniques such as depth-first search and breadth-first search. The page includes several graphs as well as a statistical best-/worst-case analysis of the depth-first, breadth-first, and iterative-deepening algorithms.
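As a rough illustration of the technique the page describes, iterative deepening simply reruns a depth-limited depth-first search with limits 0, 1, 2, …, combining breadth-first search's shallowest-solution guarantee with depth-first search's modest memory use. The toy tree below is our own example, not the page's.

```python
def depth_limited(node, goal, children, limit, path):
    """Depth-first search that refuses to descend more than `limit` edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in children(node):
        found = depth_limited(child, goal, children, limit - 1, path + [child])
        if found:
            return found
    return None

def iterative_deepening(start, goal, children, max_depth=20):
    """Rerun depth-limited DFS with growing limits; the first hit is at
    the shallowest depth, yet memory stays proportional to the depth."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, goal, children, limit, [start])
        if found:
            return found
    return None

# Toy complete binary tree: children of n are 2n+1 and 2n+2 (nodes 0..14).
def tree_children(n):
    return [c for c in (2 * n + 1, 2 * n + 2) if c <= 14]

path = iterative_deepening(0, 6, tree_children)
```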

McNaughton, M., et al. “Memory Efficient A* Heuristics for Multiple Sequence Alignment.” Proceedings of the Eighteenth National Conference on Artificial Intelligence. Jul. 2002. 10 Sept. 2003. <>.

This paper shows that improving heuristics in A* search from pairwise alignments to three-way alignments can result in an algorithm that is both faster and more memory-efficient.  The case study algorithm represents the three-way heuristics as an octree where parts of the octree can be dynamically recomputed and stored in order to save space.

Kenthapadi, Krishnaram. The Search Problem. 26 Jun. 2003. 9 Sept. 2003. <>.  

This, along with the next two entries, is a group of PowerPoint presentations from Stanford’s summer course Introduction to Artificial Intelligence (CS121). The search problem is formulated and two general types of search algorithms (blind search vs. heuristic search) are analyzed. The presentation also contains some general information about artificial intelligence.

Kenthapadi, Krishnaram. Blind Search. 1 Jul. 2003. 9 Sept. 2003. <>.

See the above annotation.

Kenthapadi, Krishnaram. Blind Search (contd), Heuristic Search. 3 Jul. 2003. 9 Sept. 2003. <>.  

See the above annotation.

Plaat, Aske. A Minimax Algorithm faster than NegaScout. 3 Dec. 1997. 9 Sept. 2003. <>.

Provides information on MTD(f), an algorithm making use of AlphaBetaWithMemory that has performed better in games than NegaScout and has replaced it in programs such as MIT’s latest chess program.  The page includes a pseudocode description of the algorithm, background material, and usage tips.
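A minimal sketch of the idea: MTD(f) homes in on the minimax value by repeated zero-window alpha-beta searches. Plaat's algorithm calls AlphaBetaWithMemory (alpha-beta plus a transposition table); we pair the driver with a plain fail-soft alpha-beta instead, which sacrifices the speed benefit but not correctness. The example tree is our own, not from the cited page.

```python
INF = float("inf")

def alphabeta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta on a game tree of nested lists (leaves are ints).
    MTD(f) proper uses AlphaBetaWithMemory; the transposition table is
    omitted here for brevity, so revisited positions are re-searched."""
    if isinstance(node, int):
        return node
    if maximizing:
        best = -INF
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # cutoff: MIN will never allow this line
        return best
    best = INF
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break  # cutoff: MAX has a better alternative elsewhere
    return best

def mtdf(root, first_guess=0):
    """MTD(f): each zero-window search (beta - 1, beta) returns a bound on
    the minimax value; tighten the bounds until they meet."""
    g, lower, upper = first_guess, -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta, True)
        if g < beta:
            upper = g   # search failed low: g is an upper bound
        else:
            lower = g   # search failed high: g is a lower bound
    return g

# Classic textbook tree (MAX to move, three MIN nodes): minimax value is 3.
TREE = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```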

Plaat, Aske. Research Re: Search & Re-search (PhD Thesis). 1996. 9 Sept. 2003. <>.

Presents new theories on SSS*, which is shown to be a variant of the depth-first algorithm Alpha-Beta.  It presents a new algorithm, MTD(f), which outperforms some Alpha-Beta variants, and it examines ways to enhance minimax search algorithms by highlighting a difference between Alpha-Beta’s theoretical and practical minimal-tree constraints.

Schutzer, Daniel.  Artificial Intelligence: An Applications-oriented Approach. New York, New York: Van Nostrand Reinhold Co., 1987.

Though relatively old, this book provides a broad but basic look at the issues central to our topic.  The attributes of breadth-first and depth-first searches are discussed, as well as constraints on searches.  This book takes a more practical approach, including techniques for programming searches and other aspects of artificial intelligence in LISP, but is by no means “modern” by our standards.  This makes it useful in our trace of the history of the subject as a sort of “middle ground” between the almost purely theoretical treatments of earlier works and the more specific and technical views we see today.

Sharples, Hogg, Hutchison, et al. Computers and Thought: A Practical Introduction to Artificial Intelligence. Cambridge, Mass.: MIT Press, 1989.

Computers and Thought bills itself as a “self-contained introduction to artificial intelligence for readers with little or no computing background.”   Thus, the book offers an introduction to intelligent searching firmly grounded in real-world questions, from finding a subway route to completing a jigsaw puzzle.  At the same time, terms which are key to the understanding of the topic are provided and explained.  While our audience is by no means computer illiterate (or, for that matter, living in 1989), this book offers simple methods for explaining and understanding our topic that will help provide a foundation for discussion.

Shimbo, Masashi and Ishida, Toru.  “Controlling the learning process of real-time heuristic search.”  Artificial Intelligence.  10 Sept. 2003.  <>.

This paper presents two methods of coping with the learning problem in real-time heuristic search, where the algorithm fails to balance the short-term goal and the long-term goal.  These two techniques, “weighted real-time search” and “real-time search with upper bounds,” cope with the learning convergence problem more effectively than algorithms such as Learning Real-Time A* (LRTA*).
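The baseline the paper improves on, LRTA*, illustrates the learning process in question: the agent acts greedily on its current heuristic and, after each step, raises the heuristic value of the state it just left toward its one-step lookahead value. The sketch below is standard LRTA* on a toy line graph of our own devising, not the paper's algorithms.

```python
def lrta_star(start, goal, neighbors, h0, max_steps=100):
    """One LRTA* trial.  neighbors(s) yields (next_state, step_cost);
    h0 is the initial heuristic.  Estimates only ever increase, so over
    repeated trials h converges along the routes actually traveled."""
    h = {}                              # learned heuristic overrides
    def H(s):
        return h.get(s, h0(s))
    s = start
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            return path, h
        # One-step lookahead: f(s') = step cost + current estimate of s'.
        best_f, best_n = min((c + H(n), n) for n, c in neighbors(s))
        h[s] = max(H(s), best_f)        # learning step: raise h(s)
        s = best_n
        path.append(s)
    return path, h

# Toy line graph 0 - 1 - 2 - 3 with unit edge costs and a zero heuristic.
def line_neighbors(i):
    return [(j, 1) for j in (i - 1, i + 1) if 0 <= j <= 3]

path, h = lrta_star(0, 3, line_neighbors, lambda s: 0)
```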

Walker, A. N. “Alpha-beta Pruning.” 9 Sept. 2003. <>.

This website is maintained by a faculty member at the University of Nottingham. The site was designed for a game theory course and thus describes alpha-beta pruning in the context of game theory analysis. An evaluation of the algorithm is offered and some possible refinements of the method are suggested.   

Watson, Jean-Paul.  Beck, J. Christopher.  Howe, Adele E.  Whitley, L. Darrell.  “Problem difficulty for tabu search in job-shop scheduling.”  Artificial Intelligence, Volume 143, Issue 2. Feb. 2003.  10 Sept. 2003. <>.

Tabu search algorithms are usually used for combinatorial optimization problems such as the extremely difficult, NP-complete job-shop scheduling problem (JSP), but the reasons for these algorithms’ success in this area, and the conditions under which they succeed, are still largely a mystery.  This paper presents the first endeavor to create a problem-difficulty model of tabu search in the JSP.  After outlining the model’s weaknesses, it then uses the model to explain two observations about problem difficulty in the JSP, relating to the search cost of locating sub-optimal JSP solutions and the variation in difficulty when solving different parallelogram JSPs.