In 1950, Alan Turing made yet another contribution to the vast array of philosophical issues raised in computer science with his description of the “imitation game.” About a decade and a half earlier, he had claimed that what is now known as the Turing machine was capable of computing anything obtainable by mechanical means (therefore including calculations done by humans); now he was arguing that a machine could in fact be capable of intelligence. In his paper “Computing Machinery and Intelligence,” Turing set up a situation in which an “interrogator” was to interact with typewritten output, and if he or she was unable to distinguish human-generated from computer-generated output, then the computer would be said to possess intelligence.
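
As a rough illustration only (Turing specified no such program), the test can be sketched as a blind trial: a judge exchanges text with two hidden respondents, one human and one machine, and then guesses which is which. The judge, human_respond, and machine_respond callables below are hypothetical stand-ins, not anything drawn from Turing’s paper.

```python
import random

def run_imitation_game(judge, human_respond, machine_respond, questions):
    """Blind trial: the judge sees two anonymous transcripts and must
    guess which of the hidden respondents is the machine."""
    # Randomly hide the two respondents behind the labels "A" and "B".
    respondents = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        respondents = {"A": machine_respond, "B": human_respond}

    # Each respondent answers the same list of typewritten questions.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in respondents.items()
    }

    guess = judge(transcripts)  # judge returns "A" or "B" as its pick for the machine
    actual = next(label for label, r in respondents.items() if r is machine_respond)
    return guess == actual      # True means the machine was identified
```

On Turing’s criterion, a machine whose answers the judge cannot reliably tell apart from the human’s would be credited with intelligence.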

Before going into the great debate spurred by Turing’s “imitation game,” it is worth discussing the belief systems his claim rests on. Turing’s criterion for intelligence relies on behaviorist theory in which, simply put, intelligence is attributed to anything that exhibits the same behavior as that of intelligent humans. Behaviorism was never fully accepted in the scientific world, but Turing’s test gained further support in the 1960s when Hilary Putnam drew on similar ideas in developing the theory of functionalism. Functionalism deemphasizes the actual structure and inner workings of particular processes and focuses instead on their functions, attributing intelligence whenever those functions match up. It was these principles for determining intelligence that John R. Searle took issue with and publicly denounced in his immensely well-known “Chinese Room Argument.”

In 1980, Searle exposed the world to the Chinese room in a paper entitled “Minds, Brains, and Programs,” in which he tried to discredit Turing’s “imitation game” and the method believed capable of recreating human intelligence. In the paper, Searle first distinguished between what he saw as the two current philosophies of artificial intelligence, strong and weak AI. Weak AI, by Searle’s definition, centered on simulation of the mind: machines such as the one described in the “imitation game” are tools for furthering our understanding of the mind. Those in the camp of strong AI, on the other hand, like Turing and his supporters, believed that they could actually duplicate human intelligence and create machines capable of understanding exactly as we do. Needless to say, Searle had absolutely no sympathy for this type of artificial intelligence.

Searle disagreed (and continues to disagree) with the Turing test on two levels, and used a hypothetical “Chinese room” to make his point. In this room was a person who spoke no Chinese but who had been given a batch of Chinese writing (a script), a second batch of Chinese writing together with English instructions for matching it to the first (a story), and a third batch in Chinese with English instructions for correlating it with the other two (a set of questions). With the third batch he was also given Chinese symbols to hand back in reply (that is, instructions on how to answer the questions). Searle maintained that once this person became good at moving from batch to batch, responding with symbols that were meaningless to him but were in fact Chinese characters, to an outsider it would appear as though he were fluent in Chinese. More importantly, Searle argued that if this person were a computer, making the instructions a program, then by the Turing test the person would be said to understand Chinese, which he clearly does not.
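
Searle’s point is that the room’s behavior can be produced by pure rule-following. The sketch below, which assumes a hypothetical rule book that simply maps question strings to canned replies (the phrases are placeholders, not Searle’s), makes that mechanism explicit: the program matches and copies symbols to which it attaches no meaning.

```python
# Hypothetical rule book for the room: each entry pairs a string of Chinese
# characters (a question about the story) with the string to hand back.
RULE_BOOK = {
    "他吃了什么？": "他吃了炒饭。",   # "What did he eat?" -> "He ate fried rice."
    "他付钱了吗？": "付了。",         # "Did he pay?" -> "He did."
}

def chinese_room(question: str) -> str:
    """Follow the instructions: look the symbols up and copy out the reply.
    No step involves knowing what any character means."""
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "please say that again"

print(chinese_room("他吃了什么？"))  # looks fluent from outside the room
```

From outside, the replies look competent, which is exactly why Searle holds that passing the Turing test by itself shows nothing about understanding.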

Searle used this “thought experiment” to demonstrate two problems he had with the Turing test. His first claim was that a computer cannot be credited with understanding solely as a result of running a program, because the computer under these circumstances possesses no intentionality. By intentionality, Searle meant that such a computer, while formally manipulating symbols, attaches no meaning to them; in more technical terms, the symbols have syntax (rules for manipulation) but no semantics (truth or meaning). Humans, on the other hand, do have meanings for symbols and words, which is what constitutes our understanding of things. In his example, given a story involving a restaurant, we ourselves have a meaning for the word “restaurant”: attached to it is an actual place. For the computer, “restaurant” would just be another symbol with rules, with no real meaning behind the word. Searle’s second objection came from a physiological perspective: the causal processes that yield intelligent thought are a direct result of certain biological processes in the brain. He claimed that the causal processes that allow for understanding are not included in a computer program, and therefore it cannot be said that the same type of understanding is taking place. These two objections reject both the behaviorist and functionalist belief systems and instead champion dualism and identity theory, dualism here being the belief that intelligent thought requires a certain consciousness of things (syntax AND semantics), and identity theory being the claim that certain biological processes are necessary for true understanding.

Searle’s Chinese Room Argument set the stage for a debate that continues to generate responses even today. For the most part, Searle’s view has been discounted in the AI world, and the creation of intelligent machines is still being pursued. Most rebuttals to Searle’s argument stem from the fact that no one really knows how our understanding of things works. In her reply “Escaping from the Chinese Room,” Margaret Boden states that no one knows whether understanding is attainable only through specific biological processes of the brain. She also maintains that the symbols in the Chinese room do possess semantics of their own in terms of the programming language being used, a point with which John McCarthy (Professor Emeritus of Computer Science at Stanford University) agrees. For all we know, the Turing test does in fact take into consideration everything intelligence requires; no one has yet refuted that, given how abstract and therefore difficult to study the characteristics of the mind are. This ties into the widely held opinion that neither Searle nor any other expert can currently state that understanding is a biological phenomenon rooted in the processes of the brain. Despite these rebuttals of Searle’s argument, however, problems have been discovered with the Turing test itself, such as unintelligent machines being capable of passing it and intelligent machines being unable to do so. What is important, though, is that in spite of the interesting points Searle makes with his famous Chinese Room Argument, it has been largely agreed that machines are in fact capable of intelligence through appropriate programming, and that they will someday most likely achieve an understanding comparable to ours.