What is Artificial Intelligence?
By Jack Copeland
© Copyright B.J. Copeland, May 2000
The Chinese Room Objection
One influential objection to strong AI, the Chinese room objection, originates with the philosopher John Searle. Searle claims to be able to prove that no computer program--not even a computer program from the far-distant future--could possibly think or understand.
Searle's alleged proof is based on the fact that every operation that a computer is able to carry out can equally well be performed by a human being working with paper and pencil. As Turing put the point, the very function of an electronic computer is to carry out any process that could be carried out by a human being working with paper and pencil in a "disciplined but unintelligent manner". For example, one of a computer's basic operations is to compare the binary numbers in two storage locations and to write 1 in some further storage location if the numbers are the same. A human can perfectly well do this, using pieces of paper as the storage locations. To believe that strong AI is possible is to believe that intelligence can "emerge" from long chains of basic operations each of which is as simple as this one.
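The compare-and-write operation just described can be sketched in a few lines of Python, treating a dictionary as the set of paper "storage locations". The location names and values here are illustrative assumptions, not part of any real machine's design; the point is only that the rule is simple enough to follow with paper and pencil in a disciplined but unintelligent manner.

```python
# Simulated "paper" storage: each key is a storage location holding a binary string.
store = {"loc_a": "1011", "loc_b": "1011", "loc_c": "0"}

def compare_and_write(store, a, b, dest):
    """Write '1' to dest if the binary numbers at a and b are equal, else '0'.

    A human could follow this rule mechanically: look at the two slips of
    paper, check them symbol by symbol, and write down the result.
    """
    store[dest] = "1" if store[a] == store[b] else "0"

compare_and_write(store, "loc_a", "loc_b", "loc_c")
print(store["loc_c"])  # -> 1, since both locations hold 1011
```

Strong AI's claim, on this picture, is that intelligence can emerge from very long chains of operations no more sophisticated than this one.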
Given a list of the instructions making up a computer program, a human being can in principle obey each instruction using paper and pencil. This is known as "handworking" a program. Searle's Chinese room objection is as follows. Imagine that, at some stage in the future, AI researchers in, say, China announce a program that really does think and understand, or so they claim. Imagine further that in a Turing test (conducted in Chinese) the program cannot be distinguished from human beings. Searle maintains that, no matter how good the performance of the program, and no matter what algorithms and data structures are employed in the program, it cannot in fact think and understand. This can be proved, he says, by considering an imaginary human being, who speaks no Chinese, handworking the program in a closed room. (Searle extends the argument to connectionist AI by considering not a room containing a single person but a gymnasium containing a large group of people, each one of whom simulates a single artificial neuron.) The interrogator's questions, expressed in the form of Chinese ideograms, enter the room through an input slot. The human in the room--Clerk, let's say--follows the instructions in the program and carries out exactly the same series of computations that an electronic computer running the program would carry out. These computations eventually produce strings of binary symbols that the program instructs Clerk to correlate, via a table, with patterns of squiggles and squoggles (actually Chinese ideograms). Clerk finally pushes copies of the ideograms through an output slot. As far as the waiting interrogator is concerned, the ideograms form an intelligent response to the question that was posed. But as far as Clerk is concerned, the output is just squiggles and squoggles--hard won, but completely meaningless. Clerk does not even know that the inputs and outputs are linguistic expressions.
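The room's mechanics can be sketched as follows. The table entries, the symbols, and the stand-in computation are all invented for illustration; they are not Searle's actual example, and the real program's chain of computations would of course be enormously longer. What the sketch shows is that Clerk's procedure nowhere requires knowing what any symbol means.

```python
# A toy correlation table of the kind the program instructs Clerk to use:
# binary strings (the results of Clerk's computations) paired with ideograms.
# The entries are purely illustrative.
CORRELATION_TABLE = {
    "0110": "\u4f60\u597d",  # To Clerk these are just squiggles and squoggles;
    "1001": "\u8c22\u8c22",  # their meanings play no role in the procedure.
}

def clerk(input_ideograms, compute, table):
    """Follow the program's instructions mechanically.

    `compute` stands in for the long chain of paper-and-pencil operations
    the program dictates. Clerk blindly computes a binary string, then
    blindly looks it up in the table, never knowing what anything means.
    """
    binary = compute(input_ideograms)  # step 1: disciplined, unintelligent computation
    return table[binary]               # step 2: blind table lookup

# A stand-in computation (an assumption for this sketch): every input yields "0110".
reply = clerk("\u95ee\u5019", lambda symbols: "0110", CORRELATION_TABLE)
print(reply)  # Clerk pushes these ideograms out the slot without understanding them
```

The interrogator outside the slot sees an apt Chinese reply; Clerk, inside, has only manipulated uninterpreted marks.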
Yet Clerk has done everything that a computer running the program would do. It surely follows, says Searle, that since Clerk does not understand the input and the output after working through the program, then nor does an electronic computer.
Few accept Searle's objection, but there is little agreement as to exactly what is wrong with it. My own response to Searle, known as the Logical Reply to the Chinese room objection, is this. The fact that Clerk says "No" when asked whether he understands the Chinese input and output by no means shows that the wider system of which Clerk is a part does not understand Chinese. The wider system consists of Clerk, the program, quantities of data (such as the table correlating binary code with ideograms), the input and output slots, the paper memory store, and so forth. Clerk is just a cog in a wider machine. Searle's claim is that the statement "The system as a whole does not understand" follows logically from the statement "Clerk does not understand". The logical reply holds that this claim is fallacious, for just the same reason that it would be fallacious to claim that the statement "The organisation of which Clerk is a part has no taxable assets in Japan" follows logically from the statement "Clerk has no taxable assets in Japan". If the logical reply is correct then Searle's objection to strong AI proves nothing.