AlanTuring.net Reference Articles on Turing

What is Artificial Intelligence?

By Jack Copeland

© Copyright B.J. Copeland, May 2000


Is Strong AI Possible?

The ongoing success of applied Artificial Intelligence and of cognitive simulation seems assured. However, strong AI, which aims to duplicate human intellectual abilities, remains controversial. The reputation of this area of research has been damaged over the years by exaggerated claims of success that have appeared both in the popular media and in the professional journals. At the present time, even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone a system rivalling a human being.

The difficulty of "scaling up" AI's so far relatively modest achievements cannot be overstated. Five decades of research in symbolic AI have failed to produce any firm evidence that a symbol-system can manifest human levels of general intelligence. Critics of nouvelle AI regard as mystical the view that high-level behaviours involving language-understanding, planning, and reasoning will somehow "emerge" from the interaction of basic behaviours like obstacle avoidance, gaze control, and object manipulation. Connectionists have been unable to construct working models of the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons, whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic the worm's simple nervous system. The "neurons" of connectionist theory are gross oversimplifications of the real thing.

However, this lack of substantial progress may simply be testimony to the difficulty of strong AI, not to its impossibility.

Let me turn to the very idea of strong artificial intelligence. Can a computer possibly be intelligent, think, and understand? Noam Chomsky suggests that debating this question is pointless, for it is a question of decision, not fact: decision as to whether to adopt a certain extension of common usage. There is, Chomsky claims, no factual question as to whether any such decision is right or wrong--just as there is no question as to whether our decision to say that aeroplanes fly is right, or our decision not to say that ships swim is wrong. However, Chomsky is oversimplifying matters. Of course we could, if we wished, simply decide to describe bulldozers, for instance, as things that fly. But obviously it would be misleading to do so, since bulldozers are not appropriately similar to the other things that we describe as flying. The important questions are: could it ever be appropriate to say that computers are intelligent, think, and understand, and if so, what conditions must a computer satisfy in order to be so described?

Some authors offer the Turing test as a definition of intelligence: a computer is intelligent if and only if the test fails to distinguish it from a human being. However, Turing himself in fact pointed out that his test cannot provide a definition of intelligence. It is possible, he said, that a computer which ought to be described as intelligent might nevertheless fail the test because it is not capable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the moon necessarily be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot function as a definition of intelligence.

It is even questionable whether a computer's passing the test would show that the computer is intelligent. In 1956 Claude Shannon and John McCarthy raised the objection to the test that it is possible in principle to design a program containing a complete set of "canned" responses to all the questions that an interrogator could possibly ask during the fixed time-span of the test. Like Parry, this machine would produce answers to the interrogator's questions by looking up appropriate responses in a giant table. This objection--which has in recent years been revived by Ned Block, Stephen White, and myself--seems to show that in principle a system with no intelligence at all could pass the Turing test.
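To see the force of the objection, consider a minimal sketch of such a table-lookup machine. The Python below is illustrative only: the handful of canned question-and-answer pairs is hypothetical, standing in for the astronomically large but finite table that Shannon and McCarthy's objection envisages.

    # A sketch of the Shannon-McCarthy "canned response" machine.
    # The table maps each question an interrogator might ask to a
    # pre-stored, human-sounding reply. A real table would need an
    # entry for every question askable within the test's fixed
    # time-span -- astronomically many, but finitely many.
    CANNED_RESPONSES = {
        "how are you today?": "Not bad, thanks. A little tired, if anything.",
        "what is 7 times 8?": "56, unless I've slipped up.",
        "do you like poetry?": "Some of it. I never got on with Milton.",
    }

    def respond(question: str) -> str:
        """Answer by pure table lookup; no reasoning of any kind occurs."""
        key = question.strip().lower()
        return CANNED_RESPONSES.get(
            key,
            "Hmm, let me think about that one for a moment.",  # stalling default
        )

    print(respond("What is 7 times 8?"))  # prints the canned "56, ..." reply

Everything intelligent-seeming in the exchange resides in the construction of the table, not in the machine's operation at test time; the lookup itself is as mindless as consulting a telephone directory, which is precisely why the objection bites.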

In fact AI has no real definition of intelligence to offer, not even in the sub-human case. Rats are intelligent, but what exactly must a research team achieve in order for it to be the case that the team has created an artefact as intelligent as a rat?

In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no way of telling whether a research program that aims at producing intelligent artefacts has succeeded or failed. One result of AI's failure to produce a satisfactory criterion of when a system counts as intelligent is that whenever AI achieves one of its goals--for example, a program that can summarise newspaper articles, or beat the world chess champion--critics are able to say "That's not intelligence!" (even critics who have previously maintained that no computer could possibly do the thing in question).

Marvin Minsky's response to the problem of defining intelligence is to maintain that "intelligence" is simply our name for whichever problem-solving mental processes we do not yet understand. He likens intelligence to the concept "unexplored regions of Africa": it disappears as soon as we discover it. Earlier Turing made a similar point, saying "One might be tempted to define thinking as consisting of those mental processes that we don't understand". However, the important problem remains of giving a clear criterion of what would count as success in strong artificial intelligence research.
