Reference Articles on Turing

What is Artificial Intelligence?

By Jack Copeland

© Copyright B.J. Copeland, May 2000


Expert Systems

An expert system is a computer program dedicated to solving problems and giving advice within a specialised area of knowledge. A good system can match the performance of a human specialist. The field of expert systems is the most advanced part of AI, and expert systems are in wide commercial use. Expert systems are examples of micro-world programs: their "worlds"--for example, a model of a ship's hold and the containers that are to be stowed in it--are self-contained and relatively uncomplicated. Uses of expert systems include medical diagnosis, chemical analysis, credit authorisation, financial management, corporate planning, document routing in financial institutions, oil and mineral prospecting, genetic engineering, automobile design and manufacture, camera lens design, computer installation design, airline scheduling, cargo placement, and the provision of an automatic customer help service for home computer owners.

The basic components of an expert system are a "knowledge base" or KB and an "inference engine". The information in the KB is obtained by interviewing people who are expert in the area in question. The interviewer, or "knowledge engineer", organises the information elicited from the experts into a collection of rules, typically of "if-then" structure. Rules of this type are called "production rules". The inference engine enables the expert system to draw deductions from the rules in the KB. For example, if the KB contains production rules "if x then y" and "if y then z", the inference engine is able to deduce "if x then z". The expert system might then query its user "is x true in the situation that we are considering?" (e.g. "does the patient have a rash?") and if the answer is affirmative, the system will proceed to infer z.
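The two components described above can be sketched in a few lines of code. This is a minimal illustration, not any real expert system: the rule names and facts are invented, and real inference engines use far richer rule languages.

```python
# Toy knowledge base of "if-then" production rules, each written as a
# (condition, conclusion) pair. The facts and rule names are illustrative.
RULES = [
    ("patient_has_rash", "possible_allergy"),      # "if x then y"
    ("possible_allergy", "check_for_penicillin"),  # "if y then z"
]

def infer(known_facts, rules):
    """A minimal inference engine: repeatedly apply the production rules
    to the known facts until no new conclusions follow (forward chaining)."""
    facts = set(known_facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# If the user answers "yes" to "does the patient have a rash?", the engine
# chains through both rules and reaches the final conclusion.
print(infer({"patient_has_rash"}, RULES))
```

Given the affirmative answer, the engine derives both the intermediate conclusion and the final one, mirroring the "if x then y", "if y then z", therefore "if x then z" chain in the text.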

In 1965 the AI researcher Edward Feigenbaum and the geneticist Joshua Lederberg, both of Stanford University, began work on Heuristic Dendral, the high-performance program that was the model for much of the ensuing work in the area of expert systems (the name subsequently became DENDRAL). The program's task was chemical analysis. The substance to be analysed might, for example, be a complicated compound of carbon, hydrogen and nitrogen. Starting from spectrographic data obtained from the substance, DENDRAL would hypothesise the substance's molecular structure. DENDRAL's performance rivalled that of human chemists expert at this task, and the program was used in industry and in universities.

Work on MYCIN, an expert system for treating blood infections, began at Stanford in 1972. MYCIN would attempt to identify the organism responsible for an infection from information concerning the patient's symptoms and test results. The program would request further information if necessary, asking questions such as "has the patient recently suffered burns?". Sometimes MYCIN would suggest additional laboratory tests. When the program had arrived at a diagnosis it would recommend a course of medication. If requested, MYCIN would explain the reasoning leading to the diagnosis and recommendation.

Examples of production rules from MYCIN's knowledge base are:

(1) If the site of the culture is blood, and the stain of the organism is gramneg, and the morphology of the organism is rod, and the patient has been seriously burned, then there is evidence (.4) that the identity of the organism is pseudomonas. (The decimal number is a certainty factor, indicating the extent to which the evidence supports the conclusion.)

(2) If the identity of the organism is pseudomonas then therapy should be selected from among the following drugs: Colistin (.98), Polymyxin (.96), Gentamicin (.96), Carbenicillin (.65), Sulfisoxazole (.64). (The decimal numbers represent the statistical probability of the drug arresting the growth of pseudomonas.)

The program would make a final choice of drug from this list after quizzing the user concerning contra-indications such as allergies. Using around 500 such rules MYCIN achieved a high level of performance. The program operated at the same level of competence as human specialists in blood infections, and rather better than general practitioners.
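The two rules quoted above can be sketched as code. The certainty factor and drug figures are those given in the text; the data structures and function names are my own invention, and real MYCIN combined certainty factors in more sophisticated ways.

```python
def rule_pseudomonas(findings):
    """Rule (1): if all four conditions hold, there is evidence with
    certainty factor 0.4 that the organism is pseudomonas."""
    conditions = ("site_blood", "stain_gramneg", "morphology_rod", "burned")
    return 0.4 if all(findings.get(c) for c in conditions) else 0.0

# Rule (2): candidate drugs with the probability of arresting pseudomonas,
# listed in descending order of effectiveness.
THERAPIES = [("colistin", 0.98), ("polymyxin", 0.96), ("gentamicin", 0.96),
             ("carbenicillin", 0.65), ("sulfisoxazole", 0.64)]

def select_therapy(certainty, contraindicated):
    """Pick the most effective drug that is not ruled out by
    contra-indications (e.g. allergies) elicited from the user."""
    if certainty <= 0.0:
        return None
    for drug, _probability in THERAPIES:
        if drug not in contraindicated:
            return drug
    return None

findings = {"site_blood": True, "stain_gramneg": True,
            "morphology_rod": True, "burned": True}
cf = rule_pseudomonas(findings)
print(cf, select_therapy(cf, contraindicated={"colistin"}))  # 0.4 polymyxin
```

With all four conditions present and the patient allergic to colistin, the sketch settles on polymyxin, the next drug on the list.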

Janice Aikins' medical expert system Centaur (1983) was designed to determine the presence and severity of lung disease in a patient by interpreting measurements from pulmonary function tests. The following is actual output from the expert system concerning a patient at Pacific Medical Center in San Francisco.

The findings about the diagnosis of obstructive airways disease are as follows: Elevated lung volumes indicate overinflation. The RV/TLC ratio is increased, suggesting a severe degree of air trapping. Low mid-expiratory flow is consistent with severe airway obstruction. Obstruction is indicated by curvature of the flow-volume loop which is of a severe degree. Conclusions: Smoking probably exacerbates the severity of the patient's airway obstruction. Discontinuation of smoking should help relieve the symptoms. Good response to bronchodilators is consistent with an asthmatic condition, and their continued use is indicated. Pulmonary function diagnosis: Severe obstructive airways disease, asthmatic type. Consultation finished.

An important feature of expert systems is that they are able to work cooperatively with their human users, enabling a degree of human-computer symbiosis. AI researcher Douglas Lenat says of his expert system Eurisko, which became a champion player in the star-wars game Traveller, that the "final crediting of the win should be about 60/40% Lenat/Eurisko, though the significant point here is that neither Lenat nor Eurisko could have won alone". Eurisko and Lenat cooperatively designed a fleet of warships which exploited the rules of the Traveller game in unconventional ways, and which was markedly superior to the fleets designed by human participants in the game.

Fuzzy logic

Some expert systems use fuzzy logic. In standard, non-fuzzy, logic there are only two "truth values", true and false. This is a somewhat unnatural restriction, since we normally think of statements as being nearly true, partly false, truer than certain other statements, and so on. According to standard logic, however, there are no such in-between values--no "degrees of truth"--and any statement is either completely true or completely false. In the 1920s and 1930s the Polish philosopher Jan Lukasiewicz introduced a form of logic that employs not just two values but many. Lotfi Zadeh, of the University of California at Berkeley, subsequently proposed that the many values of Lukasiewicz's logic be regarded as degrees of truth, and he coined the expression "fuzzy logic" for the result. (Zadeh published the first of many papers on the subject in 1965.) Fuzzy logic is particularly useful when it is necessary to deal with vague expressions, such as "bald", "heavy", "high", "low", "hot", "cold" and so on. Vague expressions are difficult to deal with in standard logic because statements involving them--"Fred is bald", say--may be neither completely true nor completely false. Non-baldness shades gradually into baldness, with no sharp dividing line at which the statement "Fred is bald" could change from being completely false to completely true. Often the rules that knowledge engineers elicit from human experts contain vague expressions, so it is useful if an expert system's inference engine employs fuzzy logic. An example of such a rule is: "If the pressure is high but not too high, then reduce the fuel flow a little". (Fuzzy logic is used elsewhere in AI, for example in robotics and in neuron-like computing. There are literally thousands of commercial applications of fuzzy logic, many developed in Japan, ranging from an automatic subway train controller to control systems for washing machines and cameras.)
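The pressure rule quoted above can be sketched with degrees of truth. The piecewise-linear membership functions and their thresholds are invented for illustration; the standard fuzzy conventions used are: AND is the minimum of two degrees, and NOT subtracts a degree from 1.

```python
def high_pressure(p):
    """Degree (0..1) to which pressure p counts as 'high':
    0 below 80, ramping linearly to 1 at 100. Thresholds are invented."""
    return min(1.0, max(0.0, (p - 80) / 20))

def too_high_pressure(p):
    """Degree to which p counts as 'too high': ramps from 100 to 120."""
    return min(1.0, max(0.0, (p - 100) / 20))

def rule_reduce_fuel(p):
    """'If the pressure is high but not too high, reduce the fuel flow
    a little': degree to which the rule fires, using min for fuzzy AND
    and (1 - x) for fuzzy NOT."""
    return min(high_pressure(p), 1.0 - too_high_pressure(p))

for p in (70, 90, 105, 130):
    print(p, rule_reduce_fuel(p))
# 70 -> 0.0 (not high), 90 -> 0.5 (fairly high),
# 105 -> 0.75 (high, slightly too high), 130 -> 0.0 (far too high)
```

The rule's degree of truth would then scale how much the fuel flow is reduced, rather than triggering an all-or-nothing action.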

Limitations of expert systems

Expert systems have no "common sense". They have no understanding of what they are for, nor of what the limits of their applicability are, nor of how their recommendations fit into a larger context. If MYCIN were told that a patient who has received a gunshot wound is bleeding to death, the program would attempt to diagnose a bacterial cause for the patient's symptoms. Expert systems can make absurd errors, such as prescribing an obviously incorrect dosage of a drug for a patient whose weight and age are accidentally swapped by the clerk. One project aimed at improving the technology further is described in the next section.

The knowledge base of an expert system is small and therefore manageable--a few thousand rules at most. Programmers are able to employ simple methods of searching and updating the KB which would not work if the KB were large. Furthermore, micro-world programming involves extensive use of what are called "domain-specific tricks"--dodges and shortcuts that work only because of the circumscribed nature of the program's "world". More general simplifications are also possible. One example concerns the representation of time. Some expert systems get by without acknowledging time at all. In their micro-worlds everything happens in an eternal present. If reference to time is unavoidable, the micro-world programmer includes only such aspects of temporal structure as are essential to the task--for example, that if a is before b and b is before c then a is before c. This rule enables the expert system to merge suitable pairs of before-statements and so extract their implication (e.g. that the patient's rash occurred before the application of penicillin). The system may have no other information at all concerning the relationship "before"--not even that it orders events in time rather than space.
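The transitivity rule described above can be sketched as a closure computation over before-statements. This is a minimal illustration with invented event names; as the text notes, such a system knows nothing else about "before".

```python
def before_closure(pairs):
    """Given before-statements as (a, b) pairs meaning 'a is before b',
    repeatedly merge pairs by the one rule available -- if a is before b
    and b is before c, then a is before c -- until nothing new follows."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# From "the rash occurred before the penicillin" and "the penicillin was
# given before the fever", the system extracts the implied statement.
facts = {("rash", "penicillin"), ("penicillin", "fever")}
print(before_closure(facts))
```

The derived pair ("rash", "fever") is the only temporal conclusion available; nothing in the representation says that "before" orders events in time rather than space.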

The problem of how to design a computer program that performs at human levels of competence in the full complexity of the real world remains open.
