next up previous
Next: Production systems and cognitive modelling
Up: Conventional AI: Production systems and expert systems
Previous: An introduction to inference: mainly expert systems

More on expert systems and inference

If you're still puzzled about inference and the difference between forward and backward chaining, I've worked some examples. You can find these in my lecture notes 6 and 7, in the folder by the Psychology library catalogue.

There are also examples in The Guide to Expert Systems, by Alex Goodall (Learned Information, 1985), RSL, Comp BD 36, Chapter 3. It contains a worked example of the difference between forward- and backward-chaining that is more concise than Winston's. Incidentally, Chapter 5 is a discussion, for non-computer-scientists, of several types of knowledge representation.

For general information on expert systems, see the article in The Encyclopaedia of AI. There's also a nice book called Expert systems, edited by Richard Forsyth, in the same bookshelf in the RSL.

Now you should know the two directions of inference. Forward-chaining is usually (but not always) data-driven: rules act on data, producing new data. Backward-chaining is usually (but not always) goal-driven: the system's goal is to prove some conclusion, and rules are called to prove ever simpler sub-conclusions until you get down to known facts.
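The two directions can be sketched in a few lines of code. This is a minimal illustration, not any particular expert-system shell: the rules and fact names are invented for the example, and each rule is just a pair (set of antecedents, consequent).

```python
# Hypothetical rules in the style of Winston's animal-identification
# examples: (set of antecedent facts, concluded fact).
RULES = [
    ({"has hair"}, "is a mammal"),
    ({"is a mammal", "eats meat"}, "is a carnivore"),
    ({"is a carnivore", "has tawny colour", "has black stripes"}, "is a tiger"),
]

def forward_chain(facts):
    """Data-driven: fire rules on the known facts, adding new facts,
    until a full pass produces no change."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: a goal is proved if it is a known fact, or if some
    rule concludes it and all that rule's antecedents can be proved."""
    if goal in facts:
        return True
    return any(consequent == goal and
               all(backward_chain(a, facts) for a in antecedents)
               for antecedents, consequent in RULES)
```

Given the data {"has hair", "eats meat", "has tawny colour", "has black stripes"}, forward_chain derives "is a tiger" among its conclusions, while backward_chain("is a tiger", ...) proves the same thing by working from the goal down to the given facts.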

The distinction between data-driven and goal-driven can be applied in many places. For example, a vision system might continuously monitor its perceptions, all the while updating a primal sketch, from that a 2-1/2 D sketch, from that a 3-D sketch, and finally a list of the objects it thinks it sees. Or it might form hypotheses about what objects are present (``I hear a roar'') and then call the lower-level modules to do only as much processing as is necessary to prove or disprove the existence of a tiger.

Incidentally, the names ``forward-chaining'' and ``backward-chaining'' denote general strategies, not specific tactics. For example, in the cycle on p 168 of Winston, a forward-chaining system has to know when to stop trying to fire rules. It might stop when firing a rule produces no change in the database; or it might stop once some fact about a particular individual appears (``the animal I just heard is a ...''). There are many variations on both types of inference.
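The two stopping tactics mentioned above can be made concrete. In this sketch (the rules and the stop_when parameter are my own invention, not Winston's), the loop either runs to quiescence, stopping when firing rules produces no change in the database, or halts early as soon as a caller-supplied test on the database succeeds.

```python
# Hypothetical identification rules: (set of antecedents, consequent).
ANIMAL_RULES = [
    ({"roars"}, "is a carnivore"),
    ({"is a carnivore", "has stripes"}, "is a tiger"),
]

def forward_chain_until(facts, rules, stop_when=None):
    """Fire rules repeatedly. Stop when a pass adds no new facts
    (quiescence), or as soon as stop_when(facts) is true."""
    facts = set(facts)
    while True:
        if stop_when is not None and stop_when(facts):
            return facts                      # early stop: test satisfied
        new = {c for ants, c in rules if ants <= facts} - facts
        if not new:                           # no rule changed the database
            return facts
        facts |= new
```

Run to quiescence, forward_chain_until({"roars", "has stripes"}, ANIMAL_RULES) derives "is a tiger". Passing stop_when=lambda f: "is a carnivore" in f halts the loop as soon as that fact appears, so "is a tiger" is never derived.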





Jocelyn Paine
Tue Jun 3 11:26:14 BST 1997