Review of Cognitive Carpentry and Artificial Minds

A review of Cognitive Carpentry by John L. Pollock (MIT Press, 1995) and Artificial Minds by Stan Franklin (MIT Press, 1995)

Train sets, Meccano, SimCity: I've always liked to build working models. And a working model of mind has a special fascination. This is surely sufficient reason for doing AI; but I can also feel a glow of satisfaction at knowing I'm helping my neighbouring disciplines survive.

As John Pollock says in Cognitive Carpentry, philosophy needs AI as much as AI needs philosophy. One necessary test of a theory of mind is that we can build an AI system which implements the theory. It behoves philosophers to remember this, for many popular philosophical theories are not implementable.

Stan Franklin takes a similar attitude to psychology, asserting in Artificial Minds that cognitive scientists, with their lust to build models, understand mind more deeply than psychologists do.

But why should such models teach us anything about our own minds? Consider Pollock's work. His objective is a computational theory of rational thought. Taking what philosopher Daniel Dennett calls the "design stance" to AI, he regards rationality as evolution's engineering solution to a difficult design problem. The constraints - logical and computational - on the problem may be so tight that there is only one reasonable solution. If so, then in building a rational machine, we will learn how human rationality necessarily operates.

More specifically, rationality solves the problem of surviving in an uncertain, unstable environment. What mental equipment can we use? Firstly, mechanisms for yielding beliefs about our situation: for example, that it has started snowing. Secondly, likes and dislikes about general features of situations: we loathe cold. These attitudes are hard-wired to help us keep body temperature and other variables within safe limits. To do so, we must plan a course of action that changes our situation to one we like more: by abandoning our shopping trip, perhaps, and walking home to a warm fire.

This sounds familiar: an agent derives goals from its beliefs, then makes plans to achieve them. But it must choose actions sensibly: if a job could be done equally well using either water or liquid radium, we'd hardly regard someone who goes further than the nearest tap as rational. Concentrating on the theory of planning, AI has tended to leave evaluation and selection to decision theory, a kind of economic cost-benefit analysis. Pollock combines the two into a unified theory, with useful results on scheduling and action selection. Detailed plan synthesis requires further research, but existing planners can be incorporated into his framework.
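
To see the flavour of the combination, here is a minimal Python sketch of decision-theoretic plan selection. It is my own illustration, not OSCAR's code; the plans, probabilities, and costs are invented, and each plan is scored simply by its expected utility (probability-weighted benefit minus cost).

```python
# A minimal sketch of decision-theoretic plan selection (not Pollock's
# algorithm). Each candidate plan is scored by expected utility:
# the sum of outcome values weighted by their probabilities, minus cost.

def expected_utility(plan):
    benefit = sum(p * value for p, value in plan["outcomes"])
    return benefit - plan["cost"]

# Illustrative plans for the water-fetching example in the text.
plans = [
    {"name": "fetch water from the nearest tap",
     "cost": 1.0, "outcomes": [(0.99, 10.0)]},    # cheap, near-certain
    {"name": "fetch liquid radium from afar",
     "cost": 50.0, "outcomes": [(0.90, 10.0)]},   # same job, absurd cost
]

best = max(plans, key=expected_utility)
print(best["name"])   # -> fetch water from the nearest tap
```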

This ``practical cognition'' - deciding which actions to adopt - is one component of rational thought. It relies on the other, epistemic cognition, to perform inferences and supply it with beliefs. Computation time is scarce, so - a key point - epistemic cognition must be driven by practical cognition's interests and not waste time on irrelevant reasoning. When cycling home, it's more important to avoid cars than to plan tonight's meal. Pollock has implemented an interest-driven reasoner based on this principle. His tests suggest it does well, compared with various theorem-provers, at avoiding unnecessary inferences. He also describes a defeasible reasoner (one that can undo existing beliefs as well as generating new ones) which combines ideas from default logic, circumscription and argument-based approaches.
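
The flavour of defeasibility can be caught in a few lines. The Python below is my own toy, far simpler than Pollock's argument-based reasoner: a belief drawn by a default rule is withdrawn when a defeater for that rule becomes known.

```python
# A toy defeasible inference: "birds normally fly" yields a belief
# that is later undone when a defeater ("Tweety is a penguin") arrives.
# This illustrates the idea only; it is not Pollock's system.

beliefs = {"bird(tweety)"}
default = ("bird(tweety)", "flies(tweety)")   # birds normally fly
defeater = "penguin(tweety)"                  # ...unless they are penguins

def update(beliefs):
    premise, conclusion = default
    if premise in beliefs and defeater not in beliefs:
        beliefs.add(conclusion)               # draw the default conclusion
    else:
        beliefs.discard(conclusion)           # undo the existing belief

update(beliefs)
print("flies(tweety)" in beliefs)   # True: inferred by default

beliefs.add("penguin(tweety)")
update(beliefs)
print("flies(tweety)" in beliefs)   # False: the inference is defeated
```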

Even with interest-driven reasoning, an agent relying solely on logical inference would be impossibly slow. "Quick and inflexible" modules, tailored to deliver approximately correct results without extended deliberation, are also needed. Some - jerking your hand away from heat - make plans. Others, such as our intuitive comparisons of areas, generate beliefs. Pollock integrates these into a common architecture, viewing a rational agent as a bundle of such modules with logical reasoning sitting on top and tweaking their output as required.
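
Here, again as my own gloss rather than anything from the book, is a Python sketch of that layering: a quick-and-inflexible module proposes an action without deliberating, and the slower logical layer overrides it only when it knows better.

```python
# A sketch of a two-layer agent: a fast, inflexible reflex proposes an
# action; slower deliberation tweaks the proposal when it can do better.
# The situations and actions are invented for illustration.

def reflex_module(percepts):
    # Quick and inflexible: approximately right, no deliberation.
    if "heat" in percepts:
        return "withdraw hand"
    return "do nothing"

def deliberate(percepts, suggestion):
    # Slow and logical: override the reflex if it is known to be wrong.
    if suggestion == "withdraw hand" and "carrying hot soup" in percepts:
        return "set the bowl down first"
    return suggestion

def act(percepts):
    return deliberate(percepts, reflex_module(percepts))

print(act({"heat"}))                        # -> withdraw hand
print(act({"heat", "carrying hot soup"}))   # -> set the bowl down first
```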

Pollock has implemented his theory as a Lisp program, OSCAR, available via http://info-center.ccit.arizona.edu/~oscar/. His company, Artilects, is applying OSCAR to medical decision support, amongst other problems; I will be interested to see how it scales up to them. In the meantime, his book offers insights into planning, defeasible reasoning, decision theory, and agent architectures, and I recommend it. Although intended for professionals in AI and philosophy, it is fairly self-contained: it requires facility with logic and probability theory, but little knowledge of other topics.

In contrast to Cognitive Carpentry, Franklin's Artificial Minds is written for the non-specialist: it is billed as an informal tour of some artificial "mechanisms of mind" and of three AI debates.

The diversity of AI would challenge any writer: two major paradigms, symbolic AI and connectionism, and several minor ones, all home to a variety of techniques and approaches. Symbolic AI is based on the hypothesis that we think by manipulating mental symbols (standing for objects, events, and so on) according to explicit rules. On this view, symbol manipulation suffices to explain human intelligence (and, as with OSCAR, to make a machine behave intelligently); our explanations don't need to descend to the level of the brain's neural hardware, any more than a programmer need explain his program in terms of logic gates. Connectionist models go deeper, imitating - in a simplified way - how our neurons operate.
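
The contrast fits into a dozen lines of Python. Both halves below are my own miniature illustrations, not examples from either book: the first applies an explicit rule to symbols, the second passes numbers through a single simplified neuron.

```python
# Symbolic AI in miniature: an explicit rule rewrites symbols.
rules = {("snowing", "cold")}        # if snowing then cold
facts = {"snowing"}
facts |= {concl for prem, concl in rules if prem in facts}
print("cold" in facts)               # True, derived by an explicit rule

# Connectionism in miniature: a threshold unit; the "knowledge"
# lives in the weights, not in any symbol or rule.
weights = [0.8, -0.4]
inputs = [1.0, 0.0]                  # patterns of activity, not symbols
activation = sum(w * x for w, x in zip(weights, inputs))
print(activation > 0.5)              # True: the unit fires
```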

Though the paradigms differ greatly, they usually stand together in their emphasis on modelling isolated mental functions. Some critics argue that we should instead try to understand the whole organism, starting with simple agents like insects. The point is - think of OSCAR's practical cognition - that real minds evolved to survive in a complex environment by producing the next action. Our mental functions originated subservient to this end, so this is the context within which we must place our models. Broadly speaking, this is the artificial life or situated agents approach.

Franklin promotes a combination of this "action selection" view with another which - following Marvin Minsky's Society of Mind and the rise of distributed computing - is increasing in popularity, namely that mind is not a hierarchical system overseen by a global controller, but a collection of autonomous modules each devoted to a specialised task. So although he describes one symbolic AI program, SOAR (one of a few programs claimed to embody a unified theory of cognition), and some examples of connectionism, as well as the debate between the two, he gives more attention to artificial life and multi-agent research, including Wilson's Animat (a creature which evolves rules about how to find food) and Pattie Maes's behaviour networks. He also describes Pentti Kanerva's nifty model of sparse distributed memory, and Hofstadter and Mitchell's excellent Copycat analogical reasoner.
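
Kanerva's memory is simple enough to sketch. The Python below is a much-reduced illustration with invented parameters, not Kanerva's own formulation: data is written to every "hard location" whose address lies within a Hamming radius of the write address, and read back by summing counters and thresholding, so that similar addresses retrieve similar contents.

```python
# A pocket sparse distributed memory: random hard locations, counter
# vectors, write-within-radius, majority-vote read. Parameters (word
# size, number of locations, radius) are illustrative only.
import random

N, M, RADIUS = 64, 500, 24
random.seed(0)
hard = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def near(a, b):
    # Is the Hamming distance between a and b within RADIUS?
    return sum(x != y for x, y in zip(a, b)) <= RADIUS

def write(addr, data):
    for loc, ctr in zip(hard, counters):
        if near(addr, loc):
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1

def read(addr):
    sums = [0] * N
    for loc, ctr in zip(hard, counters):
        if near(addr, loc):
            sums = [s + c for s, c in zip(sums, ctr)]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)              # store autoassociatively
noisy = pattern[:]
noisy[0] ^= 1                        # corrupt one bit of the address
print(read(noisy) == pattern)        # usually True: recalled intact
```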

These originated mostly between 1985 and 1991; Franklin omits earlier staples such as the General Problem Solver and expert systems, together with natural-language understanding and machine vision. These can be found elsewhere, notably in Daniel Crevier's AI: The Tumultuous History of the Search for Artificial Intelligence, so this does not diminish the worth of the book. Indeed, it's good to see a popular writer who doesn't feel obliged to recapitulate a science's entire development, thus forcing himself to squeeze the quarks and quasars into his final chapter's last few pages. That said, pointers to other popular accounts would help the reader obtain a balanced view.

Franklin gives a nice survey of recent work for the general reader, though I found some of his program descriptions unclear. More examples would help. Textbooks tend to omit the topics he covers, so Artificial Minds would also interest students.


Jocelyn Ireson-Paine
A revised version of this article appeared in the Times Higher Education Supplement for May 17th 1996.