Next: Grander criticisms - Penrose, Searle and Dreyfus
Up: Goals and planning - criticisms
Previous: Goals and planning - criticisms
Back: to main list of student notes

## Criticisms

• By incorporating theorem-proving methods into a means-ends planner, one can build a planner which handles a wider variety of problem worlds than GPS. But the underlying planning method is the same. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving by Fikes and Nilsson. Published in 1971, reprinted in Readings in Planning (page 88) edited by Allen, Hendler and Tate (PSY KH:A 427). Commentary on page 57.
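The STRIPS representation of actions can be sketched as operators with preconditions, an add list and a delete list, applied to a state modelled as a set of ground literals. This is a minimal illustration, not code from the paper; the class and action names are invented.

```python
# A minimal sketch of a STRIPS-style operator applied to a world state
# represented as a set of ground literals. Names are illustrative.

class Operator:
    def __init__(self, name, preconditions, add_list, delete_list):
        self.name = name
        self.preconditions = frozenset(preconditions)
        self.add_list = frozenset(add_list)
        self.delete_list = frozenset(delete_list)

    def applicable(self, state):
        # Every precondition must hold in the current state.
        return self.preconditions <= state

    def apply(self, state):
        # STRIPS assumption: nothing changes except what the
        # add and delete lists mention.
        return (state - self.delete_list) | self.add_list

# Example: a one-step blocks-world action.
pick_up = Operator("pick-up(A)",
                   preconditions={"clear(A)", "on-table(A)", "hand-empty"},
                   add_list={"holding(A)"},
                   delete_list={"on-table(A)", "hand-empty"})

state = frozenset({"clear(A)", "on-table(A)", "hand-empty"})
if pick_up.applicable(state):
    state = pick_up.apply(state)
# state now contains "holding(A)" but no longer "hand-empty"
```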

• General-purpose planners like STRIPS and GPS use general-purpose search heuristics. It is inevitable that planners using such heuristics will, when solving complex problems, get caught in a combinatorial explosion. To overcome this, we can make the planner distinguish between details and essential information. When building our initial plan, we ignore the details. After we've made the plan, we then refine it by gradually introducing the details and reconstructing those portions of the plan that need them. This gives us ABSTRIPS. Planning in a Hierarchy of Abstraction Spaces by Sacerdoti. Published in 1974, reprinted in Readings in Planning (page 98). Commentary on page 57.
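The detail/essential distinction can be sketched by tagging each precondition with a criticality number, as ABSTRIPS does; planning at an abstract level then ignores low-criticality preconditions. The predicates and numbers below are invented for illustration.

```python
# Sketch of ABSTRIPS-style abstraction: each precondition carries a
# criticality number, and planning at level k treats preconditions
# with criticality below k as details to be filled in later.

preconditions = {
    "door-open(room1,room2)": 1,   # a detail: easy to achieve when needed
    "in-room(robot,room1)":   2,
    "connected(room1,room2)": 3,   # essential: cannot be changed at all
}

def visible_preconditions(preconds, level):
    """Preconditions that matter when planning at the given level."""
    return {p for p, crit in preconds.items() if crit >= level}

# At the most abstract level only the essential condition is considered;
# refinement lowers the level and reintroduces the details.
for level in (3, 2, 1):
    print(level, sorted(visible_preconditions(preconditions, level)))
```

Search in the abstract space is cheap because most preconditions are invisible; each refinement only has to patch the plan locally.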

• The planners above distinguish between planning and execution - it's assumed that all changes in the environment can be known to the planner. But this assumption is unrealistic, and so we must be able to repair our plans as they're being executed, if something unexpected happens. Learning and Executing Generalized Robot Plans by Fikes, Hart and Nilsson. Published in 1972, reprinted in Readings in Planning (page 189). Page 187 is a general commentary on the problem of unexpected events.
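The repair-during-execution idea can be sketched as a monitoring loop: after each action the agent re-observes the world, and replans from the actual state whenever an expected precondition fails. The toy world, the trivial planner, and the simulated perturbation below are all invented for illustration; they are not from the Fikes, Hart and Nilsson paper.

```python
# Execution monitoring sketch: a robot walks from room 0 to room 3,
# one room at a time. An unexpected event knocks it back; the monitor
# detects the mismatch and replans from the observed state.

def plan(position, goal):
    step = 1 if goal > position else -1
    return [(r, r + step) for r in range(position, goal, step)]

world = {"robot": 0}
goal = 3
perturbed = False
steps = plan(world["robot"], goal)

while world["robot"] != goal:
    if not steps:
        steps = plan(world["robot"], goal)
    src, dst = steps.pop(0)
    if src != world["robot"]:
        # Expectation violated: replan from the observed state,
        # not the state the old plan predicted.
        steps = plan(world["robot"], goal)
        continue
    world["robot"] = dst
    if dst == 2 and not perturbed:
        # An unexpected event: something knocks the robot back to room 0.
        world["robot"] = 0
        perturbed = True

# The loop ends with the robot at the goal despite the perturbation.
```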

• The notion of goal is too crude, being a binary partition of the environment into states that precisely achieve some outcome versus states that don't. This is unrealistic. Instead of allocating complete desirability to the goal states and complete undesirability to all the others, we should give each state its own utility. For instance, in filling a petrol tank, you want to spill as little petrol as possible; but it may not be possible to fill it without spilling any. So we have a set of states whose utility depends on the amount of spilt petrol.

Moreover, realistic agents will usually have more than one desirable outcome in mind. STRIPS-style goal-based planning is no help in building an agent which can trade off priorities between different desirable outcomes.

To overcome these two defects, we need to find a new semantics for goals, in terms of continuously variable preferences between outcomes. This should be based on decision theory. See pages 210-212 of Planning and Control by Dean and Wellman (Morgan Kaufmann 1991; PSY KH:D 034). This account is based on a paper published in 1991, Preferential semantics for goals by Wellman and Doyle, in Proceedings AAAI-91.
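The petrol-tank example can be made concrete with a utility function that varies continuously with the amount spilt, instead of a binary goal test. The function and the numbers below are invented for illustration, not taken from Dean and Wellman.

```python
# A decision-theoretic sketch: each outcome gets a utility that varies
# continuously with how much petrol is spilt, so "full with a small
# spill" can still beat "not full with no spill". Numbers are arbitrary.

def utility(tank_filled, litres_spilt):
    # Filling the tank is worth a fixed amount;
    # every spilt litre subtracts from it.
    return (10.0 if tank_filled else 0.0) - 2.0 * litres_spilt

outcomes = [
    ("stop early, no spill",       utility(False, 0.0)),
    ("fill fast, large spill",     utility(True, 3.0)),
    ("fill carefully, tiny spill", utility(True, 0.2)),
]

# The agent prefers the outcome of highest utility rather than asking
# the binary question "is the goal achieved?".
best = max(outcomes, key=lambda o: o[1])
```

A binary goal test would either reject the tiny-spill outcome (if the goal demands no spill at all) or rank it equal to the large-spill one; the utility ordering captures the trade-off directly.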

• So far, we've assumed that agents work with world models. These are symbolic descriptions of the world, in which different tokens uniquely represent distinct individuals. See for example the POETIC system which I referred to in my first lecture. One of the things you do when planning is to use this model to simulate the way in which your actions will change the world, bringing you nearer to or further from a desirable goal.

But the type of behaviour generated by a central controller (planner plus execution mechanism) acting on a world model is inflexible, "brittle" and slow. We should investigate an alternative approach where behaviour emerges naturally as an effect of co-operation between many simple modules. Such an agent will still have goals, but they're represented in a different way from goals in conventional planners. In particular, they use a deictic representation. Instead of using separate tokens to identify different objects, the agent has a self-centered representation, and uses symbols like "the sprayer I am holding now", or "the food I can see in front of me". This means less search during planning, and less perceptual decoding. Situated Agents Can Have Goals by Pattie Maes. Published in 1990. From Designing Autonomous Agents edited by Maes (PSY KH:M 026).
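The contrast between an objective world model and a deictic, self-centred one can be sketched as follows. The tokens, slots and features below are invented for illustration.

```python
# Objective world model: a distinct token for each distinct individual.
objective_model = {
    "sprayer-17": {"type": "sprayer", "location": "bench-2"},
    "sprayer-18": {"type": "sprayer", "location": "gripper"},
    "apple-3":    {"type": "food",    "location": "table-1"},
}

# Deictic representation: a small, fixed set of agent-centred roles.
# Perception fills the slots directly; no token need be identified.
deictic_view = {
    "the-sprayer-I-am-holding": "sprayer",
    "the-food-in-front-of-me":  "food",
}

# With the objective model, answering "which sprayer am I holding?"
# means searching all tokens for one whose location is the gripper:
held = [token for token, features in objective_model.items()
        if features["type"] == "sprayer"
        and features["location"] == "gripper"]
# held == ["sprayer-18"], found by search over tokens; the deictic
# slot delivers the same information with no search at all.
```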

• The idea that robots should explicitly represent their goals is wrong. It's based on a naive folk-psychological view. Just as astrology is a naive pre-scientific theory of the night sky, so folk-psychological entities like goals are pre-scientific constructs. They arise from our own introspection; but there's no reason to expect our introspective perceptions of our own minds to reflect reality any more closely than did the Babylonians' perceptions of the night sky. This stance is called eliminative materialism.

Not only is it bad cognitive science to try to find such entities in the mind, it's bad engineering to build them into our robots. Doing so will lead to inefficient, incapable systems. Taking Eliminative Materialism Seriously: A methodology for Autonomous Systems Research by Tim Smithers, from Towards a Practice of Autonomous Systems, edited by Varela and Bourgine (MIT 1992: PSY KH:V 042; RSL M92.C00938).

• In any case, the notion of representation is too weak! Instead of explaining cognition as the manipulation of representations by computational systems, we should see it as state-space evolution in dynamical systems. Explaining the Behaviour of Springs, Pendulums, and Cognizers by Michael Wheeler. Sussex University CogSci report CSRP 284. Published June 1993. AI Box photocopy W87.
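The state-space view can be illustrated with one of the title examples, a damped spring: behaviour is a trajectory through a state space governed by a differential equation, with no representations manipulated at all. The equation and parameters below are standard physics, chosen arbitrarily for illustration.

```python
# State-space evolution of a damped spring: x'' = -k*x - c*x'.
# Behaviour is a trajectory (x, v) through state space, not the
# manipulation of symbolic representations.

def step(x, v, k=1.0, c=0.2, dt=0.01):
    """One forward-Euler step of the damped spring's dynamics."""
    a = -k * x - c * v
    return x + v * dt, v + a * dt

x, v = 1.0, 0.0          # initial state: stretched, at rest
for _ in range(5000):    # evolve the state for 50 time units
    x, v = step(x, v)

# The trajectory spirals in towards the attractor at (x, v) = (0, 0):
# the "behaviour" (settling to rest) falls out of the dynamics alone.
```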

• But unless these dynamical systems embody certain non-local quantum effects, they may not be able to implement the "holistic perception" required by human cognition. No Turing-equivalent computer can do so (and, as far as I can see, none of the other dynamical systems we can currently build could either). Penrose, see below.


Jocelyn Ireson-Paine
Wed Feb 14 23:51:11 GMT 1996