Roger Schank's group: some of the best-known and longest-running
research on symbolic models of memory. What does he think AI is? I've
taken this list from his article ``What is AI, anyway?'' in The
Foundations of Artificial Intelligence, edited by Partridge and Wilks
(CUP 1990; PSY KH:P 025). Many of the comments and examples are mine,
not his.
- Representation. Probably the most significant issue. What do we
know? How can we get a machine to know it?
Although Schank doesn't say so, there's more than one aspect to this.
E.g. in Poetic, it is not enough to say that the world model uses logic
as a knowledge representation. This describes the underlying language,
but not what is said in it. We need to consider both the symbol
level (elementary symbols and operations) and the knowledge
level (description in terms of goals etc.). See the entry for
``Knowledge level'' in the Encyclopaedia of AI. The two levels are
contrasted in a small sketch after this list.
- Indexing. See the paradox of the expert, last week. Note the
emphasis on indexing in the memory model I describe below.
Important point: we should not constrain our engineering or our
cognitive models by what conventional computers can do. As connectionism
suggests, biological hardware is very different. It may, for example, be
more efficient at ``automatically'' performing associations that, on a
conventional machine, would require complex indexing methods (a toy
explicit-indexing sketch follows this list). For an extreme example,
see the holographic pattern matcher described in ``Transforming the
prospects for robot vision'' (AI photocopy B165).
- Dynamic modification. Any intelligent program will need to change
its own methods of representation. Recall, for example, last week's
discussion of memory for chess positions.
- Decoding from the real world into an internal representation. E.g.
Poetic going from police logs to its logic-based world model. How do we
decode, how do we cross-reference sensory data, and so on? Indeed, do we
do so at all? (A toy decoding sketch follows this list.)
- Inference. How do we combine pieces of information to make new
ones? And when? For example, on reading ``John went down the aisle and
put a can of tuna in his basket'', do we immediately infer that he was
in a supermarket, or do we wait until we need to know what he was doing?
The issue of timing is relevant to generalisation (see below). Do we
store examples and generalise later, or is a certain amount of
generalisation performed automatically as we store each example? (Both
timings are sketched after this list.)
- Controlling the combinatorial explosion. How do you prevent
inference from going on for ever? How do you decide how much you want to
know? (A depth-bounded sketch follows this list.) See the fable about
R2D2 in ``Cognitive Wheels'' by Dennett (AI box photocopy D74).
- Generalisation. A good generaliser must be able to connect
disparate experiences (the essence of creativity, says Schank).
An excellent example: the Bongard problems in
Chapter XIX of ``Gödel, Escher, Bach'' by Hofstadter (PSY KH:H 067).
Here, generalisation means finding what the figures on one side
have in common that distinguishes them from those on the other side.
But the connection may not fit further examples (the problem of
induction), so the generaliser must be able to experiment, re-fit and
revise (sketched after this list).
- Curiosity. In Schank's view, curiosity depends on prediction. A
system's predictions may fail, and it should try to find out why. This
will involve formulating suitable questions and ways to answer them;
the answers might come from internal knowledge or from experiment. These
predictions might arise from generalisation, but could also come from
other kinds of processing: e.g. a plan that fails.
- Prediction and recovery. Before you can discover why a prediction
failed, you must be able to tell that it has failed. (A minimal
failure-detection sketch follows this list.)
- Creativity. There are very few creative AI programs; AM is one
exception. Briefly, AM was a program that created new mathematical
concepts and conjectures from old ones, starting from set theory. It
didn't churn them out at random, throwing any ideas together, but
ranked them by interestingness, determined by, e.g., how often the same
concept had been rediscovered and how closely it was related to other
interesting concepts. (The shape of this loop is sketched after this
list.)
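
The sketches referred to above follow. All are caricatures in Python;
the predicates, features and data in them are my inventions, not taken
from any of the systems mentioned. First, the representation item: the
same world can be described at the symbol level (ground tuples and
membership tests) and at the knowledge level (goals, saying nothing
about how the symbols are encoded or indexed).

    # Symbol level: the raw vocabulary the world model is written in.
    # These predicates are invented, not Poetic's actual representation.
    facts = {
        ("at", "suspect", "station"),
        ("reported", "witness", "theft"),
    }

    def holds(pred, *args):
        """Symbol-level operation: is this ground fact stored?"""
        return (pred, *args) in facts

    # Knowledge level: a description in terms of goals, independent of
    # how the symbols above are encoded or indexed.
    goal = {"agent": "police", "achieve": ("arrested", "suspect")}

    print(holds("at", "suspect", "station"))  # True
    print(goal["achieve"])                    # ('arrested', 'suspect')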
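
For the indexing item: on a conventional machine, association has to be
engineered. One standard device is an inverted index from features to
stored episodes, so that a cue retrieves whatever shares features with
it. The episodes and features are invented.

    from collections import defaultdict

    episodes = {
        "e1": {"restaurant", "waiter", "paid"},
        "e2": {"supermarket", "basket", "tuna"},
        "e3": {"restaurant", "menu", "left-no-tip"},
    }

    # Inverted index: feature -> names of episodes containing it.
    index = defaultdict(set)
    for name, features in episodes.items():
        for f in features:
            index[f].add(name)

    def remind(cue):
        """Retrieve episodes sharing the most features with the cue."""
        scores = defaultdict(int)
        for f in cue:
            for name in index.get(f, ()):
                scores[name] += 1
        return sorted(scores, key=scores.get, reverse=True)

    print(remind({"restaurant", "paid"}))  # ['e1', 'e3']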
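
For the decoding item: a toy version of going from a log entry to
logic-like assertions. The line format and the resulting predicates are
invented; Poetic's real decoding is nothing this simple.

    import re

    # Invented log format: "HH:MM | actor action object"
    LINE = re.compile(r"(\d\d:\d\d) \| (\w+) (\w+) (\w+)")

    def decode(line):
        """Turn one log line into ground assertions (predicate tuples)."""
        m = LINE.match(line)
        if m is None:
            return []  # undecodable input stays undecoded
        time, actor, action, obj = m.groups()
        return [(action, actor, obj), ("at-time", action, time)]

    print(decode("21:15 | smith reported burglary"))
    # [('reported', 'smith', 'burglary'), ('at-time', 'reported', '21:15')]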
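
For the inference item: the timing question as two strategies over one
invented rule. Eager inference draws the supermarket conclusion as the
fact is stored; lazy inference waits until somebody asks.

    # Invented rule: putting tuna in a basket suggests a supermarket.
    def rule(facts):
        if ("put-in-basket", "john", "tuna") in facts:
            return {("location", "john", "supermarket")}
        return set()

    eager = set()
    def store_eager(fact):
        """Eager: run the rules every time a fact arrives."""
        eager.add(fact)
        eager.update(rule(eager))

    lazy = set()
    def query_lazy(fact):
        """Lazy: only run the rules when a question is asked."""
        return fact in lazy or fact in rule(lazy)

    store_eager(("put-in-basket", "john", "tuna"))
    print(("location", "john", "supermarket") in eager)  # True
    lazy.add(("put-in-basket", "john", "tuna"))
    print(query_lazy(("location", "john", "supermarket")))  # True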
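
For the combinatorial-explosion item: the crudest possible brake is a
depth bound on forward chaining, i.e. deciding in advance how much you
want to know. The chained rules are invented.

    def forward_chain(facts, rules, max_depth):
        """Apply rules repeatedly, stopping after max_depth rounds
        so that inference cannot go on for ever."""
        facts = set(facts)
        for _ in range(max_depth):
            new = set()
            for rule in rules:
                new |= rule(facts) - facts
            if not new:  # nothing left to infer: natural stop
                break
            facts |= new
        return facts

    # Each invented rule's conclusion enables the next rule.
    rules = [
        lambda f: {("b",)} if ("a",) in f else set(),
        lambda f: {("c",)} if ("b",) in f else set(),
        lambda f: {("d",)} if ("c",) in f else set(),
    ]

    print(forward_chain({("a",)}, rules, max_depth=2))
    # contains ('a',), ('b',), ('c',) but never ('d',):
    # the bound cut inference off first.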
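
For the generalisation item: a Bongard-flavoured caricature. A
generalisation is a set of features shared by every figure on one side
and absent from the other; when a further example breaks it, the
generaliser re-fits. All the features are invented.

    def generalise(positives, negatives):
        """Features that all positives share and no negative has."""
        common = set.intersection(*positives)
        return {f for f in common
                if not any(f in n for n in negatives)}

    positives = [
        {"closed-curve", "one-region", "small"},
        {"closed-curve", "one-region", "large"},
    ]
    negatives = [{"closed-curve", "two-regions"}]

    print(generalise(positives, negatives))
    # {'one-region'}: 'closed-curve' fails to discriminate

    # Problem of induction: a new example breaks the hypothesis,
    # so re-fit; here nothing in this feature language survives.
    negatives.append({"one-region", "open-curve"})
    print(generalise(positives, negatives))  # set()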
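
For the curiosity and prediction items: before a system can ask why a
prediction failed, it must notice the failure. A minimal monitor
compares prediction with observation and, on a mismatch, formulates a
question and some ways of answering it.

    def monitor(prediction, observation):
        """Detect a failed prediction; turn it into a question."""
        if prediction == observation:
            return None  # nothing to be curious about
        return {
            "question": f"why expect {prediction!r} "
                        f"but observe {observation!r}?",
            "answer-from": ["internal knowledge", "experiment"],
        }

    # Invented scenario: a plan predicted the waiter brings the bill.
    print(monitor("waiter-brings-bill", "waiter-brings-bill"))  # None
    print(monitor("waiter-brings-bill", "waiter-walks-away"))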
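
For the creativity item: the shape, and only the shape, of AM's loop.
New concepts are generated from old ones and ranked by interestingness.
The two scoring ingredients follow the ones mentioned above, but the
concept-forming operator and the numbers are invented.

    from itertools import combinations

    # Invented seed 'concepts', each just a frozenset.
    concepts = {frozenset({1}), frozenset({2}), frozenset({3}),
                frozenset({1, 2})}

    def generate(concepts):
        """New concepts from old (unions of pairs), counting how
        often each one is rediscovered."""
        found = {}
        for a, b in combinations(concepts, 2):
            c = a | b
            found[c] = found.get(c, 0) + 1
        return found

    def interestingness(c, rediscoveries, concepts):
        """Invented score: rediscovery count plus the number of
        existing concepts the new one overlaps."""
        return rediscoveries + sum(1 for k in concepts if k & c)

    found = generate(concepts)
    for c in sorted(found,
                    key=lambda c: interestingness(c, found[c], concepts),
                    reverse=True):
        print(sorted(c), found[c])  # {1,2} ranks first: rediscovered 3x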