Footnotes

...\author
popx@vax.ox.ac.uk

...
Penrose's criticisms of AI in [Emperor's New Mind] are among the most recent.

...
In [Minds, Brains and Programs].

...
Why did neural-network research dry up in the early 1960s? If you can find a copy, it's worth reading the epilogue in the second edition of Minsky and Papert's Perceptrons. Failing that, see Chapter 1 of [*]brains machines and mathematics.

Another reason for the lack of success may be the unreliability of the early simulations. See page 1390 of [Machines which learn].

...
Two other popular-science papers from 1950-1951, [*]an imitation of life and [*]a machine that learns, describe little electric ``turtles'' that could roam around, seek light and power, and even undergo Pavlovian conditioning. These are an example of a more detailed approach to neural modelling: in this case, the neurons were simulated by valves, capacitors and resistors.

Note the references to cybernetics: in the first paper, these include feedback, homeostasis, and desirable states (goals) - e.g. being near, but not too near, a light. A lot of early AI was inspired by such ideas. Page 44 contains what may be the first example of AI hype.

The second paper [*]a machine that learns, raises the question of how a learning system should detect significant correlations - and discard what it's learnt if they cease to be significant.

...
One problem about which there's still much debate is the exact relationship between AI programs and cognitive models that represent their knowledge as strings of discrete symbols (like those I'll cover in these notes), and connectionist systems in which the knowledge is implicit in synaptic weights and neural activations. See [Artificial Intelligence: Pratt] for a clear two-page statement of one view and some further references.

...
Dreyfus is one of the best-known critics of the logical approach. See [Mind over Machine]. Daniel Dennett has argued that while the problem of selecting such premises is indeed very hard, it does not signal the death of AI. See [Cognitive Wheels], in which he starts with an elegant little fable of three robots. For one example of a commercially valuable logic-based system designed to advise motorists about accidents, see [*]poetic. This is a good exemplar of logic-based symbolic AI.

...
You may find it easier to follow the account in [The Logic Theory Machine]. Also, it may help to draw some tree diagrams. If you want further explanation of search and heuristics, then there's a very good account, with diagrams and examples, in [*]learning and problem solving, pp 8-24.

...
One subject where it's very important to reduce search is chess. As well as all the work on faster chess machines, there's been a lot of interest in cognitive science on the psychology of chess players. See Human chess skill by Neil Charness in [Chess skill in man and machine], and the papers he cites, particularly the Chase and Simon reference.

Newell has worked on chess - he makes some suggestions for a logic-based chess program in his 1957 article The Chess Machine in []. Pages 73-76 are a very clear statement of the search problems involved. In Chess-playing programs and the problem of complexity, from [Computers and Thought 1963], Newell, Shaw and Simon compare the performance of the then-existing chess programs. This article has a good explanation of minimaxing, also covered in [*]learning and problem solving.
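If you've not met minimaxing before, the following sketch may help fix the idea. It is not taken from any of the chess programs above: the game tree and leaf scores are invented for illustration, and real programs combine this with a static evaluation function and depth cut-offs.

```python
# Minimal minimax sketch. A game tree is either a number (a leaf score,
# from the maximising player's point of view) or a list of subtrees.
def minimax(tree, maximising=True):
    if isinstance(tree, (int, float)):   # leaf: use its static evaluation
        return tree
    scores = [minimax(subtree, not maximising) for subtree in tree]
    return max(scores) if maximising else min(scores)

# A two-ply tree: each of our three moves leads to a position from which
# the opponent chooses the reply that is worst for us.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))   # branch minima are 3, 2, 2, so we choose 3
```

The point to notice is the alternation: the value of a position for one player is computed by assuming the other player will pick the move of least value to the first.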

...
[*]learning and problem solving

...
Most AI textbooks contain descriptions of GPS - if you aim to write about any of the planning work, you should certainly include it. Boden gives a clear non-technical account in pages 354-357 of [AI and Natural Man]. Winston's account in [*]artificial intelligence: winston is probably one of the best.

See page 212 of [Machines who Think] for some informal comments by Newell and Simon on the spark of intuition that produced GPS. Until Crevier's book [*]ai: the tumultuous appeared, this was the only general history of AI ever written. It contains a number of interviews with Newell, Minsky, and other founders of AI.

...
How did planning evolve from here? You can get a quick survey by reading the chapter introductions in [Readings in Planning]: the Foreword, then pages 57, 109-110, 187-188, 289-290, 391-392, 521, 579, 647-649.

It was realised early on that one of the key problems was interactions between subgoals. For example, if your goal is to make the table look nice and set it for tea, then polishing it will disturb the goal of setting it, since you'll have to wait for the polish to dry. The first program to tackle interacting goals was Hacker. Essentially, it planned as though the goals were independent, and then tried to correct faults in the resulting plans. See pages 286-297 and 360-366 of [AI and Natural Man].
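Hacker's plan-then-debug strategy can be caricatured in a few lines. This is only an illustration of the idea, not of Hacker itself (which worked in a blocks world and learnt its patches): the goals, plan steps and the single known interaction below are all invented.

```python
# Toy plan-then-debug sketch: plan for each goal as if it were
# independent, then repair a known bad interaction in the result.
def plan(goal):
    # Invented one-step "plans" for the two table-setting goals.
    library = {"nice table": ["polish table"],
               "set for tea": ["lay cups and plates"]}
    return library[goal]

def debug(steps):
    # Known interaction: laying the table before the polish is dry
    # ruins the setting, so insert a wait after polishing.
    fixed = []
    for step in steps:
        fixed.append(step)
        if step == "polish table":
            fixed.append("wait for polish to dry")
    return fixed

naive = plan("nice table") + plan("set for tea")
print(debug(naive))
# ['polish table', 'wait for polish to dry', 'lay cups and plates']
```

The naive concatenated plan is faulty precisely because the goals were planned independently; the debugging pass recognises the interaction and patches it, which is the essence of the approach.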

...
People from conventional AI often underestimate the material problems a robot or animal would face in the real world. These include motion control, perception, noise, and dealing with the opposition. Charniak and McDermott, pages 527-543, describe some of the problems faced by (a) assembly robots and (b) games players.

...
How did Newell believe production systems could benefit psychology? Read [*]you cant play 20 questions..., and follow it with pages 154-168 and 210-213 from [Computer Models of Mind].

...
If you intend to write a Finals question on production systems, you should certainly look at SOAR in [*]unified theories of human cognition at some stage of the course. Try to read the first four chapters. You will find a short survey of the book in Behavioral and Brain Sciences volume 15, number 3, September 1992. This also contains a number of criticisms of SOAR and the idea of unified theories by cognitive psychologists, computer scientists and others.

...
For a technological example of an Evans-style relational description, see [Learning shape descriptions] and [The Mechanic's Mate]. The Mechanic's Mate was a program designed to reason about (for example) how best to remove nails from wood. In order to do this, it had to look at the work and decode the resulting image into a symbolic description (giving, e.g. the type of nail) which could then be reasoned about when selecting a suitable tool.

...
John Hallam, in [Computational Theories...], applies Marr's explanatory framework. You may find this a useful example: I haven't checked it.

...
If you want to add to this, I recommend [Intelligence without Representation], and [*]an emerging paradigm in robot behaviour. You'll find that the second is useful in giving a concise summary of the classical approach to AI and robotics.

...
There's one on the general history of AI: [*]ai: the tumultuous. You can skip anything before Boole and Babbage, as well as speculations about the future of AI: concentrate on the research and commercial achievements Crevier describes, and the flow of ideas. The book is oversimplified, but not bad as a survey. Then contrast with the new approaches described in the chapter on Real Artificial Life in [*]artificial life. The rest of this book is also interesting, and I find that it conveys very well the enthusiasm and excitement of a new field.

Jocelyn Paine
Wed Feb 14 23:52:04 GMT 1996