The subject covers a lot of ground: an eight-hour AI course is as hopeless an undertaking as an eight-hour psychology course. Many universities let students spend all three years of a degree on the subject. Oxford, of course, does not. Worse, AI is not only a large subject but a horribly fragmented one. Because it lacks general principles, there is much disagreement about aims, methods, and philosophy. Bear this in mind when you compare one person's work with another's.
Approaches to AI differ widely, depending not least on whether the researcher is interested in cognitive science or in engineering. However, I think most would agree with the following definition, given by Aaron Sloman [Computers and Thought 1989]:
The definition above is a clear description of AI's objectives, but it is harder to say what AI researchers should be doing in order to reach them. Different parts of the subject seem mutually incompatible, connectionism versus symbolic AI being the best-known example. There are incompatibilities on a smaller scale too, for example the split between classical AI and non-representational or behaviour-based AI that I'll mention at the end of these notes. It may be that, like architecture, AI can be no more than a collection of tools and techniques. Or it may be that some unifying concepts can be found which underlie the whole of the subject.
As far as psychology is concerned, an important way to look at AI is from the design stance. Suppose we set out to design a person from scratch. How would we solve the problems of vision, language, and hearing? How could we build a problem solver which balances the need to maintain concentration on existing goals against the need to stay alert to new sensory data, without becoming completely distracted by it? What steps are logically necessary in reconstructing object information from images, and how do these relate to the physics of light and the geometry of space? Given that we have limited computational resources, how can we best allocate them between different tasks? In vision, is it necessary to identify objects at all, or can we sometimes make do with simpler processing? Do we need to store beliefs in a logic-based language, or to reason logically? How can we predict how the people around us will behave, without running a complete simulation of their minds? In other words, when we study AI, we take an engineering approach to the mind.
This is an important meta-idea which you should always bear in mind. However, I haven't had time to reorient these notes around it, and I shall approach AI historically, confronting the variety of techniques and methods from the start. So I'll now present a series of papers published between 1950 and 1994, together with a book on the history of AI.