Symbol manipulation systems in AI as psychology



Here, the situation is much less clear. Firstly, it is only a hypothesis that all or part of the mind is (or can be modelled as) a symbol manipulation system. Assuming it can be, or is, how are the symbols and the rules of manipulation encoded?

Most practitioners assume that, as long as we can find out what the symbols and rules are, we need not worry about the encoding. The mind can be studied at a purely symbolic level, without having to delve into the hardware. In particular, we can map the symbols and rules into a computer, and hence create a program which solves the same problems in the same way as the original.

So a symbolic-AI theory of some mental process must include:
   - the symbols (representations) the process uses, and
   - the rules or operations by which those symbols are manipulated.

Altogether, these constitute the functional architecture of the part of the mind being modelled. The name comes from computing, and implies that one is ignoring lower levels of detail. For those who know a bit about computers, one might talk about the functional architecture of a 486 PC: it has a linear array of memory cells, each capable of holding 32 binary digits; these digits can be treated as numbers, and added, multiplied, etc. There is a limit on the size of memory, usually between one million and ten million cells. There are some elementary operations for manipulating these numbers; these are also represented as 32-digit sequences, and are also stored in memory. Like the numbers, they can be operated on and changed. Only one number can be operated on at a time, and this must be done by copying it from memory into an accumulator. Addition takes this number of nanoseconds, multiplication takes that number of nanoseconds, and so on.
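To make the idea concrete, here is a minimal Python sketch of a functional architecture like the one just described: a linear memory of word-sized cells, a single accumulator, and a few elementary operations, with everything below that level ignored. The sizes and operation names are illustrative assumptions, not a description of a real 486.

    # A sketch of a simple functional architecture: a linear memory of
    # 32-bit cells, one accumulator, and a few elementary operations.
    # Everything below this level (voltages, circuits) is ignored.
    # Sizes and names are illustrative assumptions only.

    MEMORY_SIZE = 1_000_000        # number of cells (illustrative)
    WORD_MASK = 0xFFFFFFFF         # each cell holds 32 binary digits

    memory = [0] * MEMORY_SIZE     # the linear array of cells
    accumulator = 0                # only one number is operated on at a time

    def load(address):
        """Copy a number from memory into the accumulator."""
        global accumulator
        accumulator = memory[address]

    def store(address):
        """Copy the accumulator back into a memory cell."""
        memory[address] = accumulator

    def add(address):
        """Add a memory cell to the accumulator, keeping 32 digits."""
        global accumulator
        accumulator = (accumulator + memory[address]) & WORD_MASK

    def multiply(address):
        """Multiply the accumulator by a memory cell, keeping 32 digits."""
        global accumulator
        accumulator = (accumulator * memory[address]) & WORD_MASK

    # Example: compute (3 + 4) * 5 using only these elementary operations.
    memory[0], memory[1], memory[2] = 3, 4, 5
    load(0); add(1); multiply(2); store(3)
    print(memory[3])   # 35

A description at this level says nothing about how the cells or operations are physically realised; that is exactly the sense in which a symbolic-AI theory of a mental process can ignore the brain's hardware.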

For a psychological example of functional architecture, see page 164 of Computer Models of Mind by Boden (CUP 1988; PSY KD:B 063). Also, read the Young article on Production Systems for Modelling Human Cognition in Expert Systems in the Microelectronic Age, edited by Michie (E.U.P. 1980; PSY KH:M 58). Quite apart from the idea of functional architecture, production systems (PSs) are important as developmental models, and there'll probably be a question on them. A tiny sketch of one follows below.
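To give a taste of what a production system looks like, here is a minimal Python sketch, assuming nothing about Young's particular models: a working memory of symbol structures, and condition-action rules applied repeatedly until nothing new can be added. The facts and the rule are invented purely for illustration.

    # A minimal production-system sketch: working memory holds symbol
    # structures (here, tuples), and rules fire when their conditions
    # match, adding new structures to working memory.
    # The facts and rule are invented for illustration only.

    working_memory = {("parent", "Ann", "Bob"), ("parent", "Bob", "Cid")}

    def grandparent_rule(wm):
        """If X is a parent of Y and Y is a parent of Z, add (grandparent, X, Z)."""
        new = set()
        for (p1, x, y1) in wm:
            for (p2, y2, z) in wm:
                if p1 == p2 == "parent" and y1 == y2:
                    new.add(("grandparent", x, z))
        return new - wm   # only genuinely new conclusions

    rules = [grandparent_rule]

    # Recognise-act cycle: keep firing rules until working memory stops changing.
    changed = True
    while changed:
        changed = False
        for rule in rules:
            additions = rule(working_memory)
            if additions:
                working_memory |= additions
                changed = True

    print(working_memory)   # now includes ("grandparent", "Ann", "Cid")

The working memory, the form of the rules, and the recognise-act cycle together are the functional architecture of this little system; the particular facts and rules are the "program" running on it.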

For more on the symbolic-AI approach, see the summary of ``High Church Computationalism'' in section 1.2.2 of Intelligence as Adaptive Behaviour, where Beer quotes Dennett's summary of the shared assumptions behind the symbolic-AI approach. Note: I am teaching you about this approach, but not necessarily agreeing with it all. Note also that there is much debate about what constitutes a symbol, etc.





Jocelyn Ireson-Paine
Wed Feb 14 23:38:20 GMT 1996