The symbol-grounding problem; and a warning



Most of the symbols in PopBeast have names that look like English words: opens, assertion, in. When looking at programs that use such symbols, it's very easy to fall into the trap of thinking that they mean a lot more to the program than they do. When we see the symbol opens, it induces a rich network of concepts and associations. None of these, however, are available to PopBeast: to it, opens gains meaning only by virtue of the way it is manipulated by various parts of the program.

Another way to make this point is that PopBeast would behave exactly the same if I were to systematically replace every name by its equivalent in some other language. I could, for example, replace square by vierkant, key by sleutel, me by mij... As long as I have done this so that names are different in the new language wherever they were different in the original, the program won't suffer. I could of course also use made-up names: square by z, key by zz, me by zzz...

This is an important point, well worth bearing in mind when reading AI programs and books. Most of the examples of symbolic representations you will see use English names, for the simple reason that authors and programmers find it easier to work with names that have familiar connotations. These names mean so much to us that we can easily forget they have no such meaning to their program. Beware of this. As a preventative, I recommend reading Drew McDermott's Artificial Intelligence meets Natural Stupidity, reprinted in Mind Design, edited by John Haugeland (MIT Press, 1981; PSY KH:H029). It assumes some experience with semantic nets, so you may want to come back to it when you've read about those.
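To make the renaming argument concrete, here is a minimal sketch in Python rather than the Pop-11 that PopBeast is written in. The rules, facts and translation table are invented for illustration and are not PopBeast's actual code; the point is only that the matcher treats symbols as opaque tokens, so a consistent renaming leaves its conclusions structurally unchanged.

    # A toy forward-chaining matcher. Symbols are plain strings, and the
    # program's behaviour depends only on how they compare for equality,
    # never on what the strings mean to an English reader.

    def derive(facts, rules):
        """Repeatedly apply every rule whose conditions all hold."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if all(c in derived for c in conditions) and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    # English-looking names...
    rules_en = [(("holds key", "at door"), "opens door")]
    facts_en = {"holds key", "at door"}

    # ...and the same program after a systematic translation.
    mapping = {"holds key": "heeft sleutel", "at door": "bij deur",
               "opens door": "opent deur"}
    rules_nl = [(tuple(mapping[c] for c in cond), mapping[concl])
                for cond, concl in rules_en]
    facts_nl = {mapping[f] for f in facts_en}

    # Both versions derive structurally identical conclusions.
    assert {mapping[s] for s in derive(facts_en, rules_en)} == derive(facts_nl, rules_nl)

The final assertion holds because derive never looks inside a symbol; it only compares symbols for equality, and any consistent renaming preserves equality and inequality alike.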

If the internal symbols don't gain meaning from their names, how do they gain it? PopBeast's brain does not operate in isolation, but perceives and acts in what, to it, is an external reality. How does the internal symbol door come to be connected with what PopBeast perceives when it sees a # in Eden? This is the symbol-grounding problem: how, in an artificial or natural symbol-manipulating system, the internal symbols come to be connected to external referents and actions. In PopBeast, the answer is that although most of the symbols' names are completely arbitrary, this is not true of the images on PopBeast's retina, nor of the motor actions that it obeys. These are determined by Eden's "laws of physics", and somewhere inside PopBeast there is a consistent mapping between them and symbols like door: a mapping set up by me. The symbol-grounding problem becomes more acute in learning systems, which must identify for themselves novel features and properties for which their programmer has not provided pre-defined symbols.
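As an illustration of where such a mapping might live, here is a sketch in Python with invented names (PERCEPT_TO_SYMBOL, perceive, and the * standing for a key are my assumptions; only the # standing for a door comes from the text above). PopBeast's real perceptual machinery differs, but the moral is the same: it is the programmer-fixed pairing between retinal characters and internal symbols, not the spelling of the symbols, that grounds them.

    # The programmer-supplied grounding table: which retinal character
    # corresponds to which internal symbol. The spellings on the right
    # are arbitrary; the pairing with Eden's physics is not.
    PERCEPT_TO_SYMBOL = {
        "#": "door",
        "*": "key",
    }

    def perceive(retina):
        """Turn a grid of characters, a patch of Eden as the agent sees
        it, into assertions such as ("door", "at", (row, col))."""
        assertions = []
        for r, row in enumerate(retina):
            for c, char in enumerate(row):
                symbol = PERCEPT_TO_SYMBOL.get(char)
                if symbol is not None:
                    assertions.append((symbol, "at", (r, c)))
        return assertions

    # A 3-by-3 patch of Eden with a door to the east:
    print(perceive(["...",
                    "..#",
                    "..."]))    # prints [('door', 'at', (1, 2))]

Renaming "door" to "deur" throughout would change nothing, but changing the "#" in the table while Eden still draws doors as # would break the link with Eden's physics. That asymmetry is exactly what grounding amounts to here.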





Jocelyn Ireson-Paine
Thu Feb 15 00:09:05 GMT 1996