Most symbolic AI models do not specify their functional architecture in much detail, nor do they state exactly which features are accidental to the theory. Young, on page 43 of Production Systems for Modelling Human Cognition, discusses this further.
One example of such a theory is in Dynamic Memory by Roger Schank (CUP 1982; PSY BG:S 299). Definitely read this. You can understand the first two chapters without any technical knowledge at all, and they will give you a good idea of symbolic AI applied to reminding and learning. I shall cover some of this work in my second AI lecture.
How should AI papers be evaluated? What's the worth of these models? Chapman argues that the objective of AI is to suggest approaches: to show that, in principle, a given kind of architecture can be made to solve a given class of problems, and then to leave psychologists to refine the account. See pages 213-218 of his Vision, Instruction, and Action (PSY KH:C 036). This point applies to all AI as psychology, not merely symbolic AI.