This was one of the first AI programs ever implemented. It's interesting because it was designed in part by Allen Newell, a leading exponent of the symbol-processing approach to cognitive modelling. His work culminated in 1990 with the publication of a book on unified theories of cognition. The paper cited below is one of the first statements of his views on AI as a theory of cognition. So please read [Elements of a Theory...]. Pay most attention to Newell's views of what such a theory should do (pp 151-153) and of its relation to other schools of psychology (pp 163-166).
How does the program actually work? The important point is that you start with a set of axioms of logic from which a theorem is to be proven. These axioms can be thought of as ``sentences'' describing what you originally know. A theorem is also a sentence, which has to be derived from the axioms.
As well as the axioms, you have a number of rules of inference which convert old sentences into new ones. There is a very close analogy to a one-person board game. The axioms are the starting positions; rules of inference are the permitted moves; proving a theorem is equivalent to asking whether a given state of the board is reachable by a sequence of legal moves from one of the starting positions.
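The analogy can be made concrete with a tiny string-rewriting system. The symbols and the two rules below are invented for illustration (they are not the logic the program actually used): sentences are strings, the axioms are the starting positions, and each rule of inference is a legal move that turns an old sentence into new ones.

```python
# A minimal sketch of the "one-person game" view of theorem proving.
# The formal system here is a made-up toy: 'p' and 'q' are arbitrary
# symbols, and the two rules are hypothetical.

AXIOMS = {"pq"}          # the starting positions of the game

def rules(sentence):
    """Permitted moves: each rule of inference maps one sentence
    to new sentences."""
    yield sentence + "q"     # toy rule 1: append a 'q'
    yield "p" + sentence     # toy rule 2: prepend a 'p'

# A theorem is provable iff its string is reachable from some axiom
# by a sequence of rule applications (legal moves).
```

In this framing, a proof is just the record of which moves were applied, in which order, starting from which axiom.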
In principle, one could exhaustively generate all possible sentences from the axioms, stopping when one of them matched the theorem to be proven. This is what the authors refer to as the ``British Museum algorithm''. Because this algorithm eventually generates every sentence derivable from the axioms, it is guaranteed (given unlimited time) to find a sentence that matches the theorem, should a proof exist.
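The British Museum algorithm amounts to breadth-first enumeration of everything derivable. A sketch, reusing the same invented toy system (axiom ``pq'' and two hypothetical rewriting rules), with an artificial cap standing in for "unlimited time":

```python
from collections import deque

AXIOMS = {"pq"}

def rules(s):
    # Hypothetical rules of inference for the toy system.
    yield s + "q"
    yield "p" + s

def british_museum(theorem, limit=10000):
    """Exhaustively generate derivable sentences breadth-first,
    stopping when one matches the theorem. The limit only exists
    so the sketch terminates on unprovable inputs."""
    frontier = deque(AXIOMS)
    seen = set(AXIOMS)
    while frontier and len(seen) < limit:
        s = frontier.popleft()
        if s == theorem:
            return True
        for new in rules(s):
            if new not in seen:
                seen.add(new)
                frontier.append(new)
    return False
```

Breadth-first order matters: it guarantees that every derivable sentence is eventually reached, which is exactly the completeness property the text describes.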
In practice, this is impossible on any present or future computer, because of the combinatorial explosion in the number of possible moves. So one needs heuristics: methods which save time by looking only at plausible moves. In the jargon of AI, they prune the search tree. However, unless the heuristics are to be as slow as the original exhaustive computation, they can only make a guess at plausibility, relying on hopefully informative but sometimes deceptive surface features. Hence they do not guarantee a solution.
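One simple way to realise such a heuristic is a beam search: at each step, keep only the few candidate sentences whose surface features most resemble the goal, and discard the rest. The scoring function below (shared prefix with the theorem) is an invented example of a cheap but fallible plausibility guess, again over the same toy system:

```python
AXIOMS = {"pq"}

def rules(s):
    # Hypothetical rules of inference for the toy system.
    yield s + "q"
    yield "p" + s

def heuristic_search(theorem, beam_width=2, max_depth=20):
    """Beam search: at each depth keep only the beam_width sentences
    that look most like the theorem on the surface. Much faster than
    exhaustive generation, but the pruning is a guess: it can discard
    the very sentence a proof needs, so no solution is guaranteed."""
    def score(s):
        # Surface plausibility: length of the common prefix with the goal.
        return sum(1 for a, b in zip(s, theorem) if a == b)
    frontier = list(AXIOMS)
    for _ in range(max_depth):
        if theorem in frontier:
            return True
        children = {c for s in frontier for c in rules(s)}
        frontier = sorted(children, key=score, reverse=True)[:beam_width]
    return theorem in frontier
```

The search tree is pruned to at most beam_width branches per level, which is what buys the speed; the price is that a deceptive surface score can send the search down the wrong branches.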