Another reason for the lack of success may be the unreliability of the early simulations. See page 1390 of [Machines which learn].
Note the references to cybernetics: in the first paper, these include feedback, homeostasis, and desirable states (goals) - e.g. being near, but not too near, a light. A lot of early AI was inspired by such ideas. Page 44 contains what may be the first example of AI hype.
The second paper, [*]a machine that learns, raises the question of how a learning system should detect significant correlations - and discard what it has learnt if they cease to be significant.
Newell has worked on chess - he makes some suggestions for a logic-based chess program in his 1957 article The Chess Machine in . Pages 73-76 are a very clear statement of the search problems involved. In Chess-playing programs and the problem of complexity, from [Computers and Thought 1963], Newell, Shaw and Simon compare the performance of the then-existing chess programs. This article has a good explanation of minimaxing, which is also covered in [*]learning and problem solving.
See page 212 of [Machines Who Think] for some informal comments by Newell and Simon on the spark of intuition that produced GPS. Until Crevier's book [*]ai: the tumultuous appeared, this was the only general history of AI ever written. It contains a number of interviews with Newell, Minsky, and other founders of AI.
It was realised early on that one of the key problems was interactions between subgoals. For example, if your goal is to make the table look nice and set it for tea, then polishing it will disturb the goal of setting it, since you'll have to wait for the polish to dry. The first program to tackle interacting goals was Hacker. Essentially, it planned as though the goals were independent, and then tried to correct the faults in the resulting plans. See pages 286-297 and 360-366 of [AI and Natural Man].
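The plan-then-patch strategy can be illustrated with a toy sketch - this is not Hacker's actual mechanism (which worked on blocks-world programs), and the goal names and the critic rule here are invented for the example. Each goal is expanded naively as if independent; a critic then scans the combined plan for a known interaction and repairs it.

```python
def plan(goal):
    # Hypothetical one-step expansions, one per goal, chosen
    # as if no other goal existed.
    expansions = {
        "nice table": ["polish table"],
        "table set for tea": ["lay cups and plates"],
    }
    return expansions[goal]

def critic(actions):
    # Known interaction: laying things on wet polish spoils both goals,
    # so a drying step must separate polishing from laying.
    if "polish table" in actions and "lay cups and plates" in actions:
        fixed = [a for a in actions if a != "lay cups and plates"]
        fixed.insert(fixed.index("polish table") + 1, "wait for polish to dry")
        fixed.append("lay cups and plates")
        return fixed
    return actions

naive = plan("nice table") + plan("table set for tea")
print(critic(naive))
# ['polish table', 'wait for polish to dry', 'lay cups and plates']
```

The point of the example is the division of labour: the planner stays simple because it ignores interactions, and all the knowledge about how subgoals clash lives in the critics that debug the naive plan afterwards.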