Most of these articles have referred to the idea that consciousness arises from the need to monitor one's mental tasks, allocating appropriate resources to each. This is related to the idea that consciousness arises whenever an organism creates a mental model of its own operation. Obviously both ideas are relevant to the topic of limited resources in planning; and they would be of great philosophical interest if they had something compelling to say about how consciousness arose. I don't believe they do; but read them anyway, because I may be wrong and it's an important topic.
The easiest-to-find work on this is the final chapter of Mental Models by Johnson-Laird (1983). See also A computational analysis of consciousness by the same author in Cognition and Brain Theory, volume 6, page 499. There's a photocopy of that in Psychology.
You should also look at the references on Miles Glen's and Peter McLeod's reading lists for Consciousness (in the H.I.P. reading lists file, by the catalogue in the Psychology library).
Why am I unconvinced that self-monitoring entails consciousness? For one reason, because I know a number of programs that monitor themselves, and I don't believe they're conscious! (I'm using one at the moment as I edit.) The best-known class of such programs is operating systems: the programs that allocate time among the many users sharing a big computer like the VAX. These were probably the source of the idea of self-monitoring in the works above. To see what such programs do, and, by analogy, what it's thought the mind does, read the article on Time-sharing systems by Fano and Corbato (especially pages 133--135) in Scientific American for September 1966. You may also find the article by Samuel in New Scientist for May 27th 1965 (volume 26, no 445) helpful.
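To make the analogy concrete, here is a minimal sketch (my own illustration, not taken from any of the works cited) of the kind of self-monitoring a time-sharing system performs: a round-robin scheduler that hands each task a fixed slice of processor time and keeps a record of how much each has consumed. The task names and slice size are invented for the example.

```python
# Hypothetical sketch of round-robin time-sharing with self-monitoring.
# Each task gets at most `slice_steps` units of "time" per turn; the
# scheduler logs which task ran and how much time each has used so far.
from collections import deque

def round_robin(tasks, slice_steps=2):
    """tasks: dict of name -> steps of work remaining.
    Returns (run_order, usage): the log of slices and total use per task."""
    queue = deque(tasks.items())
    run_order = []                          # the monitor's log of slices
    usage = {name: 0 for name in tasks}     # self-monitoring: time consumed
    while queue:
        name, remaining = queue.popleft()
        ran = min(slice_steps, remaining)   # run for at most one slice
        usage[name] += ran
        run_order.append(name)
        if remaining - ran > 0:             # unfinished: back of the queue
            queue.append((name, remaining - ran))
    return run_order, usage

order, usage = round_robin({"editor": 3, "compiler": 5, "mail": 1})
print(order)   # ['editor', 'compiler', 'mail', 'editor', 'compiler', 'compiler']
print(usage)   # {'editor': 3, 'compiler': 5, 'mail': 1}
```

The point of the sketch is that the program demonstrably models and monitors its own operation, yet nobody would call it conscious; that is the gap in the self-monitoring argument.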