In the first lecture, I quoted from Computers and Thought on AI as productive laziness. In a sense, the difference between shallow and deep reasoning is one of laziness. Given a house-wiring problem, you can either solve it from first principles (deep reasoning), or you can rely on ad-hoc rules of thumb gained from previous experience (shallow reasoning). The first is more certain to be right (assuming you don't get tired and make mistakes); the second may be a lot faster.
Most systems have the rules but not the principles; deep reasoning systems have the principles but can't condense them into rules. The ideal would be a combined system. When set a problem, it first tries the rules it's learnt. If they fail, it resorts to fundamentals. Having done so, it encodes the answer to its problem as new rules so they can be used next time round. And when the system is idle, it spends its time preparing itself by solving imaginary problems and encoding the solutions as rules.
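The architecture just described — try learnt rules, fall back to first principles, cache the answer as a new rule, and practise on imaginary problems when idle — can be sketched in a few lines. This is only an illustration of the control structure, not any particular system: the class name, the `deep_solve` function, and the string-keyed rule store are all invented for the example.

```python
from typing import Callable, Dict, Iterable

class CombinedSolver:
    """Hypothetical two-level reasoner: shallow rules first, deep reasoning as fallback."""

    def __init__(self, deep_solve: Callable[[str], str]):
        self.deep_solve = deep_solve      # slow first-principles reasoner (supplied by caller)
        self.rules: Dict[str, str] = {}   # shallow rules learnt from previous problems

    def solve(self, problem: str) -> str:
        # 1. Try the rules learnt from experience (shallow, fast).
        if problem in self.rules:
            return self.rules[problem]
        # 2. They failed, so resort to fundamentals (deep, slow).
        answer = self.deep_solve(problem)
        # 3. Encode the solution as a new rule for next time round.
        self.rules[problem] = answer
        return answer

    def practise(self, imaginary_problems: Iterable[str]) -> None:
        # While idle, solve imaginary problems and cache their solutions as rules.
        for p in imaginary_problems:
            self.solve(p)
```

In this toy form the rule store is just a lookup table keyed on the exact problem, so it only pays off when the same problem recurs; a real combined system would generalise each solution into a rule covering a class of problems, which is where the difficulty lies.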
There has been some research on these combined systems. But given the difficulty of deep systems by themselves, it's not surprising that there are few results.