You can view a backward chaining system as having an initial goal: to answer the question set by its user. In answering this question, the system may spawn subgoals: to answer further questions posed by the conditions of rules. In fact, such systems do represent their goals explicitly: they have to, so that they can return up a level.
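The idea can be sketched as a minimal backward chainer. The rule format and goal names here are hypothetical, invented purely for illustration; the point is that the chain of active goals is represented explicitly (here, on the recursion stack), which is what lets the system return up a level when a subgoal succeeds or fails.

```python
# Hypothetical rule base: each conclusion maps to lists of conditions
# (alternative ways of establishing it).
RULES = {
    "animal_is_cat": [["has_fur", "says_meow"]],
    "has_fur": [["is_mammal"]],
}
FACTS = {"is_mammal", "says_meow"}

def prove(goal):
    """Try to prove `goal` by backward chaining.

    Each condition of a matching rule is spawned as a subgoal and
    proved in turn; the call stack holds the current goal chain
    explicitly, so the system can return up a level afterwards.
    """
    if goal in FACTS:
        return True
    for conditions in RULES.get(goal, []):
        # Every condition becomes a subgoal of the current goal.
        if all(prove(c) for c in conditions):
            return True
    return False

print(prove("animal_is_cat"))  # → True
```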
However, most systems do not reason about their goals; e.g. they cannot ask ``should I abandon attempting to prove this because it is taking too long?''. Amongst those systems that do are blackboard systems. Instead of the simple last-posed, first-answered strategy seen above, blackboard systems have an ``intelligent'' strategy layer (or control layer) which attempts to reason about the best order in which to tackle goals, and the best rules to use.
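A control layer of this kind can be sketched as follows. This is an illustrative toy, not any particular blackboard architecture: pending goals sit on an agenda, a scoring function (standing in for the strategy layer's reasoning) picks the most promising goal rather than the last-posed one, and a work budget lets the system give up on a line of reasoning that is taking too long.

```python
def control_loop(goals, score, attempt, budget=10):
    """Toy blackboard-style control loop (illustrative only).

    `score(goal)` rates how promising a goal is; `attempt(goal)`
    returns (solved, subgoals). The loop repeatedly attempts the
    highest-scoring goal on the agenda, and abandons the whole
    effort once `budget` attempts have been spent.
    """
    agenda = list(goals)
    for _ in range(budget):
        if not agenda:
            return None
        # Strategy layer: pick the most promising goal, not the last-posed one.
        best = max(agenda, key=score)
        agenda.remove(best)
        solved, subgoals = attempt(best)
        if solved:
            return best
        agenda.extend(subgoals)
    return None  # budget exhausted: the system abandons the attempt

# Toy attempt function: goal "g2" is directly solvable; "g1" spawns "g2".
def attempt(goal):
    return (goal == "g2", ["g2"] if goal == "g1" else [])

print(control_loop(["g1"], score=len, attempt=attempt))  # → g2
```

With `budget=1` the same call returns `None`: the loop spends its only step on ``g1'' and then gives up, which is exactly the ``taking too long'' judgement the control layer is meant to make.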
See also Neomycin. It does not reason about its goals in the way blackboard systems do, but there is a sense in which its meta-rules encode knowledge about goals.