Robots intended to operate in unstructured environments must actively monitor and mitigate risk in order to remain functional over long deployments. Such robots face sensor noise, occlusion, and dynamic surroundings. To succeed in these domains, agents must be able to reason about every outcome their actions (or inaction, as the case may be) might produce.
Computationally, this is achieved by maintaining a probabilistic belief distribution over the states the agent may occupy. This belief summarizes the history of observations the agent has sensed over time. Using probabilistic models of the robot's affordances, the agent can plan a sequence of actions expected to yield posterior belief distributions that are concentrated on the set of goal states.
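The belief-maintenance step described above can be sketched as a discrete Bayes filter: predict the belief forward through an action model, then correct it with the observation likelihood. The transition and sensor models below (a toy "door" domain) are illustrative assumptions, not drawn from any particular system.

```python
def bayes_update(belief, action, observation, transition, sensor):
    """One belief-update step: predict with the action model,
    then correct with the observation likelihood and normalize."""
    states = list(belief)
    # Prediction: propagate the belief through the stochastic transition model.
    predicted = {
        s2: sum(transition(s, action, s2) * belief[s] for s in states)
        for s2 in states
    }
    # Correction: weight each state by the observation likelihood, then normalize.
    unnorm = {s: sensor(s, observation) * predicted[s] for s in states}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Toy two-state domain: a door that may be "open" or "closed".
def transition(s, a, s2):
    if a == "push":
        # Pushing usually opens a closed door; an open door stays open.
        return {("closed", "open"): 0.8, ("closed", "closed"): 0.2,
                ("open", "open"): 1.0, ("open", "closed"): 0.0}[(s, s2)]
    return 1.0 if s == s2 else 0.0  # "wait" leaves the state unchanged

def sensor(s, z):
    # Noisy door sensor: reads the true state with probability 0.9.
    return 0.9 if z == s else 0.1

belief = {"open": 0.5, "closed": 0.5}
belief = bayes_update(belief, "push", "open", transition, sensor)
```

After pushing and observing "open", the belief concentrates sharply on the door being open, illustrating how action and observation models jointly condense the belief.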
Furthermore, such planners can be arranged hierarchically, reasoning about the task at multiple levels of abstraction and reusing existing planning structure to support higher-level decision making.
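One way the hierarchical arrangement might look is sketched below: a high-level planner decomposes a task into subgoals and delegates each one to the same low-level belief-space planner. All names, the task decomposition, and the stubbed low-level planner are hypothetical, included only to show the reuse pattern.

```python
def low_level_plan(belief, subgoal):
    """Stub for a belief-space planner: would return primitive actions
    that concentrate the belief on states satisfying `subgoal`."""
    return [f"act_towards({subgoal})"]

def high_level_plan(belief, task):
    """Decompose a task into abstract subgoals, reusing the same
    low-level planner to achieve each one in sequence."""
    decomposition = {
        "fetch_cup": ["locate_cup", "grasp_cup", "deliver_cup"],
    }
    plan = []
    for subgoal in decomposition[task]:
        plan.extend(low_level_plan(belief, subgoal))
    return plan

plan = high_level_plan({"s0": 1.0}, "fetch_cup")
```

The key design choice is that the low-level planner's interface (belief in, actions out) is identical at every subgoal, so the higher level treats it as a reusable primitive.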