Abstraction and Approximate Decision Theoretic Planning

Richard Dearden and Craig Boutilier
Department of Computer Science
University of British Columbia
Vancouver, BC, CANADA, V6T 1Z4
email: dearden@cs.ubc.ca, cebly@cs.ubc.ca

Abstract
Markov decision processes (MDPs) have recently been proposed as useful conceptual models for understanding decision-theoretic planning. However, the utility of the associated computational methods remains open to question: most algorithms for computing optimal policies require explicit enumeration of the state space of the planning problem. We propose an abstraction technique for MDPs that allows approximately optimal solutions to be computed quickly. Abstractions are generated automatically, using an intensional representation of the planning problem (probabilistic STRIPS rules) to determine the most relevant problem features; a reduced problem based on these features is then solved optimally. The key features of our method are: abstractions can be generated quickly; the abstract solution can be applied directly to the original problem; and the loss of optimality can be bounded. We also describe methods by which the abstract solution can be viewed as a set of default reactions that can be improved incrementally, and used as a heuristic for search-based planning or other MDP methods. Finally, we discuss certain difficulties that point toward other forms of aggregation for MDPs.
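To make the idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a relevance analysis has already selected a subset of the state features (hand-picked here as RELEVANT), solves the resulting abstract MDP by value iteration, and applies the abstract policy directly to concrete states. All feature names, actions, and the transition/reward models below are hypothetical stand-ins for what the paper would induce from probabilistic STRIPS rules.

import itertools

# Hypothetical toy domain: a state assigns a truth value to each feature.
FEATURES = ["loc_office", "has_coffee", "raining", "umbrella"]
RELEVANT = ["loc_office", "has_coffee"]   # assumed output of relevance analysis
ACTIONS = ["move", "get_coffee"]
GAMMA = 0.9

def abstract(state):
    """Project a concrete state onto the relevant features."""
    return tuple(state[f] for f in RELEVANT)

def transitions(astate, action):
    """Return [(next_abstract_state, probability), ...].
    Hand-coded stand-in for a model induced from probabilistic STRIPS rules."""
    office, coffee = astate
    if action == "move":
        # Moving toggles location with probability 0.9.
        return [((not office, coffee), 0.9), ((office, coffee), 0.1)]
    # get_coffee succeeds (prob. 0.8) only when away from the office.
    if not office:
        return [((office, True), 0.8), ((office, coffee), 0.2)]
    return [((office, coffee), 1.0)]

def reward(astate):
    office, coffee = astate
    return 1.0 if (office and coffee) else 0.0

# Value iteration over the (small) abstract state space.
abstract_states = list(itertools.product([False, True], repeat=len(RELEVANT)))
V = {s: 0.0 for s in abstract_states}
for _ in range(100):
    V = {s: reward(s) + GAMMA * max(
             sum(p * V[s2] for s2, p in transitions(s, a)) for a in ACTIONS)
         for s in abstract_states}

# Greedy abstract policy; it applies directly to any concrete state.
policy = {s: max(ACTIONS,
                 key=lambda a: sum(p * V[s2] for s2, p in transitions(s, a)))
          for s in abstract_states}

concrete = {"loc_office": False, "has_coffee": False,
            "raining": True, "umbrella": False}
print(policy[abstract(concrete)])   # -> "get_coffee"

Because the irrelevant features (raining, umbrella) never enter the abstract model, value iteration runs over 4 abstract states rather than 16 concrete ones; the price is the bounded loss of optimality discussed in the paper.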

Artificial Intelligence 89(1-2):219-283 (1997)
