Hierarchical Solution of Markov Decision Processes using Macro-actions

Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, Thomas Dean
Computer Science Department, Box 1910
Brown University
Providence, RI 02912-1210, U.S.A.
email: {milos,nm,lpk,tld}@cs.brown.edu

Craig Boutilier
Department of Computer Science
University of British Columbia
Vancouver, BC, CANADA, V6T 1Z4
email: cebly@cs.ubc.ca

Abstract
We investigate the use of temporally abstract actions, or macro-actions, in the solution of Markov decision processes. Unlike current models that combine primitive actions and macro-actions while leaving the state space unchanged, we propose a hierarchical model (using an abstract MDP) that works with macro-actions only and that significantly reduces the size of the state space. This is achieved by treating macro-actions as local policies that act in certain regions of state space, and by restricting the states of the abstract MDP to those at the boundaries of regions. The abstract MDP approximates the original and can be solved more efficiently. We discuss several ways in which macro-actions can be generated to ensure good solution quality. Finally, we consider ways in which macro-actions can be reused to solve multiple, related MDPs, and we show that this can justify the computational overhead of macro-action generation.
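The abstract MDP described above can be illustrated with a minimal sketch. Here each macro-action available at a boundary state is summarized by its expected discounted reward and its discounted transition probabilities to other boundary states (the probabilities sum to less than one because the discount factor is folded in); value iteration is then run over the boundary states only. The state names, rewards, and probabilities below are hypothetical, not taken from the paper.

```python
# Sketch: value iteration over an abstract MDP whose states are the
# region-boundary ("peripheral") states and whose actions are macro-actions.
# Each macro-action at state s is a pair (reward, trans), where trans maps
# boundary states to discounted transition probabilities (summing to < 1,
# since the discount factor is absorbed into them).

def solve_abstract_mdp(states, macros, eps=1e-8):
    """Return optimal values for the abstract MDP by value iteration.

    macros[s] is a list of (expected_reward, {next_state: discounted_prob})
    pairs, one per macro-action applicable at boundary state s.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(r + sum(p * V[t] for t, p in trans.items())
                   for r, trans in macros[s])
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Illustrative two-state example (all numbers hypothetical):
states = ["b1", "b2"]
macros = {
    "b1": [(1.0, {"b2": 0.9}),   # macro crossing to region whose exit is b2
           (0.5, {"b1": 0.9})],  # macro returning to b1
    "b2": [(0.0, {"b1": 0.9})],
}
V = solve_abstract_mdp(states, macros)
```

Because the iteration runs only over boundary states, its cost depends on the number of regions and their peripheries rather than on the full state space, which is the source of the efficiency gain claimed in the abstract.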

To appear, UAI-98

Return to List of Papers