Planning and acting in partially observable stochastic domains

Bibliographic Details
Published in: Artificial Intelligence, Vol. 101, No. 1, pp. 99-134
Main Authors: Kaelbling, Leslie Pack; Littman, Michael L.; Cassandra, Anthony R.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.05.1998

Summary: In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs off-line and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to POMDPs, and some possibilities for finding approximate solutions.
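The POMDP approach summarized above rests on maintaining a belief state, a probability distribution over hidden states, updated by Bayes' rule after each action and observation. The sketch below illustrates that standard update; the two-state model, its probabilities, and all names are hypothetical examples, not taken from the paper.

```python
def belief_update(b, a, o, T, O):
    """Bayes-filter belief update after taking action a and observing o.

    b: dict mapping state -> probability (current belief)
    T: dict mapping (s, a) -> {s': P(s' | s, a)}  (transition model)
    O: dict mapping (s', a) -> {o: P(o | s', a)}  (observation model)
    Returns the normalized posterior belief b'(s') ∝ O(o|s',a) * Σ_s T(s'|s,a) b(s).
    """
    new_b = {}
    for s_next in b:
        # Predicted probability of landing in s_next under action a
        pred = sum(T[(s, a)].get(s_next, 0.0) * b[s] for s in b)
        # Weight the prediction by how likely the observation is from s_next
        new_b[s_next] = O[(s_next, a)].get(o, 0.0) * pred
    norm = sum(new_b.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return {s: p / norm for s, p in new_b.items()}

# Tiny illustrative model: the agent is in one of two rooms and listens.
T = {("left", "listen"): {"left": 1.0},
     ("right", "listen"): {"right": 1.0}}
O = {("left", "listen"): {"hear-left": 0.85, "hear-right": 0.15},
     ("right", "listen"): {"hear-left": 0.15, "hear-right": 0.85}}

b0 = {"left": 0.5, "right": 0.5}
b1 = belief_update(b0, "listen", "hear-left", T, O)
print(b1)  # belief shifts toward "left": {'left': 0.85, 'right': 0.15}
```

An off-line POMDP solver of the kind the paper describes would compute a policy over this continuous belief space; the update above is only the bookkeeping step that any such controller executes at run time.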
ISSN: 0004-3702, 1872-7921
DOI: 10.1016/S0004-3702(98)00023-X