Entropy Maximization for Partially Observable Markov Decision Processes
Format: Journal Article
Language: English
Published: 16.05.2021
DOI: 10.48550/arxiv.2105.07490
Summary: We study the problem of synthesizing a controller that maximizes the entropy of a partially observable Markov decision process (POMDP) subject to a constraint on the expected total reward. Such a controller minimizes the predictability of an agent's trajectories to an outside observer while guaranteeing the completion of a task expressed by a reward function. We first prove that an agent with partial observations can achieve an entropy at most as high as that of an agent with perfect observations. Then, focusing on finite-state controllers (FSCs) with deterministic memory transitions, we show that the maximum entropy of a POMDP is lower bounded by the maximum entropy of the parametric Markov chain (pMC) induced by such FSCs. This relationship allows us to recast the entropy maximization problem as a so-called parameter synthesis problem for the induced pMC. We then present an algorithm to synthesize an FSC that locally maximizes the entropy of a POMDP over FSCs with the same number of memory states. In numerical examples, we illustrate the relationship between the maximum entropy, the number of memory states in the FSC, and the expected reward.
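As a concrete illustration of the induced-chain construction described in the summary, the sketch below builds the Markov chain that an FSC with deterministic memory transitions induces on a POMDP, and evaluates its entropy via the standard identity that the entropy of an absorbing Markov chain equals the expected visit counts of its transient states weighted by the local entropies of their outgoing distributions. This is a minimal sketch under simplifying assumptions, not the paper's synthesis algorithm: observations are taken to be deterministic per state, and the function names (`induced_chain`, `chain_entropy`) and the toy POMDP at the bottom are hypothetical.

```python
import numpy as np

def local_entropy(p):
    # Shannon entropy (in bits) of a probability vector, with 0 log 0 := 0.
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def induced_chain(T, obs, policy, memory_next):
    """Markov chain induced on a POMDP by a finite-state controller.

    T[a, s, t]        POMDP transition probabilities P(t | s, a)
    obs[s]            observation emitted in state s (deterministic here,
                      an assumption made to keep the example small)
    policy[m, z, a]   probability the FSC picks action a in memory state m
                      after observing z
    memory_next[m, z] deterministic memory update, as in the FSCs above

    Product states (s, m) are flattened to the index s * n_mem + m.
    """
    n_act, n_st, _ = T.shape
    n_mem = policy.shape[0]
    P = np.zeros((n_st * n_mem, n_st * n_mem))
    for s in range(n_st):
        for m in range(n_mem):
            z = obs[s]
            m2 = memory_next[m, z]
            for a in range(n_act):
                # All successors keep the (deterministic) next memory state m2.
                P[s * n_mem + m, np.arange(n_st) * n_mem + m2] += policy[m, z, a] * T[a, s]
    return P

def chain_entropy(P, init, absorbing):
    # Entropy of an absorbing chain: expected visit counts of the transient
    # states (from init), weighted by each state's local transition entropy.
    trans = [x for x in range(len(P)) if x not in absorbing]
    Q = P[np.ix_(trans, trans)]
    visits = np.linalg.solve((np.eye(len(trans)) - Q).T, init[trans])
    return sum(v * local_entropy(P[x]) for v, x in zip(visits, trans))

# Hypothetical toy POMDP: two transient states that emit the same observation
# (hence are indistinguishable to the controller) and an absorbing goal.
T = np.zeros((2, 3, 3))
T[0, 0, 1] = T[1, 0, 2] = 1.0   # state 0: action 0 -> state 1, action 1 -> goal
T[0, 1, 2] = T[1, 1, 2] = 1.0   # state 1: both actions -> goal
T[:, 2, 2] = 1.0                # goal is absorbing
obs = np.array([0, 0, 1])
policy = np.full((1, 2, 2), 0.5)            # uniform over both actions
memory_next = np.zeros((1, 2), dtype=int)   # a single memory state
P = induced_chain(T, obs, policy, memory_next)
print(chain_entropy(P, np.array([1.0, 0.0, 0.0]), absorbing={2}))  # 1.0 bit
```

In this toy instance only state 0 offers a genuine choice, so the uniform one-memory-state FSC attains exactly one bit of trajectory entropy; fixing the FSC's action probabilities as free parameters in `policy` is what turns the induced chain into the pMC whose parameter synthesis problem the paper solves.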