Simulation Based Algorithms for Markov Decision Processes and Multi-Action Restless Bandits
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 25.07.2020 |
Summary: | We consider multi-dimensional Markov decision processes and formulate a
long-term discounted reward optimization problem. Two simulation-based
algorithms, the Monte Carlo rollout policy and the parallel rollout policy,
are studied, and various properties of these policies are discussed. We next
consider a restless multi-armed bandit (RMAB) with a multi-dimensional state
space and a multi-action bandit model. A standard RMAB has two actions for
each arm, whereas a multi-action RMAB has more than two actions per arm. A
popular approach for RMABs is the Whittle index based heuristic policy;
indexability is a prerequisite for using an index based policy, and on this
basis an RMAB is classified as either indexable or non-indexable. Our interest
is in the study of the Monte-Carlo rollout policy for both indexable and
non-indexable restless bandits. We first analyze a standard indexable RMAB
(two-action model) and discuss an index based policy approach. We present an
approximate index computation algorithm using the Monte-Carlo rollout policy,
and show its convergence using a two-timescale stochastic approximation
scheme. Later, we analyze the multi-action indexable RMAB and discuss the
index based policy approach. We also study non-indexable RMABs, both standard
and multi-action, using the Monte-Carlo rollout policy (illustrative sketches
of these ideas follow the record below). |
---|---|
DOI: | 10.48550/arxiv.2007.12933 |
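The abstract centers on the Monte Carlo rollout policy for discounted MDPs. Below is a minimal, illustrative sketch of that general technique for a small finite MDP; the array encoding (`P`, `R`), the fixed `base_policy`, and all parameter values are assumptions made for this example, not details taken from the paper.

```python
# Illustrative Monte Carlo rollout for a finite discounted MDP.
# Assumed (hypothetical) encoding: P[s, a] is a probability vector over
# next states, R[s, a] is the immediate reward, and base_policy[s] is
# the action of a fixed heuristic policy that the rollout simulates.
import numpy as np

def mc_rollout_action(state, P, R, base_policy, gamma=0.95,
                      horizon=30, num_traj=100, rng=None):
    """One-step lookahead over actions; each action is scored by the
    average discounted return of num_traj simulated trajectories that
    follow base_policy for horizon steps afterwards."""
    rng = np.random.default_rng() if rng is None else rng
    n_states, n_actions = R.shape
    q_est = np.zeros(n_actions)
    for a in range(n_actions):
        total = 0.0
        for _ in range(num_traj):
            ret, disc = R[state, a], gamma
            s = rng.choice(n_states, p=P[state, a])  # take action a once
            for _ in range(horizon):                 # then roll out base policy
                b = base_policy[s]
                ret += disc * R[s, b]
                s = rng.choice(n_states, p=P[s, b])
                disc *= gamma
            total += ret
        q_est[a] = total / num_traj
    return int(np.argmax(q_est))
```

Acting greedily with respect to these sampled Q-estimates in every state is the rollout policy; the standard appeal of this scheme is that, under mild conditions, it performs at least as well as the base policy it simulates.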
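The abstract also mentions an approximate index computation whose convergence is argued via a two-timescale stochastic approximation scheme. The sketch below shows the generic two-timescale pattern for a single arm: Q-value estimates move with a faster-decaying step size, while the subsidy (the Whittle index candidate) moves on a slower timescale until the active and passive actions become indifferent at the target state. The model encoding and step-size schedules are assumptions for illustration, and the fast update here is a sampled Bellman (Q-learning-style) step standing in for the paper's rollout-based estimates; this is not the paper's exact algorithm.

```python
# Generic two-timescale sketch for approximating a Whittle index at a
# target state s_star of a single arm. Hypothetical encoding: P[s, a]
# is the next-state distribution (a=0 passive, a=1 active), r[s] is the
# active-action reward, and the passive action earns the subsidy lam.
import numpy as np

def approx_whittle_index(s_star, P, r, gamma=0.9, iters=20000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    q = np.zeros((n, 2))   # fast iterate: Q-values under current subsidy
    lam = 0.0              # slow iterate: subsidy / index candidate
    for k in range(1, iters + 1):
        alpha = (1 + k) ** -0.6   # fast step size
        beta = (1 + k) ** -1.0    # slow step size: beta/alpha -> 0
        s = int(rng.integers(n))
        for a in (0, 1):
            rew = lam if a == 0 else r[s]        # passive pays the subsidy
            s2 = rng.choice(n, p=P[s, a])
            target = rew + gamma * q[s2].max()   # sampled Bellman target
            q[s, a] += alpha * (target - q[s, a])
        # Slow update: raise the subsidy while the active action still
        # dominates at s_star; at equilibrium the two are indifferent.
        lam += beta * (q[s_star, 1] - q[s_star, 0])
    return lam
```

The property being exploited is the defining one: the Whittle index of a state is the subsidy at which the passive and active actions are equally attractive there, so driving that Q-value gap to zero on the slow timescale yields an index estimate.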