Approximate Solutions To Constrained Risk-Sensitive Markov Decision Processes
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 29.09.2022 |
Summary: This paper considers the problem of finding near-optimal Markovian randomized (MR) policies for finite-state-action, infinite-horizon, constrained risk-sensitive Markov decision processes (CRSMDPs). Constraints are in the form of standard expected discounted cost functions as well as expected risk-sensitive discounted cost functions over finite and infinite horizons. The main contribution is to show that the problem possesses a solution if it is feasible, and to provide two methods for finding an approximate solution in the form of an ultimately stationary (US) MR policy. The latter is achieved through two approximating finite-horizon CRSMDPs, which are constructed from the original CRSMDP by time-truncating the original objective and constraint cost functions and suitably perturbing the constraint upper bounds. The first approximation yields a US policy that is $\epsilon$-optimal and feasible for the original problem, while the second yields a near-optimal US policy whose violation of the original constraints is bounded above by a specified $\epsilon$. A key step in the proofs is an appropriate choice of a metric that makes the set of infinite-horizon MR policies and the feasible regions of the three CRSMDPs compact, and the objective and constraint functions continuous. A linear-programming-based formulation for solving the approximating finite-horizon CRSMDPs is also given.
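For orientation, the risk-sensitive discounted costs referred to above are typically of the exponential-utility type. The following is a minimal sketch of that standard formulation, not taken from the paper itself; the risk parameter $\theta$, discount factor $\gamma$, running costs $c$ and $d_i$, and bounds $b_i$ are illustrative assumptions:

```latex
% Exponential-utility (risk-sensitive) discounted cost of a policy \pi,
% with risk parameter \theta > 0 and discount factor \gamma \in (0,1):
J_\theta(\pi) = \frac{1}{\theta}
  \log \mathbb{E}^{\pi}\!\left[ \exp\!\left( \theta \sum_{t=0}^{\infty} \gamma^{t}\, c(x_t, a_t) \right) \right]

% Constrained problem over Markovian randomized policies \Pi_{MR},
% here shown with expected-discounted-cost constraint functionals:
\min_{\pi \in \Pi_{MR}} J_\theta(\pi)
  \quad \text{s.t.} \quad
  \mathbb{E}^{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, d_i(x_t, a_t) \right] \le b_i,
  \qquad i = 1, \dots, m
```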
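The abstract also mentions a linear-programming formulation for the approximating finite-horizon problems. The sketch below is only a generic illustration of that idea for the ordinary expected-cost case, solving a finite-horizon constrained MDP as an LP over time-indexed occupation measures; handling the risk-sensitive constraints requires the paper's own construction. All model data (`P`, `c`, `d`, `b`, `gamma`, `T`) and problem sizes are made up for the example:

```python
# Illustrative sketch: occupation-measure LP for a finite-horizon constrained MDP
# with ordinary expected discounted costs. This is NOT the paper's construction;
# it only shows the generic LP idea. All model data below are assumptions.
import numpy as np
from scipy.optimize import linprog

S, A, T = 3, 2, 5                            # states, actions, horizon (hypothetical)
gamma = 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition kernel
c = rng.uniform(0, 1, size=(S, A))           # objective running cost
d = rng.uniform(0, 1, size=(S, A))           # constraint running cost
b = 2.0                                      # constraint upper bound
mu0 = np.full(S, 1.0 / S)                    # initial state distribution

def idx(t, s, a):                            # flatten (t, s, a) -> variable index
    return (t * S + s) * A + a

n = T * S * A
# Objective: sum_t gamma^t * c(s, a) * y[t, s, a]
obj = np.array([gamma**t * c[s, a] for t in range(T) for s in range(S) for a in range(A)])
# Inequality: discounted constraint cost <= b
A_ub = np.array([[gamma**t * d[s, a] for t in range(T) for s in range(S) for a in range(A)]])
b_ub = np.array([b])
# Equalities: initial distribution and flow conservation between stages
A_eq, b_eq = [], []
for s in range(S):                           # sum_a y[0, s, a] = mu0(s)
    row = np.zeros(n)
    for a in range(A):
        row[idx(0, s, a)] = 1.0
    A_eq.append(row); b_eq.append(mu0[s])
for t in range(T - 1):                       # sum_a y[t+1, s', a] = sum_{s,a} P[s,a,s'] y[t, s, a]
    for s2 in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(t + 1, s2, a)] = 1.0
        for s in range(S):
            for a in range(A):
                row[idx(t, s, a)] -= P[s, a, s2]
        A_eq.append(row); b_eq.append(0.0)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * n, method="highs")
assert res.success, res.message
# Recover a time-dependent Markov randomized policy from the occupation measure
y = res.x.reshape(T, S, A)
policy = y / np.maximum(y.sum(axis=2, keepdims=True), 1e-12)
print("optimal objective:", res.fun)
```

The recovered `policy[t, s, a]` is a finite-horizon MR policy of the general kind that the paper's ultimately stationary policies extend to the infinite horizon.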
DOI: 10.48550/arxiv.2209.14963