Risk-Averse Planning Under Uncertainty


Bibliographic Details
Published in: 2020 American Control Conference (ACC), pp. 3305 - 3312
Main Authors: Ahmadi, Mohamadreza; Ono, Masahiro; Ingham, Michel D.; Murray, Richard M.; Ames, Aaron D.
Format: Conference Proceeding
Language: English
Published: AACC, 01.07.2020

Summary: We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs in general requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (finite-memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and an optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to sub-optimal solutions with lower coherent risk.
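To make the objects in the summary concrete, the sketch below simulates a stochastic finite-state controller (action distribution psi and memory-transition distribution eta) on a hypothetical two-state POMDP and estimates an empirical CVaR of the accumulated cost. All model parameters here (T, O, C, psi, eta, the fault/repair scenario) are invented for illustration, and static CVaR over rollout costs is used only as a stand-in for the dynamic coherent risk measures treated in the paper; the bounded-policy-iteration step that improves the controller via convex optimization is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy POMDP (hypothetical "nominal / faulted" model, not from the paper) ---
# States: 0 = nominal, 1 = faulted. Actions: 0 = continue, 1 = repair.
n_states, n_actions, n_obs = 2, 2, 2
T = np.array([  # T[a, s, s'] -- state transition probabilities
    [[0.9, 0.1], [0.0, 1.0]],   # continue: faults are absorbing
    [[1.0, 0.0], [0.9, 0.1]],   # repair: usually restores nominal
])
O = np.array([  # O[a, s', o] -- observation probabilities (noisy state readout)
    [[0.85, 0.15], [0.15, 0.85]],
    [[0.85, 0.15], [0.15, 0.85]],
])
C = np.array([  # C[s, a] -- stage cost
    [0.0, 1.0],   # repairing a nominal system wastes effort
    [5.0, 1.0],   # running while faulted is expensive
])

# --- Stochastic finite-state controller with a fixed memory budget ---
# psi[n, a]        : probability of taking action a in memory node n
# eta[n, a, o, n'] : memory-node transition given action and observation
n_nodes = 2
psi = np.array([[0.9, 0.1], [0.2, 0.8]])
eta = np.zeros((n_nodes, n_actions, n_obs, n_nodes))
eta[:, :, 0, 0] = 1.0   # benign observation  -> node 0 ("trusting")
eta[:, :, 1, 1] = 1.0   # alarming observation -> node 1 ("suspicious")

def rollout(horizon=20, gamma=0.95):
    """Simulate one episode under the FSC; return discounted cumulative cost."""
    s, n, total = 0, 0, 0.0
    for t in range(horizon):
        a = rng.choice(n_actions, p=psi[n])
        total += (gamma ** t) * C[s, a]
        s = rng.choice(n_states, p=T[a, s])
        o = rng.choice(n_obs, p=O[a, s])
        n = rng.choice(n_nodes, p=eta[n, a, o])
    return total

def cvar(samples, alpha=0.1):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of costs."""
    tail = np.sort(samples)[int((1 - alpha) * len(samples)):]
    return tail.mean()

costs = np.array([rollout() for _ in range(5000)])
print(f"mean cost : {costs.mean():.3f}")
print(f"CVaR_0.10 : {cvar(costs, 0.10):.3f}")  # risk-averse evaluation
```

A risk-averse synthesis procedure would then adjust psi and eta within the fixed memory budget (here, two nodes) to drive the risk estimate down; in the paper this improvement step is carried out by bounded policy iteration with convex-optimization subproblems rather than by Monte Carlo evaluation as above.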
ISSN: 2378-5861
DOI: 10.23919/ACC45564.2020.9147792