On Practical Robust Reinforcement Learning: Adjacent Uncertainty Set and Double-Agent Algorithm

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 4, pp. 7696-7710
Main Authors: Hwang, Ukjo; Hong, Songnam
Format: Journal Article
Language: English
Published: United States, IEEE, 01.04.2025

Summary: Robust reinforcement learning (RRL) seeks a robust policy by optimizing worst-case performance over an uncertainty set. This set contains Markov decision processes (MDPs) perturbed from a nominal MDP (N-MDP), which generates the training samples; it thus captures potential mismatches between the training simulator (i.e., the N-MDP) and real-world settings (i.e., the testing environments). Unfortunately, existing RRL algorithms apply only to the tabular setting, and extending them to more general continuous state spaces remains an open problem. We contribute to this subject in the following ways. We first construct a refined uncertainty set that, unlike existing sets, contains only plausible (perturbed) MDPs. Based on this set, we propose a sample-based RRL algorithm for the tabular setting, named adjacent robust Q-learning (ARQ-Learning), and characterize its finite-time error bound. We also prove that ARQ-Learning converges as fast as standard Q-learning and robust Q-learning (Robust-Q) while guaranteeing better robustness. Our major contribution is the introduction of an additional pessimistic agent that addresses the main hurdle in extending ARQ-Learning to large or continuous state spaces. Leveraging this double-agent approach, we develop, for the first time, (model-free) RRL algorithms for continuous state/action spaces. Experiments demonstrate the effectiveness of our algorithms.
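The abstract does not spell out the ARQ-Learning update rule or the construction of the adjacent uncertainty set, so the following is only a minimal, illustrative sketch of the general double-agent idea it describes: a tabular Q-learning loop in which a second, pessimistic agent maintains a worst-case value estimate that is mixed into the main agent's bootstrap target. The environment API is assumed to be Gymnasium-style, and all names (q_main, q_pess, perturbation_radius) are hypothetical, not taken from the paper.

import numpy as np

def double_agent_robust_q(env, num_states, num_actions,
                          episodes=500, alpha=0.1, gamma=0.99,
                          epsilon=0.1, perturbation_radius=0.2):
    """Illustrative double-agent tabular sketch; NOT the paper's exact ARQ-Learning."""
    q_main = np.zeros((num_states, num_actions))   # nominal agent
    q_pess = np.zeros((num_states, num_actions))   # pessimistic agent

    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy behavior policy on the nominal agent.
            if np.random.rand() < epsilon:
                action = np.random.randint(num_actions)
            else:
                action = int(np.argmax(q_main[state]))

            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated

            # Pessimistic agent bootstraps from the worst action value,
            # a crude stand-in for the worst case over an uncertainty set.
            pess_target = reward + gamma * (not done) * np.min(q_pess[next_state])
            q_pess[state, action] += alpha * (pess_target - q_pess[state, action])

            # Nominal agent mixes the pessimistic estimate into its target;
            # perturbation_radius (assumed) controls how much robustness is injected.
            robust_value = ((1 - perturbation_radius) * np.max(q_main[next_state])
                            + perturbation_radius * np.min(q_pess[next_state]))
            main_target = reward + gamma * (not done) * robust_value
            q_main[state, action] += alpha * (main_target - q_main[state, action])

            state = next_state

    return q_main, q_pess

In the paper, the pessimistic agent is what reportedly enables the extension to large or continuous state/action spaces (e.g., with function approximation); the tabular mixing above is only a caricature of that role, not the published algorithm.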
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2024.3385234