Robust Offline Reinforcement Learning with Heavy-Tailed Rewards

Bibliographic Details
Main Authors: Zhu, Jin; Wan, Runzhe; Qi, Zhengling; Luo, Shikai; Shi, Chengchun
Format: Journal Article
Language: English
Published: 28.10.2023

Summary: This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions. The implementation of the proposal is available at https://github.com/Mamba413/ROOM.
DOI: 10.48550/arxiv.2310.18715
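
The key ingredient named in the summary is the median-of-means (MoM) estimator, which averages heavy-tailed rewards robustly by splitting the data into groups and taking the median of the group means. The following is a minimal, self-contained Python sketch of that idea for illustration only; the function name median_of_means, its parameters, and the toy reward data are our assumptions, not the authors' implementation (ROAM and ROOM themselves are in the linked repository).

import numpy as np

def median_of_means(x, n_groups=5, rng=None):
    """Robust mean estimate: split `x` into `n_groups` disjoint groups,
    average within each group, and return the median of the group means."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    perm = rng.permutation(len(x))              # shuffle before splitting
    groups = np.array_split(x[perm], n_groups)  # near-equal disjoint groups
    return float(np.median([g.mean() for g in groups]))

# Heavy-tailed rewards (Student-t with 2 degrees of freedom): the plain
# sample mean is volatile across draws, while the MoM estimate is stable.
rng = np.random.default_rng(0)
rewards = rng.standard_t(df=2, size=1000)
print("sample mean     :", rewards.mean())
print("median of means :", median_of_means(rewards, n_groups=10, rng=1))

Increasing n_groups makes the estimator more resistant to extreme rewards at the cost of using fewer samples per group; the paper's frameworks build on this trade-off to obtain uncertainty estimates for the value function.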