Robust Offline Reinforcement Learning with Heavy-Tailed Rewards
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 28.10.2023 |
Subjects | |
Summary: | This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions. The implementation of the proposal is available at https://github.com/Mamba413/ROOM. |
DOI: | 10.48550/arxiv.2310.18715 |
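
The abstract above centers on combining the median-of-means (MoM) device with offline RL to obtain a robust value estimate and a simple uncertainty measure for pessimism. Below is a minimal, hypothetical sketch of that idea under strong simplifying assumptions: the value estimator on each fold is just a Monte Carlo mean return, and the cross-fold spread is used as a crude pessimism penalty. Function names such as `fit_value_estimator` and `median_of_means_value` are illustrative and are not the authors' ROAM/ROOM implementation (see the GitHub link above for that).

```python
# Minimal median-of-means sketch for off-policy value estimation with
# heavy-tailed rewards. Illustrative only; not the paper's ROAM/ROOM code.
import numpy as np


def fit_value_estimator(returns):
    # Placeholder "estimator": the empirical mean return on one fold.
    # In the paper's setting this would be a fitted value-function estimator.
    return float(np.mean(returns))


def median_of_means_value(returns, num_folds=5, seed=None):
    """Split the logged returns into disjoint folds, estimate the value on
    each fold, and aggregate with the median. The spread across folds serves
    as a simple uncertainty estimate that can be used as a pessimism penalty."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    idx = rng.permutation(len(returns))
    folds = np.array_split(idx, num_folds)
    fold_estimates = np.array([fit_value_estimator(returns[f]) for f in folds])
    value = float(np.median(fold_estimates))
    # Median absolute deviation across folds as a crude uncertainty proxy.
    uncertainty = float(np.median(np.abs(fold_estimates - value)))
    return value, uncertainty


if __name__ == "__main__":
    # Simulated heavy-tailed returns: Student-t noise around a true value of 1.0.
    rng = np.random.default_rng(0)
    heavy_tailed_returns = rng.standard_t(df=2, size=2000) + 1.0
    v, u = median_of_means_value(heavy_tailed_returns, num_folds=10, seed=0)
    print(f"MoM value estimate: {v:.3f}, uncertainty: {u:.3f}")
    print(f"Pessimistic value (estimate minus penalty): {v - u:.3f}")
```

Compared with a plain sample mean, the median over fold-level estimates is far less sensitive to the extreme rewards that a heavy-tailed distribution produces, and the cross-fold deviation gives the lower-confidence (pessimistic) value without any explicit variance assumption.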