Offline Primal-Dual Reinforcement Learning for Linear MDPs
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 22.05.2023 |
Summary: Offline Reinforcement Learning (RL) aims to learn a near-optimal policy from a fixed dataset of transitions collected by another policy. This problem has attracted a lot of attention recently, but most existing methods with strong theoretical guarantees are restricted to finite-horizon or tabular settings. In contrast, few algorithms for infinite-horizon settings with function approximation and minimal assumptions on the dataset are both sample-efficient and computationally efficient. Another gap in the current literature is the lack of theoretical analysis for the average-reward setting, which is more challenging than the discounted setting. In this paper, we address both of these issues by proposing a primal-dual optimization method based on the linear programming formulation of RL. Our key contribution is a new reparametrization that allows us to derive low-variance gradient estimators that can be used in a stochastic optimization scheme using only samples from the behavior policy. Our method finds an $\varepsilon$-optimal policy with $O(\varepsilon^{-4})$ samples, improving on the previous $O(\varepsilon^{-5})$, while being computationally efficient for infinite-horizon discounted and average-reward MDPs with realizable linear function approximation and partial coverage. Moreover, to the best of our knowledge, this is the first theoretical result for average-reward offline RL.
DOI: 10.48550/arxiv.2305.12944
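
The summary describes a primal-dual method over the linear programming formulation of the MDP, optimized stochastically from behavior-policy samples. As a purely illustrative aside, the sketch below shows what a generic stochastic primal-dual update of the discounted LP Lagrangian looks like with linear parametrizations of the value function and of the occupancy ratios. It is not the paper's algorithm or its reparametrization; every function name, feature map, and hyperparameter here is an assumption.

```python
# Illustrative sketch only (not the paper's method). We run simultaneous
# stochastic ascent/descent on the Lagrangian of the discounted LP:
#   L(beta, theta) = (1 - gamma) * E_{s0 ~ nu0}[V(s0)]
#                  + E_D[ w(s, a) * (r + gamma * V(s') - V(s)) ],
# with V(s) = psi(s) @ theta and occupancy ratio w(s, a) = phi(s, a) @ beta.
import numpy as np

def primal_dual_offline(dataset, nu0_features, gamma=0.99, lr=1e-2,
                        iters=10_000, w_max=10.0, seed=0):
    """One-sample stochastic primal-dual loop over an offline dataset.

    dataset: list of (s_feat, sa_feat, reward, sn_feat) tuples, where
      s_feat / sn_feat are psi(s) / psi(s') and sa_feat is phi(s, a).
    nu0_features: psi-features of sampled initial states, shape (m, d_v).
    """
    rng = np.random.default_rng(seed)
    d_v = nu0_features.shape[1]       # dim of value features psi(s)
    d_w = dataset[0][1].shape[0]      # dim of state-action features phi(s, a)
    theta = np.zeros(d_v)             # value-function (dual) parameters
    beta = np.zeros(d_w)              # occupancy-ratio (primal) parameters

    for _ in range(iters):
        s_f, sa_f, r, sn_f = dataset[rng.integers(len(dataset))]
        s0_f = nu0_features[rng.integers(len(nu0_features))]

        # Bellman residual under the current value estimate V = psi @ theta.
        delta = r + gamma * sn_f @ theta - s_f @ theta
        # Keep the evaluated ratio nonnegative and bounded -- a crude
        # stand-in for a proper projection of the primal variable.
        w = float(np.clip(sa_f @ beta, 0.0, w_max))

        # Ascend the Lagrangian in the primal (occupancy-ratio) parameters...
        beta += lr * delta * sa_f
        # ...and descend in the dual (value-function) parameters.
        theta -= lr * ((1 - gamma) * s0_f + w * (gamma * sn_f - s_f))

    return beta, theta

# Toy usage with random features (shapes only; purely illustrative).
rng = np.random.default_rng(1)
data = [(rng.normal(size=4), rng.normal(size=6), rng.normal(),
         rng.normal(size=4)) for _ in range(500)]
beta, theta = primal_dual_offline(data, nu0_features=rng.normal(size=(20, 4)))
```

In a scheme of this flavor, the learned ratio parameters would typically be turned into a policy by normalizing the implied occupancies over actions; the paper's contribution, per the summary, is a reparametrization that makes the corresponding gradient estimators low-variance, which this generic sketch does not attempt to reproduce.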