State-free Reinforcement Learning
In this work, we study the \textit{state-free RL} problem, where the algorithm has no information about the state space before interacting with the environment. Specifically, denoting the reachable state set by $S^\Pi := \{ s \mid \max_{\pi\in \Pi} q^{P, \pi}(s) > 0 \}$, we design an algorithm that requires no information on the state space $S$ while achieving a regret that is completely independent of $S$ and depends only on $S^\Pi$. We view this as a concrete first step towards \textit{parameter-free RL}, with the goal of designing RL algorithms that require no hyper-parameter tuning.
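To make the definition of $S^\Pi$ concrete: when $\Pi$ is the class of all policies, $\max_{\pi\in\Pi} q^{P,\pi}(s) > 0$ holds exactly when $s$ can be reached from the initial state through transitions of positive probability, so computing $S^\Pi$ reduces to graph reachability over the transition kernel. Below is a minimal sketch under that assumption for a tabular MDP; the dictionary encoding of the kernel `P` and the function name `reachable_states` are illustrative choices, not from the paper.

```python
from collections import deque

def reachable_states(P, s0):
    """Compute S^Pi = {s : max_pi q^{P,pi}(s) > 0} for a tabular MDP.

    For the class of all policies, a state has positive occupancy under
    some policy exactly when it can be reached from the initial state s0
    through transitions of positive probability, so the computation
    reduces to a graph search over the transition kernel.
    """
    reachable = {s0}
    frontier = deque([s0])
    while frontier:
        s = frontier.popleft()
        # P[s] maps each action to a distribution over next states.
        for next_dist in P.get(s, {}).values():
            for s_next, prob in next_dist.items():
                if prob > 0 and s_next not in reachable:
                    reachable.add(s_next)
                    frontier.append(s_next)
    return reachable

# Toy kernel: s2 is in S but never entered from s0, so it is not in S^Pi.
P = {
    "s0": {"a": {"s1": 1.0}},
    "s1": {"a": {"s0": 0.5, "s1": 0.5}},
    "s2": {"a": {"s2": 1.0}},
}
print(reachable_states(P, "s0"))  # {'s0', 's1'}
```

This illustrates why a regret bound depending only on $S^\Pi$ can be much stronger than one depending on $S$: states like `s2` inflate $|S|$ without ever being visited.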
Format | Journal Article |
Language | English |
Published | 27.09.2024 |
DOI | 10.48550/arxiv.2409.18439 |