Stochastic Gradient Succeeds for Bandits

We show that the \emph{stochastic gradient} bandit algorithm converges to a \emph{globally optimal} policy at an \(O(1/t)\) rate, even with a \emph{constant} step size. Remarkably, global convergence of the stochastic gradient bandit algorithm has not been previously established, even though it is a...
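
As a concrete illustration of the algorithm named in the abstract, the following is a minimal sketch (not the authors' implementation) of a stochastic gradient bandit: a softmax policy over the arms updated by REINFORCE-style stochastic gradient ascent with a constant step size. The number of arms, the Bernoulli reward means, the step size, and the horizon are illustrative assumptions.

```python
# Minimal sketch of a stochastic gradient bandit with a softmax policy and a
# constant step size. All problem parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 5                                   # number of arms (assumed)
true_means = rng.uniform(0.0, 1.0, K)   # Bernoulli reward means (assumed)
theta = np.zeros(K)                     # softmax logits, one per arm
eta = 0.1                               # constant step size (assumed)
T = 100_000                             # number of interactions (assumed)

for t in range(T):
    # Softmax policy: pi(a) = exp(theta_a) / sum_b exp(theta_b)
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()

    # Sample an arm from the policy and observe a stochastic Bernoulli reward
    a = rng.choice(K, p=pi)
    r = float(rng.random() < true_means[a])

    # REINFORCE estimator: r * grad_theta log pi(a), where
    # d/d theta_b log pi(a) = 1{a = b} - pi(b)
    grad = -r * pi
    grad[a] += r
    theta += eta * grad                 # constant-step-size gradient ascent

print("best arm:", true_means.argmax(), "arm favored by learned policy:", pi.argmax())
```

With these assumed settings the learned policy typically concentrates on the best arm, consistent with the global-convergence claim in the abstract, though this sketch is only a demonstration, not a verification of the O(1/t) rate.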

Bibliographic Details
Published in: arXiv.org
Main Authors: Mei, Jincheng; Zhong, Zixin; Dai, Bo; Agarwal, Alekh; Szepesvari, Csaba; Schuurmans, Dale
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 27.02.2024