Stochastic Gradient Succeeds for Bandits

Bibliographic Details
Published in: arXiv.org
Main Authors: Mei, Jincheng; Zhong, Zixin; Dai, Bo; Agarwal, Alekh; Szepesvari, Csaba; Schuurmans, Dale
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 27.02.2024
Summary: We show that the \emph{stochastic gradient} bandit algorithm converges to a \emph{globally optimal} policy at an \(O(1/t)\) rate, even with a \emph{constant} step size. Remarkably, global convergence of the stochastic gradient bandit algorithm has not been previously established, even though it is an old algorithm known to be applicable to bandits. The new result is achieved by establishing two novel technical findings: first, the noise of the stochastic updates in the gradient bandit algorithm satisfies a strong ``growth condition'' property, where the variance diminishes whenever progress becomes small, implying that additional noise control via diminishing step sizes is unnecessary; second, a form of ``weak exploration'' is automatically achieved through the stochastic gradient updates, since they prevent the action probabilities from decaying faster than \(O(1/t)\), thus ensuring that every action is sampled infinitely often with probability \(1\). These two findings can be used to show that the stochastic gradient update is already ``sufficient'' for bandits in the sense that exploration versus exploitation is automatically balanced in a manner that ensures almost sure convergence to a global optimum. These novel theoretical findings are further verified by experimental results.
ISSN: 2331-8422
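
To make the update described in the summary concrete, the following is a minimal sketch of a softmax (stochastic gradient) bandit algorithm run with a constant step size. It is not the authors' code: the Bernoulli reward model, step size, horizon, and arm means are illustrative assumptions chosen only to show the update rule.

```python
import numpy as np

def stochastic_gradient_bandit(true_means, eta=0.1, T=10000, seed=0):
    """Softmax (gradient) bandit with a constant step size.

    true_means: success probabilities of Bernoulli reward arms (illustrative).
    eta, T, seed: illustrative hyperparameters, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    K = len(true_means)
    theta = np.zeros(K)                        # action preferences (logits)
    for _ in range(T):
        pi = np.exp(theta - theta.max())
        pi /= pi.sum()                         # softmax policy over the K arms
        a = rng.choice(K, p=pi)                # sample an action from the policy
        r = float(rng.random() < true_means[a])    # Bernoulli reward for that arm
        # REINFORCE-style stochastic gradient of expected reward:
        # grad_j = r * (1{j == a} - pi_j), an unbiased estimate of
        # d/dtheta_j of sum_a pi_a(theta) * mu_a.
        grad = -r * pi
        grad[a] += r
        theta += eta * grad                    # constant step size, no decay schedule
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()
    return theta, pi

if __name__ == "__main__":
    theta, pi = stochastic_gradient_bandit([0.2, 0.5, 0.8])
    print("final policy:", np.round(pi, 3))    # mass should concentrate on the best arm
```

Note that the sketch uses no baseline, no diminishing step size, and no explicit exploration bonus; under the abstract's claims, the stochastic gradient update alone keeps action probabilities from decaying faster than \(O(1/t)\), which is the sense in which exploration is handled automatically.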