Simple Regret Minimization for Contextual Bandits

Bibliographic Details
Published in: arXiv.org
Main Authors: Deshmukh, Aniket Anand; Sharma, Srinagesh; Cutler, James W.; Moldwin, Mark; Clayton, Scott
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25.02.2020

Summary: There are two variants of the classical multi-armed bandit (MAB) problem that have received considerable attention from machine learning researchers in recent years: contextual bandits and simple regret minimization. Contextual bandits are a sub-class of MABs where, at every time step, the learner has access to side information that is predictive of the best arm. Simple regret minimization assumes that the learner only incurs regret after a pure exploration phase. In this work, we study simple regret minimization for contextual bandits. Motivated by applications where the learner has separate training and autonomous modes, we assume that the learner experiences a pure exploration phase, where feedback is received after every action but no regret is incurred, followed by a pure exploitation phase in which regret is incurred but there is no feedback. We present the Contextual-Gap algorithm and establish performance guarantees on the simple regret, i.e., the regret during the pure exploitation phase. Our experiments examine a novel application to adaptive sensor selection for magnetic field estimation in interplanetary spacecraft, and demonstrate considerable improvement over algorithms designed to minimize the cumulative regret.
ISSN: 2331-8422
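
For readers unfamiliar with the setting, the sketch below illustrates the explore-then-exploit protocol described in the summary: a pure exploration phase in which feedback is observed but no regret is counted, followed by a pure exploitation phase in which simple regret is incurred and no further feedback is available. The linear reward model, per-arm ridge-regression estimates, and uncertainty-driven exploration rule are illustrative assumptions; this is not the paper's Contextual-Gap algorithm.

```python
# Minimal sketch of the explore-then-exploit protocol from the abstract.
# Assumptions (not from the paper): linear rewards, per-arm ridge regression,
# and exploration of the arm with the largest uncertainty for the current context.
import numpy as np

rng = np.random.default_rng(0)
n_arms, dim = 5, 8
theta_true = rng.normal(size=(n_arms, dim))      # hidden reward parameters

def context():
    return rng.normal(size=dim)

def reward(arm, x):
    return theta_true[arm] @ x + 0.1 * rng.normal()

# Per-arm ridge-regression statistics.
A = np.stack([np.eye(dim) for _ in range(n_arms)])   # regularized Gram matrices
b = np.zeros((n_arms, dim))

def estimate(arm, x):
    theta_hat = np.linalg.solve(A[arm], b[arm])       # ridge estimate of the arm
    bonus = np.sqrt(x @ np.linalg.solve(A[arm], x))   # uncertainty in direction x
    return theta_hat @ x, bonus

# Pure exploration phase: feedback is observed, no regret is counted.
for t in range(2000):
    x = context()
    arm = int(np.argmax([estimate(a, x)[1] for a in range(n_arms)]))
    r = reward(arm, x)
    A[arm] += np.outer(x, x)
    b[arm] += r * x

# Pure exploitation phase: no feedback, simple regret is incurred.
simple_regret = 0.0
for t in range(500):
    x = context()
    arm = int(np.argmax([estimate(a, x)[0] for a in range(n_arms)]))
    best = max(theta_true[a] @ x for a in range(n_arms))
    simple_regret += best - theta_true[arm] @ x

print(f"average simple regret: {simple_regret / 500:.4f}")
```

In this sketch the exploration phase deliberately ignores the estimated reward and samples the most uncertain arm for each context, since no regret is charged during training; only the final exploitation policy is evaluated, which is the quantity the simple-regret guarantees in the paper concern.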