Hierarchical Bayesian Bandits
| Main Authors | , , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 12.11.2021 |
Summary: Meta-, multi-task, and federated learning can all be viewed as solving similar tasks, drawn from a distribution that reflects task similarities. We provide a unified view of all these problems as learning to act in a hierarchical Bayesian bandit. We propose and analyze a natural hierarchical Thompson sampling algorithm (HierTS) for this class of problems. Our regret bounds hold for many variants of the problems, including when the tasks are solved sequentially or in parallel, and show that the regret decreases with a more informative prior. Our proofs rely on a novel total variance decomposition that can be applied beyond our models. Our theory is complemented by experiments, which show that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically efficient tool for learning to act in similar bandit tasks.
DOI: 10.48550/arxiv.2111.06929
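
The record only names HierTS; as a rough illustration of the idea in the abstract, the sketch below implements hierarchical Thompson sampling in a toy Gaussian bandit. The model (independent arms sharing per-arm hyper-means), all variances, sizes, and variable names are assumptions chosen for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical Gaussian bandit (assumed for illustration):
#   per-arm hyper-mean   mu_a       ~ N(0, sigma_q^2), shared by all tasks
#   per-task arm mean    theta[s,a] ~ N(mu_a, sigma_0^2)
#   reward               r          ~ N(theta[s,a], sigma^2)
n_tasks, n_arms, horizon = 10, 5, 200
sigma_q, sigma_0, sigma = 1.0, 0.5, 1.0

mu_star = rng.normal(0.0, sigma_q, n_arms)
theta = rng.normal(mu_star, sigma_0, (n_tasks, n_arms))

counts = np.zeros((n_tasks, n_arms))   # pulls per task and arm
sums = np.zeros((n_tasks, n_arms))     # reward sums per task and arm

def hier_ts_step(s):
    """One round of hierarchical Thompson sampling in task s."""
    # 1) Hyper-posterior of each mu_a: combine the hyper-prior with every
    #    task's sample mean, whose marginal variance is sigma_0^2 + sigma^2/n.
    prec = np.full(n_arms, 1.0 / sigma_q**2)
    num = np.zeros(n_arms)
    for t in range(n_tasks):
        pulled = counts[t] > 0
        marg_var = sigma_0**2 + sigma**2 / counts[t, pulled]
        prec[pulled] += 1.0 / marg_var
        num[pulled] += (sums[t, pulled] / counts[t, pulled]) / marg_var
    mu = rng.normal(num / prec, np.sqrt(1.0 / prec))

    # 2) Task-level posterior of theta[s] given the sampled hyper-means.
    t_prec = 1.0 / sigma_0**2 + counts[s] / sigma**2
    t_mean = (mu / sigma_0**2 + sums[s] / sigma**2) / t_prec
    sample = rng.normal(t_mean, np.sqrt(1.0 / t_prec))

    # 3) Act greedily on the sampled means, observe a reward, update stats.
    a = int(np.argmax(sample))
    r = rng.normal(theta[s, a], sigma)
    counts[s, a] += 1
    sums[s, a] += r
    return theta[s].max() - theta[s, a]   # per-round regret

regret = sum(hier_ts_step(s) for _ in range(horizon) for s in range(n_tasks))
print(f"Total regret over {n_tasks * horizon} rounds: {regret:.1f}")
```

Sampling the hyper-means first and then each task's means mirrors the two-level posterior factorization that hierarchical Thompson sampling exploits: conditioned on the hyper-parameter, a task's posterior depends only on that task's own data, so observations from all tasks flow into the hyper-posterior and are shared across tasks through it.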