Improved Algorithms for Adversarial Bandits with Unbounded Losses
Format: | Journal Article |
Language: | English |
Published: | 02.10.2023 |
---|---|
Summary: | We consider the Adversarial Multi-Armed Bandits (MAB) problem with unbounded
losses, where the algorithms have no prior knowledge of the sizes of the
losses. We present UMAB-NN and UMAB-G, two algorithms for non-negative and
general unbounded losses, respectively. For non-negative unbounded losses,
UMAB-NN achieves the first adaptive and scale-free regret bound without
uniform exploration. Building on this, we further develop UMAB-G, which can
learn from arbitrary unbounded losses. Our analysis reveals the asymmetry
between positive and negative losses in the MAB problem and provides
additional insights. We also accompany our theoretical findings with
extensive empirical evaluations, showing that our algorithms consistently
outperform all existing algorithms that handle unbounded losses. |
---|---|
DOI: | 10.48550/arxiv.2310.01756 |
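
The abstract describes adversarial bandit algorithms that adapt to loss magnitudes unknown in advance. The following is a minimal, generic exponential-weights-style sketch of that setting, with a learning rate that shrinks as importance-weighted loss estimates accumulate, so no prior bound on the losses is assumed. It is not the paper's UMAB-NN or UMAB-G; the function name `scale_free_exp3` and all parameters are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's UMAB-NN/UMAB-G): a generic
# exponential-weights bandit loop whose learning rate adapts to the
# observed loss magnitudes, so no prior bound on losses is assumed.
import numpy as np


def scale_free_exp3(loss_fn, n_arms, horizon, rng=None):
    """Play `horizon` rounds; `loss_fn(t, arm)` returns a non-negative loss."""
    if rng is None:
        rng = np.random.default_rng()
    cum_est = np.zeros(n_arms)   # cumulative importance-weighted loss estimates
    sq_sum = 0.0                 # running sum of squared loss estimates
    total_loss = 0.0
    for t in range(horizon):
        # Learning rate shrinks as the (estimated) losses accumulate.
        eta = np.sqrt(np.log(n_arms) / (1.0 + sq_sum))
        # Exponential-weights distribution over arms (stabilised softmax).
        scores = -eta * (cum_est - cum_est.min())
        p = np.exp(scores)
        p /= p.sum()
        arm = rng.choice(n_arms, p=p)
        loss = loss_fn(t, arm)   # only the pulled arm's loss is observed
        total_loss += loss
        est = loss / p[arm]      # unbiased importance-weighted estimate
        cum_est[arm] += est
        sq_sum += est ** 2
    return total_loss


# Toy usage: three arms with losses of a scale the learner never sees upfront.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scales = [5.0, 9.0, 9.0]     # arm 0 is best on average
    total = scale_free_exp3(
        lambda t, a: scales[a] * rng.random(), n_arms=3, horizon=5000, rng=rng
    )
    print(f"cumulative loss over 5000 rounds: {total:.1f}")
```

The design choice illustrated here is the scale-free learning rate: because eta depends only on the losses observed so far, the loop needs no a-priori upper bound on loss sizes, which is the setting the abstract addresses.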