Multi-Task Learning for Contextual Bandits
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 24.05.2017 |
| Subjects | |
| Online Access | Get full text |
| Summary | Contextual bandits are a form of multi-armed bandit in which the agent has access to predictive side information (known as the context) for each arm at each time step, and have been used to model personalized news recommendation, ad placement, and other applications. In this work, we propose a multi-task learning framework for contextual bandit problems. Like multi-task learning in the batch setting, the goal is to leverage similarities in contexts for different arms so as to improve the agent's ability to predict rewards from contexts. We propose an upper confidence bound-based multi-task learning algorithm for contextual bandits, establish a corresponding regret bound, and interpret this bound to quantify the advantages of learning in the presence of high task (arm) similarity. We also describe an effective scheme for estimating task similarity from data, and demonstrate our algorithm's performance on several data sets. |
| DOI | 10.48550/arxiv.1705.08618 |
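
The abstract describes a UCB-based algorithm that shares information across arms plus a scheme for estimating task similarity from data. As a rough illustration of that idea only, here is a minimal Python sketch of a LinUCB-style learner whose per-arm ridge regression pools samples from other arms, weighted by an estimated similarity matrix. This is not the authors' algorithm (the paper's method is kernel-based); the class name `MultiTaskLinUCB`, the cosine-similarity heuristic, and the parameters `alpha` and `lam` are all assumptions made for the sketch.

```python
import numpy as np

class MultiTaskLinUCB:
    """Sketch of a multi-task UCB contextual bandit (not the paper's method).

    Each arm keeps its own (context, reward) history; when fitting an arm's
    reward model, samples from other arms are pooled in, weighted by a
    nonnegative task-similarity matrix.
    """

    def __init__(self, n_arms, dim, alpha=1.0, lam=1.0):
        self.n_arms = n_arms
        self.dim = dim
        self.alpha = alpha                 # exploration width (assumed tuning knob)
        self.lam = lam                     # ridge regularization strength
        self.sim = np.eye(n_arms)          # identity = arms learned independently
        self.X = [[] for _ in range(n_arms)]  # contexts observed per arm
        self.y = [[] for _ in range(n_arms)]  # rewards observed per arm

    def _fit_arm(self, a):
        # Similarity-weighted ridge regression; refit from scratch for clarity
        # (an incremental update would be used in practice).
        A = self.lam * np.eye(self.dim)
        b = np.zeros(self.dim)
        for j in range(self.n_arms):
            w = self.sim[a, j]
            for x, r in zip(self.X[j], self.y[j]):
                A += w * np.outer(x, x)
                b += w * r * x
        A_inv = np.linalg.inv(A)
        return A_inv @ b, A_inv

    def select(self, contexts):
        # contexts: array of shape (n_arms, dim), one context per arm.
        scores = np.empty(self.n_arms)
        for a in range(self.n_arms):
            theta, A_inv = self._fit_arm(a)
            x = np.asarray(contexts[a], dtype=float)
            mean = theta @ x
            width = self.alpha * np.sqrt(x @ A_inv @ x)  # UCB exploration bonus
            scores[a] = mean + width
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.X[arm].append(np.asarray(context, dtype=float))
        self.y[arm].append(float(reward))

    def estimate_similarity(self):
        # Heuristic stand-in for the paper's similarity-estimation scheme:
        # cosine similarity between independently fit per-arm ridge estimates,
        # clipped to [0, 1].
        thetas = []
        for a in range(self.n_arms):
            A = self.lam * np.eye(self.dim)
            b = np.zeros(self.dim)
            for x, r in zip(self.X[a], self.y[a]):
                A += np.outer(x, x)
                b += r * x
            thetas.append(np.linalg.solve(A, b))
        T = np.asarray(thetas)
        norms = np.linalg.norm(T, axis=1, keepdims=True) + 1e-12
        S = np.clip((T / norms) @ (T / norms).T, 0.0, 1.0)
        np.fill_diagonal(S, 1.0)
        self.sim = S
```

A toy driver would call `select` on fresh contexts, feed the observed reward back through `update`, and periodically call `estimate_similarity` once each arm has a few samples. When estimated similarity is high, arms effectively share data and per-arm estimates sharpen faster, which is the advantage the abstract's regret bound is said to quantify.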