Accelerated learning from recommender systems using multi-armed bandit
Format: Journal Article
Language: English
Published: 16.08.2019
Summary: Recommendation systems are a vital component of many online marketplaces,
where there are often millions of items to potentially present to users who
have a wide variety of wants or needs. Evaluating recommender system algorithms
is a hard task, given all the inherent bias in the data, and successful
companies must be able to rapidly iterate on their solution to maintain their
competitive advantage. The gold standard for evaluating recommendation
algorithms has been the A/B test, since it is an unbiased way to estimate how
well one or more algorithms compare in the real world. However, a number of
issues with A/B testing make it impractical as the sole method of testing,
including long lead times and the high cost of exploration. We propose
multi-armed bandit (MAB) testing as a solution to these issues. We showcase how
we implemented a MAB solution as an extra step between offline and online A/B
testing in a production system. We present the results of our experiment and
compare the offline, MAB, and online A/B test metrics for our use case.
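
The key idea is that a MAB routes traffic adaptively among candidate recommendation algorithms instead of splitting it evenly for the full duration of a test, which reduces the cost of exploring weaker variants. As a rough illustration of that idea only (not the paper's production implementation), the sketch below uses Beta-Bernoulli Thompson sampling; the variant names, click-through rates, and simulation loop are illustrative assumptions.

```python
import random


class ThompsonSamplingBandit:
    """Minimal Beta-Bernoulli Thompson sampling over candidate recommender variants."""

    def __init__(self, arms):
        self.arms = list(arms)
        # Beta(successes + 1, failures + 1) posterior per arm.
        self.successes = {arm: 0 for arm in self.arms}
        self.failures = {arm: 0 for arm in self.arms}

    def select(self):
        # Sample a plausible reward rate for each arm and serve the arm
        # with the highest sampled value.
        samples = {
            arm: random.betavariate(self.successes[arm] + 1,
                                    self.failures[arm] + 1)
            for arm in self.arms
        }
        return max(samples, key=samples.get)

    def update(self, arm, reward):
        # reward is 1 for a positive interaction (e.g. a click), else 0.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1


if __name__ == "__main__":
    # Hypothetical true click-through rates for three recommender variants.
    true_ctr = {"control": 0.04, "variant_a": 0.05, "variant_b": 0.06}
    bandit = ThompsonSamplingBandit(true_ctr)

    for _ in range(10_000):
        arm = bandit.select()
        reward = 1 if random.random() < true_ctr[arm] else 0
        bandit.update(arm, reward)

    # Traffic concentrates on the best-performing variant over time,
    # unlike a fixed even split in a classical A/B test.
    for arm in bandit.arms:
        served = bandit.successes[arm] + bandit.failures[arm]
        print(f"{arm}: served {served} times")
```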
DOI: 10.48550/arxiv.1908.06158