Off-Policy Evaluation in Embedded Spaces
Format: Journal Article
Language: English
Published: 05.03.2022
Summary: Off-policy evaluation methods are important in recommendation systems and search engines, where data collected under an existing logging policy is used to estimate the performance of a new proposed policy. A common approach to this problem is importance weighting, where each data point is weighted by the density ratio between the probability of actions given contexts under the target policy and under the logging policy. In practice, two issues often arise. First, many problems have very large action spaces and rewards may not be observed for most actions, so in finite samples we may encounter a positivity violation. Second, many recommendation systems are not probabilistic, so access to logging and target policy densities may not be feasible. To address these issues, we introduce the featurized embedded permutation weighting estimator. The estimator computes the density ratio in an action embedding space, which reduces the possibility of positivity violations. The density ratio is computed by leveraging recent advances in normalizing flows and in density ratio estimation as a classification problem, in order to obtain estimates which are feasible in practice.
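The "density ratio estimation as a classification problem" idea mentioned in the summary can be illustrated with a minimal sketch. The toy Gaussian setup and all names below are illustrative assumptions, not the paper's actual estimator: a logistic classifier is trained to distinguish samples from a "target" distribution and a "logging" distribution, and with equal sample sizes its odds P(target|x)/P(logged|x) recover the density ratio p(x)/q(x).

```python
import numpy as np

# Toy stand-ins for embeddings under the two policies (assumption:
# 1-D unit-variance Gaussians, so the true log-ratio is linear in x).
rng = np.random.default_rng(0)
n = 2000
x_p = rng.normal(loc=1.0, scale=1.0, size=n)   # "target" samples
x_q = rng.normal(loc=0.0, scale=1.0, size=n)   # "logged" samples

# Pool the samples and label which distribution each came from.
X = np.concatenate([x_p, x_q])[:, None]
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain logistic regression fitted by gradient descent (no sklearn).
Xb = np.hstack([X, np.ones((2 * n, 1))])       # add a bias column
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))          # P(target | x)
    grad = Xb.T @ (p - y) / (2 * n)
    w -= 0.5 * grad

def density_ratio(x):
    """Estimate p(x)/q(x) from the classifier's odds exp(w·x + b)."""
    return np.exp(w[0] * x + w[1])

# For N(1,1) vs N(0,1), log p(x)/q(x) = x - 0.5, so the learned
# slope should be near 1 and the bias near -0.5.
```

Replacing the logistic model with a richer classifier (or, as the summary suggests, working in a learned embedding space with flow-based densities) extends the same trick to large action spaces.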
DOI: 10.48550/arxiv.2203.02807