BRExIt: On Opponent Modelling in Expert Iteration
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 31.05.2022 |
Summary: | Finding a best response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best-response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation on BRExIt's algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better performing policies than ExIt. |
---|---|
DOI: | 10.48550/arxiv.2206.00113 |
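The summary describes two mechanisms: an auxiliary opponent-policy head on the apprentice network, and biasing opponent moves during planning towards an opponent model. The sketch below illustrates both ideas in Python under stated assumptions; it is not the authors' implementation, and all class and function names (ApprenticeNet, apprentice_loss, biased_opponent_prior) are hypothetical illustrations.

```python
# Minimal sketch of the two BRExIt ideas described in the summary.
# Assumes a two-player, turn-based game with a flat observation vector;
# names and shapes are illustrative only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ApprenticeNet(nn.Module):
    """Apprentice with a value head, its own policy head, and an auxiliary
    head predicting the opponent's policy (feature shaping, idea 1)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value_head = nn.Linear(hidden, 1)
        self.policy_head = nn.Linear(hidden, n_actions)    # own policy
        self.opponent_head = nn.Linear(hidden, n_actions)  # auxiliary: opponent policy

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return (
            torch.tanh(self.value_head(h)),
            F.log_softmax(self.policy_head(h), dim=-1),
            F.log_softmax(self.opponent_head(h), dim=-1),
        )


def apprentice_loss(net, obs, mcts_targets, opponent_targets, returns):
    """ExIt-style value and policy losses plus an auxiliary cross-entropy
    term pushing the opponent head towards the given or learnt opponent policy."""
    value, log_pi, log_pi_opp = net(obs)
    value_loss = F.mse_loss(value.squeeze(-1), returns)
    policy_loss = -(mcts_targets * log_pi).sum(dim=-1).mean()
    opponent_loss = -(opponent_targets * log_pi_opp).sum(dim=-1).mean()
    return value_loss + policy_loss + opponent_loss


def biased_opponent_prior(apprentice_prior: torch.Tensor,
                          opponent_model_prior: torch.Tensor,
                          mix: float = 0.5) -> torch.Tensor:
    """Idea 2: at search nodes where the opponent acts, mix the expansion
    prior towards the opponent model, so the resulting search targets better
    approximate a best response to that opponent."""
    return (1.0 - mix) * apprentice_prior + mix * opponent_model_prior
```

With `mix = 0` the search reduces to plain ExIt self-play priors, while `mix = 1` plans against the opponent model alone; intermediate values correspond to the biased-planning variants ablated in the paper's experiments.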