BRExIt: On Opponent Modelling in Expert Iteration


Bibliographic Details
Published in: arXiv.org
Main Authors: Hernandez, Daniel; Baier, Hendrik; Kaisers, Michael
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 25.04.2023

More Information
Summary: Finding a best response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best-response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation on BRExIt's algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better-performing policies than ExIt.
ISSN: 2331-8422
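
The summary describes two mechanisms: an auxiliary head on the apprentice network that predicts opponent policies, and a biasing of opponent moves during planning towards the opponent model. The following is a minimal sketch of both ideas, not the authors' implementation; it assumes a PyTorch-style network, and the names (ApprenticeWithOpponentHead, biased_opponent_priors), the convex-mixture biasing rule, and the parameter beta are illustrative assumptions.

```python
# Hypothetical sketch of the two BRExIt ideas described in the summary.
# This is NOT the paper's code; names, shapes, and the mixing rule are
# illustrative assumptions.
import torch
import torch.nn as nn


class ApprenticeWithOpponentHead(nn.Module):
    """ExIt-style apprentice with an auxiliary opponent-policy head.

    The extra head predicts the opponent's policy from the shared torso,
    corresponding to the feature-shaping auxiliary task in the summary.
    """

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)    # apprentice policy
        self.value_head = nn.Linear(hidden, 1)             # state value
        self.opponent_head = nn.Linear(hidden, n_actions)  # auxiliary task

    def forward(self, obs: torch.Tensor):
        h = self.torso(obs)
        return self.policy_head(h), self.value_head(h), self.opponent_head(h)


def biased_opponent_priors(search_priors: torch.Tensor,
                           opponent_model: torch.Tensor,
                           beta: float = 0.5) -> torch.Tensor:
    """Bias search priors at opponent nodes towards an opponent model.

    A simple convex mixture; the paper's exact biasing scheme may differ.
    beta = 0 keeps the apprentice's own priors, beta = 1 fully trusts the
    (given or learnt) opponent model.
    """
    return (1.0 - beta) * search_priors + beta * opponent_model


if __name__ == "__main__":
    net = ApprenticeWithOpponentHead(obs_dim=8, n_actions=4)
    obs = torch.randn(1, 8)
    policy_logits, value, opp_logits = net(obs)
    priors = torch.softmax(policy_logits, dim=-1)
    opp_model = torch.softmax(opp_logits, dim=-1)
    print(biased_opponent_priors(priors, opp_model, beta=0.5))
```

In use, a planner such as MCTS would call biased_opponent_priors at nodes where the opponent acts, so that the search targets returned to the apprentice better approximate a best response to that particular opponent rather than to the apprentice's own self-play policy.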