Exploration-Exploitation in Multi-Agent Competition: Convergence with Bounded Rationality

Bibliographic Details
Published in: IDEAS Working Paper Series from RePEc
Main Authors: Leonardos, Stefanos; Piliouras, Georgios; Spendlove, Kelly
Format: Paper
Language: English
Published: St. Louis: Federal Reserve Bank of St. Louis, 01.01.2021

Summary: The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical learning model that explicitly captures the balance between game rewards and exploration costs. We show that Q-learning always converges to the unique quantal-response equilibrium (QRE), the standard solution concept for games under bounded rationality, in weighted zero-sum polymatrix games with heterogeneous learning agents using positive exploration rates. Complementing recent results about convergence in weighted potential games, we show that fast convergence of Q-learning in competitive settings is obtained regardless of the number of agents and without any need for parameter fine-tuning. As showcased by our experiments in network zero-sum games, these theoretical results provide the necessary guarantees for an algorithmic approach to the currently open problem of equilibrium selection in competitive multi-agent settings.
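
For illustration only, the following minimal Python sketch (not the authors' code) shows the kind of smooth Q-learning dynamics the summary refers to: each agent keeps Q-values over its actions, plays a Boltzmann (softmax) policy with its own positive exploration rate, and updates its Q-values toward the expected payoff against the opponent's current mixed strategy. The payoff matrix, exploration rates T1 and T2, learning rate, and step count are illustrative assumptions; in a 2x2 zero-sum game such as Matching Pennies the joint policy is expected to settle at the game's unique quantal-response equilibrium.

import numpy as np

# Illustrative payoff matrix for player 1 (Matching Pennies); player 2 receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def softmax(q, temperature):
    # Boltzmann policy with exploration rate `temperature`.
    z = q / temperature
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def smooth_q_learning(T1=0.5, T2=0.8, lr=0.1, steps=5000):
    # Heterogeneous positive exploration rates T1, T2 (assumed values).
    Q1 = np.zeros(2)                  # player 1's Q-values over its two actions
    Q2 = np.zeros(2)                  # player 2's Q-values
    for _ in range(steps):
        x1 = softmax(Q1, T1)
        x2 = softmax(Q2, T2)
        # Move each Q-value toward the expected payoff of that action
        # against the opponent's current mixed strategy.
        Q1 += lr * (A @ x2 - Q1)
        Q2 += lr * (-A.T @ x1 - Q2)
    return softmax(Q1, T1), softmax(Q2, T2)

if __name__ == "__main__":
    x1, x2 = smooth_q_learning()
    print("player 1 policy:", x1)     # both approach (0.5, 0.5), the QRE of
    print("player 2 policy:", x2)     # Matching Pennies for any positive exploration rate

In this sketch, the update uses exact expected payoffs rather than sampled rewards, which is a simplifying assumption made to keep the example short and deterministic.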