Learning optimal policies in potential Mean Field Games: Smoothed Policy Iteration algorithms

Bibliographic Details
Main Authors: Tang, Qing; Song, Jiahao
Format: Journal Article
Language: English
Published: 09.12.2022
Online Access: Get full text
DOI: 10.48550/arxiv.2212.04791

More Information
Summary: We introduce two Smoothed Policy Iteration algorithms (\textbf{SPI}s) as rules for learning policies and as methods for computing Nash equilibria in second order potential Mean Field Games (MFGs). Global convergence is proved when the coupling term in the MFG system satisfies the Lasry-Lions monotonicity condition. Local convergence to a stable solution is proved for systems which may have multiple solutions. The convergence analysis shows close connections between \textbf{SPI}s and the Fictitious Play algorithm, which has been widely studied in the MFG literature. Numerical simulation results based on finite difference schemes are presented to supplement the theoretical analysis.
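For reference, the Lasry-Lions monotonicity condition invoked above is usually stated as follows: the coupling f is monotone if

\[
\int_{\mathbb{R}^d} \bigl( f(x, m_1) - f(x, m_2) \bigr) \, d(m_1 - m_2)(x) \;\ge\; 0
\quad \text{for all probability measures } m_1,\, m_2 .
\]

The abstract does not reproduce the algorithm itself, so the sketch below only illustrates the general smoothed-policy-iteration idea on a toy discrete mean field game: evaluate the current averaged policy against the frozen population, improve it greedily, average the improvement in with a decaying weight (the fictitious-play-like smoothing step), and let the population respond. The finite state space, controlled random walk, coupling f(x, m) = m(x) (which is Lasry-Lions monotone), spatial potential, and 1/(k+1) rate are all illustrative assumptions, not the authors' finite difference scheme.

```python
# Illustrative sketch only: a smoothed policy iteration (SPI) loop on a toy
# discrete mean field game. The paper treats second order MFG PDE systems via
# finite differences; everything below is a simplifying assumption.
import numpy as np

N = 50                                  # grid points on a 1-D periodic lattice
gamma = 0.95                            # discount factor
eps = 0.25                              # diffusion strength (second order proxy)
actions = np.array([-1.0, 0.0, 1.0])    # drift: left / stay / right
V = np.cos(2 * np.pi * np.arange(N) / N)  # spatial preference (low cost near N/2)

def transition(pi):
    """Row-stochastic matrix of the walk under a randomized policy pi (N x 3)."""
    P = np.zeros((N, N))
    for i in range(N):
        P[i, (i - 1) % N] += eps        # diffusion to the left
        P[i, (i + 1) % N] += eps        # diffusion to the right
        for j, a in enumerate(actions): # drift chosen by the policy
            P[i, (i + int(a)) % N] += (1 - 2 * eps) * pi[i, j]
    return P

def policy_value(pi, m):
    """Discounted cost of following pi forever against a frozen population m."""
    P = transition(pi)
    cost = pi @ (0.5 * actions**2) + V + m   # control energy + potential + coupling
    return np.linalg.solve(np.eye(N) - gamma * P, cost)

def greedy(u, m):
    """Deterministic one-step improvement against the value function u."""
    q = np.zeros((N, len(actions)))
    for i in range(N):
        diff = eps * (u[(i - 1) % N] + u[(i + 1) % N])   # diffusion part of E[u]
        for j, a in enumerate(actions):
            q[i, j] = (0.5 * a**2 + V[i] + m[i]
                       + gamma * (diff + (1 - 2 * eps) * u[(i + int(a)) % N]))
    return np.eye(len(actions))[np.argmin(q, axis=1)]    # one-hot best action

def stationary(pi, iters=500):
    """Long-run population distribution induced by the policy pi."""
    P = transition(pi)
    m = np.ones(N) / N
    for _ in range(iters):
        m = m @ P
    return m

pi_bar = np.full((N, len(actions)), 1 / len(actions))    # smoothed policy
m = np.ones(N) / N                                       # initial population
for k in range(1, 81):
    u = policy_value(pi_bar, m)
    pi_new = greedy(u, m)
    lam = 1.0 / (k + 1)                 # fictitious-play-style decaying weight
    pi_bar = (1 - lam) * pi_bar + lam * pi_new   # the smoothing (averaging) step
    m = stationary(pi_bar)              # population responds to the new policy
    if k % 20 == 0:
        gap = np.abs(pi_bar - pi_new).mean()     # crude stationarity diagnostic
        print(f"iter {k:3d}  policy gap {gap:.4f}")
```

Because the coupling f(x, m) = m(x) is increasing in the density, crowding is penalized and the averaged policy should settle down as the policy gap shrinks, mirroring the global convergence claim under monotonicity.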