Learning optimal policies in potential Mean Field Games: Smoothed Policy Iteration algorithms
Format: Journal Article
Language: English
Published: 09.12.2022
DOI: 10.48550/arxiv.2212.04791
Summary: We introduce two Smoothed Policy Iteration algorithms (\textbf{SPI}s) as rules for learning policies and as methods for computing Nash equilibria in second-order potential Mean Field Games (MFGs). Global convergence is proved when the coupling term in the MFG system satisfies the Lasry-Lions monotonicity condition. Local convergence to a stable solution is proved for systems that may have multiple solutions. The convergence analysis shows close connections between \textbf{SPI}s and the Fictitious Play algorithm, which has been widely studied in the MFG literature. Numerical simulation results based on finite difference schemes are presented to supplement the theoretical analysis.
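The abstract does not spell out the SPI update rule, so the following is only an illustrative sketch assembled from the named ingredients. Recall that a coupling $f$ satisfies the Lasry-Lions monotonicity condition when

$$\int \big(f(x, m_1) - f(x, m_2)\big)\, d(m_1 - m_2)(x) \ge 0 \quad \text{for all densities } m_1, m_2,$$

which holds, for example, for the local congestion cost $f(x, m) = m(x)$ used below. The sketch is a discrete-time, finite-state analogue, not the paper's finite-difference scheme for the second-order MFG PDE system; the ring state space, the softmax smoothing temperature `eps`, and the fictitious-play-style averaging weights $\lambda_n = 2/(n+2)$ are assumptions made for illustration.

```python
import numpy as np

def smoothed_policy_iteration(n_states=50, horizon=40, n_iters=200,
                              eps=0.1, sigma=0.2):
    """Illustrative smoothed policy iteration for a discrete-time MFG on a
    ring of n_states sites (a sketch, not the paper's PDE-based scheme)."""
    actions = np.array([-1, 0, 1])                  # drift: left / stay / right
    nA = len(actions)

    # Controlled transition kernels P[a, x, y] = P(x -> y | action a):
    # deterministic drift x + a plus symmetric noise, mimicking the
    # diffusive (second-order) dynamics on a discrete torus.
    P = np.zeros((nA, n_states, n_states))
    for ai, a in enumerate(actions):
        for x in range(n_states):
            xt = (x + a) % n_states
            P[ai, x, xt] += 1.0 - sigma
            P[ai, x, (xt - 1) % n_states] += sigma / 2
            P[ai, x, (xt + 1) % n_states] += sigma / 2

    ctrl_cost = 0.5 * actions.astype(float) ** 2    # quadratic control cost
    pi = np.full((horizon, n_states, nA), 1.0 / nA) # uniform initial policy
    m0 = np.full(n_states, 1.0 / n_states)

    for it in range(n_iters):
        lam = 2.0 / (it + 2)                        # fictitious-play-style averaging weight

        # Forward pass: mean-field flow induced by the current policy
        # (discrete Kolmogorov / Fokker-Planck equation).
        m = np.zeros((horizon + 1, n_states))
        m[0] = m0
        for t in range(horizon):
            m[t + 1] = np.einsum('x,xa,axy->y', m[t], pi[t], P)

        # Backward pass against the frozen flow (discrete HJB equation),
        # with local congestion coupling f(x, m) = m(x), which satisfies
        # the Lasry-Lions monotonicity condition.
        V = np.zeros(n_states)
        new_pi = np.empty_like(pi)
        for t in reversed(range(horizon)):
            Q = ctrl_cost[None, :] + m[t][:, None] + np.einsum('axy,y->xa', P, V)
            Qmin = Q.min(axis=1, keepdims=True)
            br = np.exp(-(Q - Qmin) / eps)          # Boltzmann (softmax) smoothing
            br /= br.sum(axis=1, keepdims=True)
            # entropy-regularized ("soft") value, computed stably
            V = Qmin.ravel() - eps * np.log(np.exp(-(Q - Qmin) / eps).sum(axis=1))
            new_pi[t] = (1 - lam) * pi[t] + lam * br  # smoothed policy update

        gap = np.abs(new_pi - pi).max()
        pi = new_pi
        if gap < 1e-6:
            break
    return pi, m

pi, m = smoothed_policy_iteration()
print("final-time distribution (first 10 sites):", np.round(m[-1][:10], 3))
```

As `eps` tends to 0 the softmax step approaches an exact best response, whose unsmoothed iteration may oscillate; convergence arguments for schemes of this type typically rest on the combination of smoothing and policy averaging under the monotonicity condition, which is consistent with the connection to Fictitious Play noted in the abstract.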