Two-timescale recurrent neural networks for distributed minimax optimization

Bibliographic Details
Published in: Neural Networks, Vol. 165, pp. 527–539
Main Authors: Xia, Zicong; Liu, Yang; Wang, Jiasen; Wang, Jun
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 01.08.2023
Summary: In this paper, we present two-timescale neurodynamic optimization approaches to distributed minimax optimization. We propose four multilayer recurrent neural networks for solving four different types of generally nonlinear convex–concave minimax problems subject to linear equality and nonlinear inequality constraints. We derive sufficient conditions to guarantee the stability and optimality of the neural networks. We demonstrate the viability and efficiency of the proposed neural networks in two specific paradigms: Nash-equilibrium seeking in a zero-sum game and distributed constrained nonlinear optimization.
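To illustrate the general idea behind two-timescale dynamics for convex–concave minimax problems, here is a minimal sketch (not the paper's networks, whose architectures and constraints are not reproduced in this record): an Euler discretization of gradient descent–ascent on the saddle function f(x, y) = 0.5x² + xy − 0.5y², where the descent variable x evolves on a fast timescale (rate 1/ε) and the ascent variable y on a slow one. The saddle point of this f is (0, 0).

```python
# Hypothetical two-timescale gradient descent-ascent sketch for
# f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 (convex in x, concave in y).
# The continuous-time dynamics are
#   eps * dx/dt = -df/dx = -(x + y)   (fast descent in x)
#         dy/dt = +df/dy =  (x - y)   (slow ascent in y)
# discretized with a simple forward-Euler step of size dt.

def two_timescale_saddle(x0=2.0, y0=-1.5, steps=20000, dt=1e-3, eps=0.05):
    x, y = x0, y0
    for _ in range(steps):
        gx = x + y                 # df/dx at (x, y)
        gy = x - y                 # df/dy at (x, y)
        x += dt * (-gx / eps)      # fast timescale: gradient descent in x
        y += dt * gy               # slow timescale: gradient ascent in y
    return x, y

x_star, y_star = two_timescale_saddle()
# the trajectory converges toward the saddle point (0, 0)
```

The timescale separation (ε ≪ 1) lets the fast variable effectively track its instantaneous minimizer while the slow variable climbs the resulting reduced objective, which is the intuition behind singular-perturbation analyses of such dynamics.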
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2023.06.003