Distributed nonconvex optimization subject to globally coupled constraints via collaborative neurodynamic optimization


Bibliographic Details
Published in: Neural Networks, Vol. 184, p. 107027
Main Authors: Zicong Xia, Yang Liu, Cheng Hu, Haijun Jiang
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 01.04.2025

Summary: In this paper, a recurrent neural network is proposed for distributed nonconvex optimization subject to globally coupled (in)equality constraints and local bound constraints. Two distributed optimization models, a resource allocation problem and a consensus-constrained optimization problem, are established, where the objective functions are not necessarily convex, or the constraints do not guarantee a convex feasible set. To handle the nonconvexity, an augmented Lagrangian function is designed, based on which a recurrent neural network is developed for solving the optimization models in a distributed manner, and convergence to a local optimal solution is proven. To search for global optimal solutions, a collaborative neurodynamic optimization method is established by utilizing multiple instances of the proposed recurrent neural network together with a meta-heuristic rule. A numerical example, a simulation involving an electricity market, and a distributed cooperative control problem are provided to verify and demonstrate the characteristics of the main results.
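To make the abstract's pipeline concrete, the following sketch illustrates the general idea on a toy resource-allocation instance: an augmented Lagrangian over a coupled equality constraint, a single "neurodynamic" solver realized as Euler-discretized primal-dual projection dynamics, and several solver instances launched from random initial states. All problem data (the objectives f_i(x) = a_i x² + b_i sin x, the demand d, the bounds) are hypothetical, the dynamics are a generic projection neural network rather than the paper's exact model, and the plain multi-start loop stands in for the paper's meta-heuristic (collaborative) reinitialization rule.

```python
import numpy as np

def rnn_solve(grad_f, d, lb, ub, x0, rho=10.0, dt=2e-3, steps=50_000):
    """One neural-network instance: Euler-discretized primal-dual
    projection dynamics on the augmented Lagrangian
        L(x, lam) = sum_i f_i(x_i) + lam*(1'x - d) + (rho/2)*(1'x - d)^2,
    with the local box constraints enforced by projection onto [lb, ub].
    Generic sketch only, not the paper's exact neurodynamic model."""
    x, lam = x0.copy(), 0.0
    for _ in range(steps):
        viol = x.sum() - d                         # coupled-constraint violation
        g = grad_f(x) + lam + rho * viol           # gradient of L w.r.t. x
        x = x + dt * (np.clip(x - g, lb, ub) - x)  # projected primal dynamics
        lam = lam + dt * viol                      # multiplier (dual) dynamics
    return x, lam

# Hypothetical nonconvex local objectives f_i(x) = a_i*x^2 + b_i*sin(x)
# (f_i'' = 2*a_i - b_i*sin(x) changes sign on the box, so f_1 is nonconvex).
a = np.array([1.0, 2.0, 0.5])
b = np.array([3.0, 1.0, 2.0])
f = lambda x: float(np.sum(a * x**2 + b * np.sin(x)))
grad_f = lambda x: 2.0 * a * x + b * np.cos(x)
d, lb, ub = 3.0, -2.0, 2.0                         # total demand and local bounds

# Multiple solver instances from random initial states; keep the best
# (near-)feasible local solution. The paper instead couples the instances
# through a meta-heuristic rule to escape poor local minima.
rng = np.random.default_rng(0)
best_x, best_val = None, np.inf
for _ in range(8):
    x, _ = rnn_solve(grad_f, d, lb, ub, rng.uniform(lb, ub, size=3))
    if abs(x.sum() - d) < 1e-2 and f(x) < best_val:
        best_x, best_val = x, f(x)

print(best_x, best_val)
```

The projection form ensures the state stays inside the box at every step (each update is a convex combination of the current state and a clipped point), while the augmentation term rho makes the saddle dynamics locally damped so the multiplier converges instead of oscillating.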
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2024.107027