A penalty-like neurodynamic approach to constrained nonsmooth distributed convex optimization
Published in: Neurocomputing (Amsterdam), Vol. 377, pp. 225–233
Main Authors: , ,
Format: Journal Article
Language: English
Published: Elsevier B.V., 15.02.2020
Summary: A nonsmooth distributed optimization problem subject to affine equality and convex inequality constraints is considered in this paper. All the local objective functions in the distributed optimization problem share a common decision variable. For privacy, each agent does not share its local information, including its local objective function and constraint set, with other agents. To solve this distributed optimization problem, a neurodynamic approach based on penalty-like methods is proposed. It is proved that the presented neurodynamic approach converges to an optimal solution of the considered distributed optimization problem. By reducing auxiliary variables, the proposed neurodynamic approach attains lower model complexity and computational load. Finally, two illustrative examples are given to show the effectiveness and practical applicability of the proposed neural network.
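The abstract does not state the paper's actual network dynamics. As a generic illustration only (not the authors' model), the sketch below discretizes a penalty-style projected consensus dynamic for a toy nonsmooth problem: four agents on a ring graph, each holding a private objective f_i(x) = |x − a_i| and a shared box constraint, with a Laplacian penalty term enforcing approximate agreement. All data (the values a_i, the graph, the penalty weight sigma) are invented for illustration.

```python
import numpy as np

# Hypothetical setup: 4 agents on a ring graph, each with a private
# nonsmooth objective f_i(x) = |x - a_i|; global problem:
#   minimize sum_i f_i(x)  subject to  x in [0, 3].
a = np.array([0.0, 1.0, 2.0, 3.0])
L = np.array([[ 2, -1,  0, -1],        # graph Laplacian of the ring
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

def subgrad(x):
    # A subgradient of |x_i - a_i| for each agent's local state.
    return np.sign(x - a)

def project(x, lo=0.0, hi=3.0):
    # Projection onto the (here, common) box constraint set.
    return np.clip(x, lo, hi)

# Forward-Euler discretization of a penalty-based projected dynamic:
#   x' = P(x - subgrad(x) - sigma * L x) - x,
# where sigma * L x penalizes disagreement between neighbors.
x = np.array([0.0, 3.0, 1.0, 2.0])     # private initial states
sigma, dt = 2.0, 0.01
for _ in range(20000):
    x = x + dt * (project(x - subgrad(x) - sigma * L @ x) - x)

print(x)
```

With a finite penalty weight, the agents end up close to (but not exactly at) consensus near a minimizer of the sum; increasing sigma tightens agreement. This inexactness is precisely what exact-penalty and penalty-like constructions, such as the one the paper proposes, are designed to avoid.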
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2019.10.050