Continuous-Time Distributed Subgradient Algorithm for Convex Optimization With General Constraints

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, vol. 64, no. 4, pp. 1694-1701
Main Authors: Zhu, Yanan; Yu, Wenwu; Wen, Guanghui; Chen, Guanrong; Ren, Wei
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2019
Summary: The distributed convex optimization problem is studied in this paper for a fixed and connected network with general constraints. To solve this optimization problem, a new type of continuous-time distributed subgradient optimization algorithm is proposed based on the Karush-Kuhn-Tucker condition. Using tools from nonsmooth analysis and set-valued function theory, it is proved that the distributed convex optimization problem is solved by a network of agents equipped with the designed algorithm. For the case where the objective function is convex but not strictly convex, it is proved that the agent states associated with the optimal variables converge to an optimal solution of the optimization problem. For the case where the objective function is strictly convex, it is further shown that these states converge to the unique optimal solution. Finally, simulations are performed to illustrate the theoretical analysis.
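
The abstract does not state the algorithm's exact dynamics, so the following is only a minimal, hypothetical sketch of a continuous-time distributed subgradient flow of the same general flavor: agents on a connected graph follow local subgradients plus consensus coupling and an auxiliary multiplier-like state, simulated here by forward-Euler discretization. The flow used is a standard saddle-point-style consensus flow from the distributed-optimization literature, not the authors' algorithm; the graph, local costs f_i, and all parameters are made-up illustration data, and the sketch only enforces the implicit agreement constraint while omitting the paper's general constraints.

import numpy as np

# Ring graph on 4 agents (assumed for illustration) and its Laplacian.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Hypothetical local costs f_i(x) = |x - c_i| + 0.5*(x - c_i)^2
# (nonsmooth and strictly convex); a subgradient is sign(x - c_i) + (x - c_i).
c = np.array([1.0, 2.0, 3.0, 4.0])

def subgradients(x):
    return np.sign(x - c) + (x - c)

# Each agent keeps a primal estimate x_i and an auxiliary state v_i that
# acts like a multiplier for the consensus (agreement) constraint.
x = np.zeros(4)
v = np.zeros(4)

dt = 1e-3                      # Euler step approximating the continuous-time flow
for _ in range(100_000):       # simulate roughly up to t = 100
    x_dot = -subgradients(x) - L @ x - L @ v   # subgradient descent + consensus coupling
    v_dot = L @ x                              # multiplier-like dynamics driven by disagreement
    x += dt * x_dot
    v += dt * v_dot

print("agent estimates:", x)   # all entries should end up close to 2.5,
                               # the minimizer of sum_i f_i for this made-up data

In this toy run the agents reach agreement near the global minimizer without sharing their local costs, which is the qualitative behavior the summary describes; handling general constraints, as the paper does via KKT-based design, would require additional multiplier dynamics not shown here.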
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2018.2852602