Distributed Continuous-Time Optimization: Nonuniform Gradient Gains, Finite-Time Convergence, and Convex Constraint Set

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 62, No. 5, pp. 2239-2253
Main Authors: Peng Lin, Wei Ren, Jay A. Farrell
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2017
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2016.2604324

Summary: In this paper, a distributed optimization problem with general differentiable convex objective functions is studied for continuous-time multi-agent systems with single-integrator dynamics. The objective is for multiple agents to cooperatively optimize a team objective function formed by a sum of local objective functions with only local interaction and information while explicitly taking into account nonuniform gradient gains, finite-time convergence, and a common convex constraint set. First, a distributed nonsmooth algorithm is introduced for a special class of convex objective functions that have a quadratic-like form. It is shown that all agents reach a consensus in finite time while minimizing the team objective function asymptotically. Second, a distributed algorithm is presented for general differentiable convex objective functions, in which the interaction gains of each agent can be self-adjusted based on local states. A corresponding condition is then given to guarantee that all agents reach a consensus in finite time while minimizing the team objective function asymptotically. Third, a distributed optimization algorithm with state-dependent gradient gains is given for general differentiable convex objective functions. It is shown that the distributed continuous-time optimization problem can be solved even though the gradient gains are not identical. Fourth, a distributed tracking algorithm combined with a distributed estimation algorithm is given for general differentiable convex objective functions. It is shown that all agents reach a consensus while minimizing the team objective function in finite time. Fifth, as an extension of the previous results, a distributed constrained optimization algorithm with nonuniform gradient gains and a distributed constrained finite-time optimization algorithm are given. It is shown that both algorithms can be used to solve a distributed continuous-time optimization problem with a common convex constraint set. Numerical examples are included to illustrate the obtained theoretical results.
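
As a concrete illustration of the setup the summary describes (not the paper's own algorithms), the sketch below simulates a continuous-time consensus-plus-gradient flow for single-integrator agents with quadratic local objectives, integrated with forward Euler in Python. The ring graph, the local objectives f_i(x) = 0.5*(x - c_i)^2, and the gains alpha and beta are assumptions chosen for illustration; with constant gains this flow only reaches a neighborhood of the team minimizer, whereas the nonsmooth and state-dependent-gain designs in the paper achieve exact consensus, in finite time where stated.

import numpy as np

# Hypothetical illustration (not the algorithms of Lin, Ren, and Farrell):
# n single-integrator agents x_i' = u_i try to minimize the team objective
# sum_i f_i(x), where each f_i(x) = 0.5 * (x - c_i)**2 is a local quadratic,
# by combining a consensus term over an undirected ring graph with a local
# negative-gradient term.

n = 5
c = np.array([1.0, 2.0, -1.0, 4.0, 0.5])    # minimizers of the local objectives
A = np.zeros((n, n))                         # ring-graph adjacency (undirected, connected)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

def grad_f(i, xi):
    """Gradient of the local objective f_i(x) = 0.5 * (x - c_i)**2."""
    return xi - c[i]

x = np.array([5.0, -3.0, 0.0, 2.0, -4.0])    # arbitrary initial agent states
dt, steps = 1e-3, 20000                      # forward-Euler step and horizon (20 s)
alpha, beta = 5.0, 1.0                       # consensus gain and gradient gain (uniform here)

for _ in range(steps):
    u = np.empty(n)
    for i in range(n):
        consensus = sum(A[i, j] * (x[j] - x[i]) for j in range(n))
        u[i] = alpha * consensus - beta * grad_f(i, x[i])
    x = x + dt * u

# The team minimizer of sum_i f_i is the average of the c_i, here 1.3; the
# final states cluster near it, with a residual disagreement set by beta/alpha.
print("final agent states:", x)
print("team minimizer    :", c.mean())

For the constrained variants mentioned in the summary, each state would additionally be projected onto the common convex constraint set at every step; that refinement is omitted in this sketch.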