An approximate dual subgradient algorithm for multi-agent non-convex optimization
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 13.10.2010 |
DOI | 10.48550/arxiv.1010.2732 |
Summary: | We consider a multi-agent optimization problem where agents subject to local, intermittent interactions aim to minimize a sum of local objective functions subject to a global inequality constraint and a global state constraint set. In contrast to previous work, we do not require the objective, constraint functions, or state constraint sets to be convex. To handle time-varying network topologies that satisfy a standard connectivity assumption, we resort to consensus algorithm techniques and the Lagrangian duality method. We slightly relax the requirement of exact consensus and propose a distributed approximate dual subgradient algorithm that enables agents to asymptotically converge to a pair of primal-dual solutions of an approximate problem. To guarantee convergence, we assume that Slater's condition is satisfied and that the optimal solution set of the dual limit is a singleton. We implement our algorithm on a source localization problem and compare its performance with existing algorithms. |
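To make the iteration described in the summary concrete, the following is a minimal, self-contained sketch of a distributed dual subgradient method with a consensus step on the dual variables. It is not the paper's algorithm: it assumes a one-dimensional decision variable, hypothetical non-convex local objectives, a single shared inequality constraint split into local pieces, a fixed doubly stochastic mixing matrix in place of the paper's time-varying topologies, and a finite grid standing in for each compact local state constraint set.

```python
import numpy as np

# --- Hypothetical problem data (illustration only, not the paper's setup) ---
# Each agent i holds a local non-convex objective f_i, a local piece g_i of a
# global inequality constraint sum_i g_i(x) <= 0, and a compact local state
# set X_i, represented here by a finite 1-D grid.
rng = np.random.default_rng(0)
num_agents = 4
grid = np.linspace(-2.0, 2.0, 81)                 # crude stand-in for X_i
targets = rng.uniform(-1.0, 1.0, num_agents)

def f(i, x):
    # quadratic plus a sinusoidal term, so the local objective is non-convex
    return (x - targets[i]) ** 2 + 0.1 * np.sin(5 * x)

def g(i, x):
    # local piece of the global inequality constraint
    return x - 0.5

# Doubly stochastic mixing weights for a fixed ring graph; the paper instead
# allows time-varying topologies satisfying a joint connectivity assumption.
W = np.zeros((num_agents, num_agents))
for i in range(num_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % num_agents] = 0.25
    W[i, (i - 1) % num_agents] = 0.25

mu = np.zeros(num_agents)      # local dual estimates (kept nonnegative)
x_est = np.zeros(num_agents)   # local primal estimates

for k in range(200):
    step = 1.0 / (k + 1)       # diminishing step size
    # 1) consensus step: mix neighbors' dual estimates
    mu = W @ mu
    for i in range(num_agents):
        # 2) primal step: minimize the local Lagrangian over the grid X_i
        lagr = f(i, grid) + mu[i] * g(i, grid)
        x_est[i] = grid[np.argmin(lagr)]
        # 3) dual subgradient step along the local constraint value,
        #    projected back onto the nonnegative orthant
        mu[i] = max(0.0, mu[i] + step * g(i, x_est[i]))

print("primal estimates:", np.round(x_est, 3))
print("dual estimates:  ", np.round(mu, 3))
```

In this sketch, averaging the dual estimates through W plays the role of the relaxed (approximate) consensus requirement, while minimizing the local Lagrangian over a finite grid stands in for exact primal minimization over a non-convex state constraint set.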