Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 63, No. 1, pp. 5-20
Main Authors: Aybat, N. S., Wang, Z., Lin, T., Ma, S.
Format: Journal Article
Language: English
Published: IEEE, 01.01.2018

Summary: Given an undirected graph G = (𝒩, ℰ) of agents 𝒩 = {1, ..., N} connected by the edges in ℰ, we study how to compute a decision on which there is consensus among the agents and which minimizes the sum of agent-specific private convex composite functions {Φ_i}_{i∈𝒩}, where Φ_i ≐ ξ_i + f_i belongs to agent i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, every agent i computes the prox map of ξ_i and the gradient of f_i, followed by local communication with neighboring agents. We also study a stochastic gradient variant, SDPGA, in which each agent i has access only to noisy estimates of ∇f_i. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence of both the suboptimality error and the consensus violation for DPGA and SDPGA, with rates O(1/t) and O(1/√t), respectively.
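
In the notation above, the problem is min_x Σ_{i∈𝒩} Φ_i(x), solved in decentralized form by giving each agent i a local copy x_i and letting the edges in ℰ enforce agreement. Below is a minimal sketch of a decentralized proximal-gradient iteration of the kind the summary describes (a gradient step on f_i, the prox map of ξ_i, and averaging with neighbors). It is not the paper's exact DPGA update, which the title indicates is derived from linearized ADMM and carries correction terms not modeled here; the ℓ1/least-squares instance, ring topology, mixing matrix W, and step size gamma are assumptions made for illustration.

# Hedged sketch: generic decentralized proximal-gradient step in the spirit
# of the summary, NOT the paper's exact DPGA update. The problem instance
# (f_i(x) = 0.5*||A_i x - b_i||^2, xi_i(x) = lam*||x||_1), ring topology,
# mixing matrix W, and step size gamma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 5, 10, 20          # agents, variable dimension, samples per agent
lam, gamma = 0.1, 0.01       # l1 weight and step size (assumed)

A = [rng.standard_normal((m, n)) for _ in range(N)]   # private data of agent i
b = [rng.standard_normal(m) for _ in range(N)]

def grad_f(i, x):
    # Gradient of the smooth part f_i(x) = 0.5*||A_i x - b_i||^2.
    return A[i].T @ (A[i] @ x - b[i])

def prox_xi(v, t):
    # Prox map of t*xi_i with xi_i = lam*||.||_1, i.e., soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

# Symmetric doubly stochastic mixing matrix on a ring graph: each agent only
# combines its own iterate with its two neighbors', matching the edge model.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

x = [np.zeros(n) for _ in range(N)]
for _ in range(500):
    # 1) local communication: average iterates received from neighbors
    mixed = [sum(W[i, j] * x[j] for j in range(N) if W[i, j] > 0.0)
             for i in range(N)]
    # 2) local computation: gradient step on f_i, then prox map of xi_i
    x = [prox_xi(mixed[i] - gamma * grad_f(i, x[i]), gamma) for i in range(N)]

print("max consensus violation:",
      max(np.linalg.norm(xi - x[0]) for xi in x))

The summary's SDPGA variant would replace grad_f with a noisy unbiased estimate, which is why its rate degrades from O(1/t) to O(1/√t).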

ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2017.2713046