Newton-Raphson Consensus for Distributed Convex Optimization

Bibliographic Details
Published in: arXiv.org
Main Authors: Varagnolo, Damiano; Zanella, Filippo; Cenedese, Angelo; Pillonetto, Gianluigi; Schenato, Luca
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 04.11.2015

Summary: We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local private multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation of time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed.
ISSN: 2331-8422
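
To make the consensus-ratio idea in the summary concrete, the following is a minimal Python sketch, not the authors' exact algorithm: it assumes a scalar decision variable, synchronous agents on a fixed ring network with Metropolis weights, illustrative local costs, and a simplified loop that runs several consensus rounds per Newton-Raphson update rather than the single-time-scale dynamics analyzed in the paper. All names (grad, hess, P, eps) are chosen for this sketch.

```python
import numpy as np

# Sketch of a Newton-Raphson consensus update (scalar case, illustrative assumptions).
N = 5                          # number of agents
rng = np.random.default_rng(0)

# Example private strictly convex costs f_i(x) = a_i*(x - b_i)^2 + exp(c_i*x)
a = rng.uniform(1.0, 2.0, N)
b = rng.uniform(-1.0, 1.0, N)
c = rng.uniform(0.1, 0.3, N)

def grad(i, x):
    """Gradient of the i-th local cost at x."""
    return 2.0 * a[i] * (x - b[i]) + c[i] * np.exp(c[i] * x)

def hess(i, x):
    """Second derivative of the i-th local cost at x (always positive here)."""
    return 2.0 * a[i] + c[i] ** 2 * np.exp(c[i] * x)

# Ring communication graph with Metropolis (doubly stochastic) consensus weights.
P = np.zeros((N, N))
for i in range(N):
    for j in ((i - 1) % N, (i + 1) % N):
        P[i, j] = 1.0 / 3.0
    P[i, i] = 1.0 - P[i].sum()

x = np.zeros(N)                # each agent's local estimate of the global minimizer
eps = 0.2                      # small step size (separation of time scales)

for k in range(200):
    # Each agent forms the two local quantities whose network averages
    # reconstruct an approximate Newton-Raphson direction.
    y = np.array([hess(i, x[i]) * x[i] - grad(i, x[i]) for i in range(N)])
    z = np.array([hess(i, x[i]) for i in range(N)])

    # A few rounds of synchronous average consensus on y and z.
    for _ in range(30):
        y, z = P @ y, P @ z

    # Consensus ratio: y/z ~ (sum_i H_i)^(-1) (sum_i H_i x_i - sum_i g_i),
    # i.e. an approximate Newton-Raphson step toward the global minimizer.
    x = (1.0 - eps) * x + eps * (y / z)

print("local estimates:", x)   # all entries close to the minimizer of sum_i f_i
```

The point the sketch illustrates is that averaging y_i = H_i x_i - g_i and z_i = H_i over the network and taking the ratio y/z yields an approximate Newton-Raphson direction without any agent disclosing its private cost function.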