A General Framework to Distribute Iterative Algorithms with Localized Information over Networks

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 68, No. 12, pp. 1-16
Main Authors: Timoudas, Thomas Ohlson; Zhang, Silun; Magnusson, Sindri; Fischione, Carlo
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
Summary: Emerging applications in the IoT (Internet of Things) and in edge computing/learning have sparked massive renewed interest in developing distributed versions of existing (centralized) iterative algorithms, often used for optimization or machine-learning purposes. While existing works in the literature exhibit similarities, in both algorithm design and theoretical analysis, there is still no unified method or framework for accomplishing these tasks. This paper develops such a general framework for distributing the execution of (centralized) iterative algorithms over networks in which the required information or data is partitioned between the nodes of the network. The paper furthermore shows that the distributed iterative algorithm resulting from the proposed framework retains the convergence properties (rate) of the original (centralized) iterative algorithm. In addition, the paper applies the proposed general framework to several example applications, obtaining results comparable to the state of the art for each such example, while greatly simplifying and generalizing their convergence analysis. These example applications yield new results for distributed proximal versions of gradient descent, the heavy-ball method, and Newton's method. For example, these results show that the dependence on the condition number of the convergence rate of the distributed heavy-ball method is at least as good as that of centralized gradient descent.
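
The summary above compares the condition-number dependence of the paper's distributed heavy-ball method with that of centralized gradient descent. As background only, the following is a minimal sketch of the classical (centralized) heavy-ball iteration on a toy quadratic; the step sizes, the test problem, and all names are illustrative assumptions and do not reproduce the paper's distributed framework.

import numpy as np

def heavy_ball(grad, x0, alpha, beta, iters=200):
    # Polyak's heavy-ball update: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Toy strongly convex quadratic f(x) = 0.5 * x^T A x with condition number L/mu = 10 (assumed example).
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
mu, L = 1.0, 10.0
# Classical heavy-ball tuning for quadratics (an assumption, not taken from the paper):
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
print(heavy_ball(grad, np.array([5.0, -3.0]), alpha, beta))  # close to the minimizer [0, 0]
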
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2023.3279901