MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. PP, pp. 1-15
Main Authors: Wu, Mou; Liao, Haibin; Ding, Zhengtao; Xiao, Yonggang
Format: Journal Article
Language: English
Published: IEEE, United States, 26.03.2024
Summary: Gradient-type distributed optimization methods have blossomed into one of the most important tools for solving a minimization learning task over a networked agent system. However, performing only one gradient update per iteration makes it difficult to achieve a substantive acceleration of convergence. In this article, we propose an accelerated framework named multiupdates single-combination (MUSIC), which allows each agent to perform multiple local updates and a single combination in each iteration. More importantly, we embed inexact and exact distributed optimization methods into this framework, thereby developing two new algorithms that exhibit accelerated linear convergence and high communication efficiency. Our rigorous convergence analysis reveals the sources of the steady-state errors arising from inexact policies and offers effective solutions. Numerical results on synthetic and real datasets validate our theoretical motivations and analysis and demonstrate the performance advantages of the proposed methods.
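
The MUSIC recipe described in the summary (multiple local gradient updates followed by a single combination per communication round) can be sketched in a few lines. Below is a minimal Python illustration of that pattern applied to a DGD-style (inexact) update on a toy problem; the names (music_round, Q, W, step) and the quadratic objectives are illustrative assumptions, not the paper's notation, algorithm details, or code.

    # Minimal sketch of the multiupdates single-combination (MUSIC) pattern
    # from the abstract, applied to a DGD-style (inexact) update.
    # All names and the toy objectives below are illustrative assumptions.
    import numpy as np

    def music_round(X, local_grads, W, step, Q):
        """One MUSIC iteration over n agents with d-dimensional iterates.

        X           : (n, d) array, row i is agent i's current iterate
        local_grads : list of n callables, local_grads[i](x) -> grad of f_i at x
        W           : (n, n) doubly stochastic mixing matrix of the network
        step        : local step size
        Q           : number of local gradient updates per communication round
        """
        n, _ = X.shape
        # Multiple local updates: each agent descends on its own objective Q times
        for i in range(n):
            for _ in range(Q):
                X[i] -= step * local_grads[i](X[i])
        # Single combination: one round of neighbor averaging per iteration
        return W @ X

    # Toy usage: n agents minimizing quadratics f_i(x) = 0.5 * ||x - b_i||^2
    rng = np.random.default_rng(0)
    n, d = 5, 3
    B = rng.normal(size=(n, d))
    grads = [lambda x, b=B[i]: x - b for i in range(n)]
    W = np.full((n, n), 1.0 / n)          # fully connected, uniform weights
    X = rng.normal(size=(n, d))
    for _ in range(50):
        X = music_round(X, grads, W, step=0.1, Q=5)
    # Agents agree (approximately) on the minimizer of the average objective
    print(np.allclose(X, B.mean(axis=0), atol=1e-2))

On a general (non-complete) topology, an inexact scheme of this kind settles into a neighborhood of the optimum rather than reaching it exactly; this is the steady-state error that the summary attributes to inexact policies and for which the paper offers solutions.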
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2024.3376421