Distributed Online Optimization Based on One-Step Gradient Descent and Multi-Step Consensus

Bibliographic Details
Published in: International Conference on Control, Automation, Robotics and Vision, pp. 840-845
Main Authors: Zhou, Yingjie; Wang, Xinyu; Li, Tao
Format: Conference Proceeding
Language: English
Published: IEEE, 12.12.2024
ISSN: 2474-963X
DOI: 10.1109/ICARCV63323.2024.10821605

Summary: We propose a distributed online optimization algorithm with continuous learning ability. In this algorithm, we first perform one-step gradient descent with a fixed step size to ensure the ability to track the optimal solutions, and then use multi-step consensus to ensure collaboration between neighboring nodes. For strongly convex and smooth objective functions, we provide a dynamic regret analysis of the proposed algorithm and show that the dynamic regret is upper bounded by the initial values, the path variation of the optimal solutions, and a linear growth term. The coefficient of the linear growth term can be made arbitrarily small by adjusting the gradient-descent step size. We also demonstrate the performance of the proposed algorithm by numerical simulations.
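
The abstract describes two phases per time step: each node takes a single gradient step on its local time-varying loss with a fixed step size, and then runs several rounds of weighted averaging with its neighbors. The following minimal sketch illustrates that structure on a ring network with quadratic local losses; the network topology, weight matrix W, losses, step size alpha, and number of consensus rounds are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' reference implementation): one fixed-step
# gradient update on each node's local loss, followed by multi-step consensus.
# All problem data below (ring network, Metropolis-style weights, quadratic
# tracking losses) are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, horizon = 5, 2, 200
alpha = 0.05          # fixed gradient step size
consensus_steps = 3   # consensus rounds per time step

# Doubly stochastic weight matrix for a ring network.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((n_nodes, dim))   # local decision variables, one row per node
for t in range(horizon):
    # Time-varying targets define strongly convex, smooth local losses
    # f_{i,t}(x) = ||x - theta_{i,t}||^2 / 2, whose optimum drifts over time.
    theta = np.sin(0.05 * t) + 0.1 * rng.standard_normal((n_nodes, dim))
    grad = x - theta           # gradient of each local loss at the current iterate

    # Step 1: one gradient-descent step with fixed step size on the local loss.
    x = x - alpha * grad

    # Step 2: multi-step consensus (weighted averaging) with neighboring nodes.
    for _ in range(consensus_steps):
        x = W @ x

print("final disagreement:", np.linalg.norm(x - x.mean(axis=0)))

With a doubly stochastic weight matrix, repeated averaging pulls the nodes' estimates toward their mean, so the fixed-step gradient updates let the network collectively track the drifting optimum, which is the mechanism behind the dynamic regret bound stated in the summary.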