Online distributed optimization with stochastic gradients: high probability bound of regrets


Bibliographic Details
Published in: Control Theory and Technology, Vol. 22, No. 3, pp. 419-430
Main Authors: Yang, Yuchen; Lu, Kaihong; Wang, Long
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.08.2024
Author affiliations: Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China; College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, Shandong, China

Summary: In this paper, the problem of online distributed optimization over a convex constraint set is studied via a network of agents. Each agent has access only to a noisy gradient of its own objective function and can communicate with its neighbors through the network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing work on online distributed algorithms involving stochastic gradients provides only expectation bounds on the regret. In contrast, we study high-probability bounds on the regret, i.e., sublinear regret bounds characterized by the natural logarithm of the inverse of the failure probability. Under mild assumptions on graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation of the minimizer sequence grows sublinearly with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
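
The summary names an online distributed stochastic mirror descent algorithm but does not spell out its update rule. The sketch below is a minimal interpretation of one round of such a scheme, not the paper's exact formulation: it assumes a Euclidean mirror map (so the mirror step reduces to a projected stochastic gradient step), a doubly stochastic mixing matrix W encoding the communication network, and an illustrative Euclidean-ball constraint set; the names odsmd_round and project_onto_ball are hypothetical.

import numpy as np

def project_onto_ball(x, radius=1.0):
    # Euclidean projection onto a ball of the given radius (example convex set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def odsmd_round(X, W, noisy_grads, step_size, project=project_onto_ball):
    """One round of the sketched update for all agents.

    X           : (n, d) array, row i is agent i's current decision x_{i,t}.
    W           : (n, n) doubly stochastic mixing matrix of the network.
    noisy_grads : (n, d) array, row i is a stochastic gradient of f_{i,t} at x_{i,t}.
    step_size   : positive scalar step size for round t.
    """
    mixed = W @ X                              # consensus step: mix neighbors' iterates
    updated = mixed - step_size * noisy_grads  # stochastic (mirror) descent step
    return np.array([project(row) for row in updated])  # stay inside the convex set

In this reading, each agent first averages its neighbors' iterates through W and then descends along its own noisy gradient before projecting back onto the constraint set; the paper's contribution is to bound the regret of this type of scheme with high probability rather than only in expectation.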
ISSN: 2095-6983, 2198-0942
DOI: 10.1007/s11768-023-00186-3