Linear convergence for distributed stochastic optimization with coupled inequality constraints

Bibliographic Details
Published in: Journal of the Franklin Institute, Vol. 362, No. 1, p. 107405
Main Authors: Du, Kaixin; Meng, Min; Li, Xiuxian
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.01.2025

Summary: This paper considers the distributed stochastic optimization problem over time-varying networks, in which agents aim to cooperatively minimize the expected value of the sum of their cost functions subject to coupled affine inequality constraints. Because physical environments impose both stochastic uncertainty and constraints on decisions, this problem arises widely in applications such as smart grids, resource allocation, and distributed machine learning. To solve it, a novel distributed stochastic primal–dual algorithm is devised by applying variance reduction and distributed tracking techniques. A complete and rigorous analysis shows that the developed algorithm converges linearly to the optimal solution in the mean square sense, and an explicit upper bound on the required constant stepsize is given. Finally, a numerical example illustrates the theoretical findings.
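To illustrate the structure of the algorithm class the summary describes, the following is a minimal sketch of a distributed primal–dual method with dynamic average tracking of a coupled affine constraint. It is not the paper's algorithm: it uses deterministic gradients (no variance reduction), a fixed fully connected mixing matrix rather than a time-varying network, and a hypothetical two-agent instance chosen so the saddle point is known in closed form.

```python
import numpy as np

# Hypothetical toy instance (not from the paper):
# agent 1 holds f1(x1) = (x1 - 3)^2, agent 2 holds f2(x2) = (x2 - 4)^2,
# coupled constraint: x1 + x2 <= 5.
# Closed-form saddle point: x* = (2, 3), dual multiplier lambda* = 2.

n = 2
W = np.full((n, n), 1.0 / n)      # doubly stochastic mixing matrix
c = np.array([3.0, 4.0])          # local cost minimizers
b = 5.0                           # right-hand side of the coupled constraint
alpha = 0.05                      # constant stepsize

x = np.zeros(n)                   # local primal variables
lam = np.zeros(n)                 # local dual estimates
s = x - b / n                     # tracker: s_i estimates (sum_j x_j - b)/n

for _ in range(3000):
    # primal descent on the local Lagrangian (constraint coefficient a_i = 1)
    x_new = x - alpha * (2.0 * (x - c) + lam)
    # dynamic average tracking of the coupled constraint value
    s = W @ s + (x_new - x)
    # dual ascent with neighbor averaging, projected onto lambda >= 0
    lam = np.maximum(0.0, W @ lam + alpha * n * s)
    x = x_new

# x converges to approximately (2, 3) and lam to approximately 2
```

The tracking variable satisfies sum(s) = sum(x) - b at every iteration (the mixing matrix is column stochastic), so each agent's local quantity n * s_i estimates the global constraint violation without any agent seeing the others' decisions directly.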
ISSN:0016-0032
DOI:10.1016/j.jfranklin.2024.107405