Distributed Online Convex Optimization with Improved Dynamic Regret
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 12.11.2019 |
| Subjects | |
| Online Access | Get full text |
| DOI | 10.48550/arxiv.1911.05127 |
Summary: In this paper, we consider the problem of distributed online convex optimization, where a group of agents collaborate to track the global minimizers of a sum of time-varying objective functions in an online manner. Specifically, we propose a novel distributed online gradient descent algorithm that relies on an online adaptation of the gradient tracking technique used in static optimization. We show that the dynamic regret bound of this algorithm has no explicit dependence on the time horizon and can therefore be tighter than existing bounds, especially for problems with long horizons. Our bound depends on a new regularity measure that quantifies the total change in the gradients at the optimal points at each time instant. Furthermore, when the optimizer is approximately subject to linear dynamics, we show that the dynamic regret bound can be further tightened by replacing the regularity measure that captures the path length of the optimizer with the accumulated prediction errors, which can be much lower in this special case. We present numerical experiments to corroborate our theoretical results.
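The summary describes the algorithm only at a high level. As a rough illustration of what an online adaptation of gradient tracking can look like, the sketch below runs the standard static gradient-tracking update on time-varying quadratic losses over a ring network and accumulates one common notion of dynamic regret. All concrete choices here (the losses, mixing matrix, step size, and regret definition) are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of distributed online gradient descent with gradient tracking.
# Assumptions: quadratic local losses with drifting minimizers, a ring network
# with lazy Metropolis weights, and a simple dynamic-regret tally. These are
# illustrative choices, not the paper's algorithm or experiments.
import numpy as np

N, d, T, alpha = 5, 3, 200, 0.1          # agents, dimension, horizon, step size

# Doubly stochastic mixing matrix for a ring graph.
W = np.eye(N) / 2
for i in range(N):
    W[i, (i - 1) % N] += 0.25
    W[i, (i + 1) % N] += 0.25

def target(t, i):
    # Slowly drifting minimizer of agent i's loss at time t (illustrative).
    return np.sin(0.05 * t + i) * np.ones(d)

def loss(t, i, x):
    return 0.5 * np.sum((x - target(t, i)) ** 2)

def grad(t, i, x):
    return x - target(t, i)

x = np.zeros((N, d))                                      # local decisions x_i^t
g = np.array([grad(0, i, x[i]) for i in range(N)])        # current local gradients
y = g.copy()                                              # trackers y_i^0 = grad f_i^0(x_i^0)

dynamic_regret = 0.0
for t in range(T):
    # Global minimizer of sum_i f_i^t for this quadratic example (regret bookkeeping only).
    x_star = np.mean([target(t, i) for i in range(N)], axis=0)
    # One common dynamic-regret notion: network loss at local iterates vs. at x_t^*.
    dynamic_regret += sum(loss(t, i, x[i]) for i in range(N)) \
                      - sum(loss(t, i, x_star) for i in range(N))

    # Consensus step plus descent along the tracked global gradient direction.
    x_new = W @ x - alpha * y
    g_new = np.array([grad(t + 1, i, x_new[i]) for i in range(N)])
    # Gradient tracking: mix neighbors' trackers, then add the local gradient
    # innovation between consecutive rounds.
    y = W @ y + g_new - g
    x, g = x_new, g_new

print(f"dynamic regret over T={T}: {dynamic_regret:.3f}")
```

Lowering the step size or slowing the drift of the minimizers shrinks the accumulated regret in this toy example, which is consistent with regret bounds that scale with how much the optimal points move over time.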