Adaptation and learning over networks under subspace constraints -- Part II: Performance Analysis
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 01.06.2019 |
Summary: | Part I of this paper considered optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in low-dimensional subspaces. Starting from the centralized projected gradient descent, an iterative and distributed solution was proposed that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. We examined the second-order stability of the learning algorithm and showed that, for small step-sizes $\mu$, the proposed strategy leads to small estimation errors on the order of $\mu$. This Part II examines steady-state performance. The results explicitly reveal the influence of the gradient noise, the data characteristics, and the subspace constraints on the network performance. They also show that, in the small step-size regime, the iterates generated by the distributed algorithm achieve the centralized steady-state performance. |
---|---|
DOI: | 10.48550/arxiv.1906.12250 |
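
For orientation, the centralized recursion the summary starts from can be written as $w_i = \mathcal{P}_{\mathcal{U}}\big(w_{i-1} - \mu\,\widehat{\nabla J}(w_{i-1})\big)$, where $\mathcal{P}_{\mathcal{U}}$ projects onto the constraint subspace and $\widehat{\nabla J}$ is a stochastic approximation of the gradient built from streaming data. Below is a minimal Python sketch of this projected stochastic-gradient recursion under those assumptions; the orthonormal basis `U`, the quadratic streaming cost, and all function names are illustrative choices for this sketch, not the paper's code.

```python
import numpy as np

def projected_sgd(grad_est, U, w0, mu=1e-2, num_iters=5000):
    """Projected stochastic gradient descent onto the subspace range(U).

    grad_est(w) returns a stochastic approximation of the true gradient;
    U has orthonormal columns, so U @ U.T is the orthogonal projector
    onto the low-dimensional constraint subspace.
    """
    P = U @ U.T            # orthogonal projector (U assumed orthonormal)
    w = P @ w0             # start from a feasible point in the subspace
    for _ in range(num_iters):
        w = P @ (w - mu * grad_est(w))  # stochastic step, then project
    return w

# Toy usage (hypothetical): minimize E|d - h^T w|^2 over a 2-D subspace
# of R^5, with gradients computed from one streaming sample at a time.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # orthonormal basis
w_true = U @ rng.standard_normal(2)               # target inside subspace

def grad_est(w):
    h = rng.standard_normal(5)                     # streaming regressor
    d = h @ w_true + 0.1 * rng.standard_normal()   # noisy measurement
    return -2 * (d - h @ w) * h                    # instantaneous gradient

w_hat = projected_sgd(grad_est, U, w0=np.zeros(5))
print("steady-state error:", np.linalg.norm(w_hat - w_true))
```

Consistent with the summary, the residual error of such a recursion does not vanish but settles at a steady-state level on the order of the step-size $\mu$, since the stochastic gradient noise keeps perturbing the iterates.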