Can Learning Be Explained By Local Optimality In Low-rank Matrix Recovery?
Main Authors:
Format: Journal Article
Language: English
Published: 21.02.2023
Summary: We explore the local landscape of low-rank matrix recovery, aiming to reconstruct a $d_1\times d_2$ matrix of rank $r$ from $m$ linear measurements, some potentially noisy. When the true rank is unknown, it is commonly overestimated, yielding an over-parameterized model of rank $k\geq r$. Recent findings suggest that first-order methods with the robust $\ell_1$-loss can recover the true low-rank solution even when the rank is overestimated and the measurements are noisy, implying that true solutions might emerge as local or global minima. Our paper challenges this notion, demonstrating that, under mild conditions, true solutions instead manifest as *strict saddle points*. We study two categories of low-rank matrix recovery, matrix completion and matrix sensing, both with the robust $\ell_1$-loss. For matrix sensing, we uncover two critical transitions: when $\max\{d_1,d_2\}r\lesssim m\lesssim \max\{d_1,d_2\}k$, none of the true solutions are local or global minima, but some become strict saddle points; once $m$ surpasses $\max\{d_1,d_2\}k$, all true solutions become global minima. In matrix completion, even with slight rank overestimation and mild noise, true solutions emerge as either non-critical points or strict saddle points.
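For concreteness, the factorized $\ell_1$ objectives described in the summary presumably take the standard Burer-Monteiro form sketched below; the sensing matrices $A_i$, measurements $y_i$, and observation set $\Omega$ are the usual notation for these problems and are an assumption here, since the abstract does not spell out the exact objectives.

$$
\min_{U\in\mathbb{R}^{d_1\times k},\,V\in\mathbb{R}^{d_2\times k}}\ \frac{1}{m}\sum_{i=1}^{m}\bigl|\langle A_i,\,UV^\top\rangle - y_i\bigr| \quad\text{(matrix sensing)},
\qquad
\min_{U,V}\ \sum_{(i,j)\in\Omega}\bigl|(UV^\top)_{ij} - y_{ij}\bigr| \quad\text{(matrix completion)},
$$

where the measurements $y$ may be corrupted by sparse noise, and a "true solution" is any factor pair with $UV^\top = M^\star$, the ground-truth rank-$r$ matrix.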
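For readers unfamiliar with the term: in the smooth setting, a critical point $x^\star$ of an objective $f$ is a strict saddle when

$$
\nabla f(x^\star)=0 \quad\text{and}\quad \lambda_{\min}\!\bigl(\nabla^2 f(x^\star)\bigr)<0,
$$

i.e. there is a direction of strictly negative curvature along which first-order methods can escape. The $\ell_1$-loss is nonsmooth, so the paper necessarily works with a generalized (directional) version of this condition; the smooth definition above is given only as intuition.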
DOI: 10.48550/arxiv.2302.10963
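As a purely illustrative sketch (not the paper's algorithm), the snippet below runs plain subgradient descent on the factorized $\ell_1$ matrix-sensing loss with an overestimated rank $k > r$; all dimensions, the step size, and the iteration count are hypothetical choices made for the example.

```python
import numpy as np

# Illustrative sketch only: subgradient descent on the l1 matrix-sensing
# loss with an overestimated rank k >= r. Sizes and step size are
# hypothetical, not taken from the paper.
rng = np.random.default_rng(0)

d1, d2, r, k, m = 20, 15, 2, 4, 600  # chosen so that m >> max(d1, d2) * k

# Ground-truth rank-r matrix and random Gaussian sensing matrices A_i.
M_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
A = rng.standard_normal((m, d1, d2))
y = np.einsum('mij,ij->m', A, M_star)  # noiseless measurements y_i = <A_i, M*>

# Over-parameterized factorization X = U V^T with rank k > r.
U = 0.1 * rng.standard_normal((d1, k))
V = 0.1 * rng.standard_normal((d2, k))

step = 1e-3
for t in range(2000):
    R = np.einsum('mij,ij->m', A, U @ V.T) - y  # residuals <A_i, UV^T> - y_i
    s = np.sign(R)                              # subgradient of |.| at each residual
    G = np.einsum('m,mij->ij', s, A) / m        # subgradient of the loss w.r.t. UV^T
    U, V = U - step * (G @ V), V - step * (G.T @ U)

print('relative error:', np.linalg.norm(U @ V.T - M_star) / np.linalg.norm(M_star))
```

With sign-based subgradients and a small constant step, this typically decreases the loss but need not converge to the truth; per the abstract, whether a true solution is even a local minimum depends on where $m$ falls relative to $\max\{d_1,d_2\}r$ and $\max\{d_1,d_2\}k$.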