Double Descent and Overfitting under Noisy Inputs and Distribution Shift for Linear Denoisers
| Main Authors | , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 26.05.2023 |
| Subjects | |
| Online Access | Get full text |
Summary: Despite the importance of denoising in modern machine learning and ample empirical work on supervised denoising, its theoretical understanding is still relatively scarce. One concern about studying supervised denoising is that one might not always have noiseless training data from the test distribution. It is more reasonable to assume access to noiseless training data from a dataset different from the test dataset. Motivated by this, we study supervised denoising and noisy-input regression under distribution shift. We add three considerations to increase the applicability of our theoretical insights to real-life data and modern machine learning. First, while most past theoretical work assumes that the data covariance matrix is full-rank and well-conditioned, empirical studies have shown that real-life data is approximately low-rank; we therefore assume that our data matrices are low-rank. Second, we drop independence assumptions on our data. Third, the rise in computational power and data dimensionality has made it important to study non-classical regimes of learning; we therefore work in the non-classical proportional regime, where the data dimension $d$ and the number of samples $N$ grow with $d/N = c + o(1)$. For this setting, we derive data-dependent, instance-specific expressions for the test error of both denoising and noisy-input regression, and study when overfitting the noise is benign, tempered, or catastrophic. We show that the test error exhibits double descent under general distribution shift, providing insights for data augmentation and the role of noise as an implicit regularizer. We also perform experiments on real-life data, where the theoretical predictions match the empirical test error to within 1% MSE for low-rank data.
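As a rough illustration of the setting the abstract describes, the following minimal sketch (not from the paper; the dimensions, noise level, rank, and shift construction are all illustrative assumptions) fits a minimum-norm linear denoiser on noisy low-rank training data and evaluates it on a shifted low-rank test distribution, sweeping $d/N$ through the interpolation threshold where a double-descent peak in test MSE typically appears in such simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 100, 10            # ambient dimension and (low) latent rank -- illustrative
sigma = 0.5               # input-noise standard deviation -- illustrative
N_test = 2000

# Low-rank train/test covariance factors; the test factor is a perturbed
# version of the train factor, emulating distribution shift.
U_train = np.linalg.qr(rng.standard_normal((d, r)))[0]
U_test = np.linalg.qr(U_train + 0.3 * rng.standard_normal((d, r)))[0]

def sample(U, n):
    """Clean low-rank samples X (n x d) and their noisy versions Y."""
    X = rng.standard_normal((n, r)) @ U.T
    return X, X + sigma * rng.standard_normal((n, d))

X_test, Y_test = sample(U_test, N_test)

for N in [25, 50, 75, 90, 100, 110, 150, 300, 1000]:
    X, Y = sample(U_train, N)
    # Linear denoiser: minimum-norm least-squares map W minimizing ||Y W - X||.
    # For N <= d this is the interpolating ("ridgeless") solution.
    W = np.linalg.pinv(Y) @ X
    mse = np.mean((Y_test @ W - X_test) ** 2)
    print(f"d/N = {d/N:5.2f}   shifted test MSE = {mse:.4f}")
```

In this kind of simulation the test MSE usually spikes near $N = d$ (the interpolation threshold, $c = 1$) and descends again as $N$ grows past $d$, which is the double-descent behavior the paper analyzes under general distribution shift.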
DOI: 10.48550/arxiv.2305.17297