Exact expressions for double descent and implicit regularization via surrogate random design
| Main Authors | , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 10.12.2019 |
Summary: Double descent refers to the phase transition exhibited by the generalization error of unregularized learning models when varying the ratio between the number of parameters and the number of training samples. The recent success of highly over-parameterized machine learning models such as deep neural networks has motivated a theoretical analysis of the double descent phenomenon in classical models, such as linear regression, which can also generalize well in the over-parameterized regime. We provide the first exact non-asymptotic expressions for double descent of the minimum norm linear estimator. Our approach involves constructing a special determinantal point process, which we call surrogate random design, to replace the standard i.i.d. design of the training sample. This surrogate design admits exact expressions for the mean squared error of the estimator while preserving the key properties of the standard design. We also establish an exact implicit regularization result for over-parameterized training samples. In particular, we show that, for the surrogate design, the implicit bias of the unregularized minimum norm estimator precisely corresponds to solving a ridge-regularized least squares problem on the population distribution. In our analysis we introduce a new mathematical tool of independent interest: the class of random matrices for which determinant commutes with expectation.
DOI: 10.48550/arxiv.1912.04533
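Below is an illustrative sketch, not taken from the paper: it simulates the double descent curve described in the summary for the minimum norm least squares estimator under a plain i.i.d. Gaussian design with a planted linear signal. The sample size `n`, the grid of parameter counts `d`, and the noise level are arbitrary illustration choices, and `np.linalg.pinv` is simply a standard way to compute the minimum norm solution; the paper itself derives exact non-asymptotic expressions under a surrogate determinantal design rather than relying on simulation.

```python
# Minimal double descent demo (illustrative assumptions: i.i.d. Gaussian
# design, planted linear signal, arbitrary constants). The test MSE of the
# minimum norm estimator X^+ y typically peaks near d/n = 1 and then
# descends again in the over-parameterized regime d/n > 1.
import numpy as np

rng = np.random.default_rng(0)
n = 40                          # number of training samples (held fixed)
n_test, n_trials, noise = 2000, 30, 0.5

for d in range(5, 125, 5):      # number of parameters, sweeping through d = n
    errs = []
    for _ in range(n_trials):
        w_star = rng.normal(size=d) / np.sqrt(d)        # planted signal
        X = rng.normal(size=(n, d))                     # i.i.d. training design
        y = X @ w_star + noise * rng.normal(size=n)
        w_hat = np.linalg.pinv(X) @ y                   # minimum norm solution
        X_te = rng.normal(size=(n_test, d))             # fresh test design
        errs.append(np.mean((X_te @ (w_hat - w_star)) ** 2))
    print(f"d/n = {d / n:5.2f}   test MSE = {np.mean(errs):.3f}")
```

The implicit regularization result mentioned in the summary can be written schematically as below; this is only a reading of the abstract, with the ridge parameter left unspecified (the paper characterizes it exactly for the surrogate design, and that characterization is not reproduced here):

```latex
% Schematic form of the implicit regularization statement: in expectation
% over the (surrogate) training design, the unregularized minimum norm
% estimator X^+ y solves a ridge-regularized least squares problem on the
% population distribution D, for some design-dependent lambda > 0.
\mathbb{E}\left[ X^{+} y \right]
  \;=\; \operatorname*{arg\,min}_{w}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[ \left( x^{\top} w - y \right)^{2} \right]
  \;+\; \lambda\,\lVert w \rVert^{2}.
```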