Generalization for Least Squares Regression With Simple Spiked Covariances
Format: Journal Article
Language: English
Published: 17.10.2024
Summary: Random matrix theory has proven to be a valuable tool in analyzing the generalization of linear models. However, the generalization properties of even two-layer neural networks trained by gradient descent remain poorly understood. To understand the generalization performance of such networks, it is crucial to characterize the spectrum of the feature matrix at the hidden layer. Recent work has made progress in this direction by describing the spectrum after a single gradient step, revealing a spiked covariance structure. Yet the generalization error for linear models with spiked covariances has not previously been determined. This paper addresses this gap by examining two simple models exhibiting spiked covariances. We derive their generalization error in the asymptotic proportional regime. Our analysis demonstrates that the eigenvector and eigenvalue corresponding to the spike significantly influence the generalization error.
DOI: 10.48550/arxiv.2410.13991
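To make the spiked-covariance setup concrete, the following is a minimal numerical sketch, not the paper's own derivation or models: it assumes a rank-one spike Sigma = I_d + theta * v v^T, draws Gaussian covariates from it at a fixed aspect ratio d/n (a finite-size stand-in for the asymptotic proportional regime), fits least squares, and measures the excess risk (beta_hat - beta)^T Sigma (beta_hat - beta). All parameter values (theta, d/n, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional regime: n and d grow together with d/n fixed.
n, d = 400, 200          # samples, features (aspect ratio d/n = 0.5; assumed)
theta = 5.0              # assumed spike strength
sigma_noise = 0.1        # assumed label-noise standard deviation

# Spiked covariance: identity plus a rank-one spike along unit vector v.
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
Sigma = np.eye(d) + theta * np.outer(v, v)

# Ground-truth linear target and noisy labels; rows of X are N(0, Sigma).
beta = rng.standard_normal(d) / np.sqrt(d)
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((n, d)) @ L.T
y = X @ beta + sigma_noise * rng.standard_normal(n)

# Least squares estimator (minimum-norm solution via the pseudoinverse).
beta_hat = np.linalg.pinv(X) @ y

# Excess generalization error under the same covariate law:
# E[(x^T (beta_hat - beta))^2] = (beta_hat - beta)^T Sigma (beta_hat - beta).
diff = beta_hat - beta
gen_error = diff @ Sigma @ diff
print(f"estimated excess generalization error: {gen_error:.4f}")
```

Sweeping theta or the aspect ratio d/n in this sketch gives a quick empirical view of how the spike's strength and direction shift the error, the quantities the paper characterizes exactly in the asymptotic limit.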