When Will Gradient Regularization Be Harmful?
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 14.06.2024 |
Summary: | Gradient regularization (GR), which penalizes the gradient norm on top of
the loss function, has shown promising results in training modern
over-parameterized deep neural networks. However, can we trust this powerful
technique? This paper reveals that GR can cause performance degradation in
adaptive optimization scenarios, particularly with learning rate warmup. Our
empirical and theoretical analyses suggest this is because GR induces
instability and divergence in the gradient statistics of adaptive optimizers
during the initial training stage. Inspired by the warmup heuristic, we propose
three GR warmup strategies, each of which relaxes the regularization effect to
a certain extent during the warmup phase to ensure that gradients are
accumulated accurately and stably. In experiments on the Vision Transformer
family, we confirm that the three GR warmup strategies effectively circumvent
these issues, thereby substantially improving model performance. Meanwhile, we
note that larger models tend to rely more on GR warmup, where performance can
be improved by up to 3% on CIFAR-10 compared to baseline GR. Code is available
at https://github.com/zhaoyang-0204/gnp. |
DOI: | 10.48550/arxiv.2406.09723 |
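
As a rough illustration of the technique the abstract describes, the following
is a minimal JAX sketch of a gradient-regularized objective. The coefficient
`lam` and the helper names (`gr_objective`, `base_loss_fn`) are assumptions for
illustration, not taken from the paper or its code:

```python
import jax
import jax.numpy as jnp


def gr_objective(params, batch, base_loss_fn, lam=0.1):
    """Gradient-regularized loss: base loss plus a penalty on the
    l2 norm of the parameter gradient (a minimal sketch)."""
    loss, grads = jax.value_and_grad(base_loss_fn)(params, batch)
    # Global l2 norm over all parameter leaves.
    grad_norm = jnp.sqrt(
        sum(jnp.sum(g ** 2) for g in jax.tree_util.tree_leaves(grads))
    )
    return loss + lam * grad_norm


# Training differentiates through gr_objective itself, which requires
# a second-order gradient that JAX supports natively:
# update_grads = jax.grad(gr_objective)(params, batch, base_loss_fn)
```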
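The abstract does not spell out the three warmup strategies. As one plausible
instantiation of "relaxing the regularization effect during warmup", the GR
coefficient could be ramped up linearly alongside the learning rate; the
schedule shape and `warmup_steps` below are assumptions:

```python
import jax.numpy as jnp


def gr_coefficient(step, warmup_steps, lam_max=0.1):
    """Ramp the GR strength from 0 to lam_max over the warmup period,
    so the adaptive optimizer's early gradient statistics are
    accumulated without the penalty term destabilizing them."""
    return lam_max * jnp.minimum(step / warmup_steps, 1.0)
```

Weakening the penalty early in training matches the failure mode the paper
identifies: the extra gradient-norm term otherwise perturbs the moment
estimates that Adam-style optimizers accumulate during their first steps.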