L4: Practical loss-based stepsize adaptation for deep learning
Format | Journal Article
Language | English
Published | 14.02.2018
Summary: | We propose a stepsize adaptation scheme for stochastic gradient descent. It operates directly with the loss function and rescales the gradient in order to make fixed predicted progress on the loss. We demonstrate its capabilities by conclusively improving the performance of the Adam and Momentum optimizers. The enhanced optimizers with default hyperparameters consistently outperform their constant-stepsize counterparts, even the best-tuned ones, without a measurable increase in computational cost. The performance is validated on multiple architectures, including dense nets, CNNs, ResNets, and the recurrent Differentiable Neural Computer, on the classical datasets MNIST, Fashion-MNIST, CIFAR10, and others. |
DOI: | 10.48550/arxiv.1802.05074 |
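As a rough illustration of the loss-based rescaling the summary describes, the sketch below scales a proposed update direction so that a first-order (linear) model of the loss predicts a decrease equal to a fixed fraction of the gap to a minimal loss value. This is a minimal sketch under stated assumptions, not the authors' reference implementation: the fraction `alpha`, the safeguard `eps`, and a known `loss_min` are illustrative choices made here for brevity; how the paper estimates and tracks these quantities is not reproduced.

```python
import numpy as np

def loss_based_step(params, grad, update_dir, loss, loss_min, alpha=0.15, eps=1e-12):
    """Rescale a proposed update so that a first-order model of the loss,
    L(params - eta * update_dir) ~= loss - eta * grad @ update_dir,
    predicts a decrease of alpha * (loss - loss_min).

    `update_dir` can be whatever a base optimizer (e.g. Adam or Momentum)
    proposes; `alpha`, `eps`, and a known `loss_min` are assumptions of this sketch.
    """
    predicted_slope = float(grad @ update_dir)       # predicted decrease per unit stepsize
    eta = alpha * (loss - loss_min) / (predicted_slope + eps)
    return params - eta * update_dir, eta

# Toy usage: quadratic loss 0.5 * ||w||^2, using the raw gradient as the direction.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
for _ in range(50):
    loss = 0.5 * float(w @ w)
    g = w                                            # gradient of the quadratic loss
    w, eta = loss_based_step(w, g, g, loss, loss_min=0.0)
print(f"final loss: {0.5 * float(w @ w):.2e}")
```

Keeping `grad` separate from `update_dir` is what lets a rule of this kind wrap an existing optimizer: the base method supplies the direction, and the stepsize is recomputed from the current loss value at every iteration.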