Deep Neural Network Learning with Second-Order Optimizers -- a Practical Study with a Stochastic Quasi-Gauss-Newton Method
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 06.04.2020 |
Summary: Training in supervised deep learning is computationally demanding, and the
convergence behavior is usually not fully understood. We introduce and study a
second-order stochastic quasi-Gauss-Newton (SQGN) optimization method that
combines ideas from stochastic quasi-Newton methods, Gauss-Newton methods, and
variance reduction to address this problem. SQGN provides excellent accuracy
without the need for experimenting with many hyper-parameter configurations,
which is often computationally prohibitive given the number of combinations and
the cost of each training process. We discuss the implementation of SQGN with
TensorFlow, and we compare its convergence and computational performance to
selected first-order methods using the MNIST benchmark and a large-scale
seismic tomography application from Earth science.

DOI: 10.48550/arxiv.2004.03040
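
As a rough illustration of the Gauss-Newton ingredient mentioned in the summary, the sketch below shows how a Gauss-Newton curvature-vector product can be computed matrix-free in TensorFlow for a least-squares loss L(theta) = 1/2 ||r(theta)||^2. This is a generic sketch under assumed names (`gauss_newton_vector_product`, `residual_fn`), not the SQGN implementation described in the paper, which additionally involves quasi-Newton updates and variance reduction.

```python
import tensorflow as tf


def gauss_newton_vector_product(residual_fn, params, vector):
    """Compute G v = J^T (J v) without forming the Jacobian J = dr/dtheta explicitly.

    residual_fn: maps the list of parameter tensors to a residual tensor r(theta).
    params:      list of tf.Variable parameters theta.
    vector:      list of tensors v with the same shapes as params.
    """
    # Forward-mode pass: Jacobian-vector product J v.
    with tf.autodiff.ForwardAccumulator(primals=params, tangents=vector) as acc:
        residuals = residual_fn(params)
    jvp = acc.jvp(residuals)

    # Reverse-mode pass: gradient of <stop_grad(J v), r(theta)> w.r.t. theta gives J^T (J v).
    with tf.GradientTape() as tape:
        residuals = residual_fn(params)
        inner = tf.reduce_sum(tf.stop_gradient(jvp) * residuals)
    return tape.gradient(inner, params)


# Hypothetical usage on a tiny linear least-squares problem r(theta) = X w - y,
# where the product reduces to X^T X v.
X = tf.random.normal([8, 3])
y = tf.random.normal([8, 1])
w = tf.Variable(tf.zeros([3, 1]))

residual_fn = lambda p: tf.matmul(X, p[0]) - y
v = [tf.ones_like(w)]
Gv = gauss_newton_vector_product(residual_fn, [w], v)  # list with one [3, 1] tensor
```

Computing curvature-vector products matrix-free in this way is what allows Gauss-Newton-type second-order methods to scale to deep networks, since the Gauss-Newton matrix itself is never formed or stored.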