Random Reshuffling with Variance Reduction: New Analysis and Better Rates

Virtually all state-of-the-art methods for training supervised machine learning models are variants of SGD enhanced with a number of additional tricks, such as minibatching, momentum, and adaptive stepsizes. One of the tricks that works so well in practice that it is used as default in virtually all...
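The paper's subject, random reshuffling (RR), replaces SGD's with-replacement sampling by a fresh permutation of the data each epoch, so every sample is visited exactly once per pass. Below is a minimal, hypothetical sketch of plain RR-SGD in Python (not the paper's variance-reduced method); the names rr_sgd and grad_i, and all parameter values, are illustrative:

    import numpy as np

    def rr_sgd(grad_i, x0, n, stepsize=0.01, epochs=10, seed=0):
        # Random reshuffling: each epoch draws a fresh permutation of the
        # n data indices and takes one SGD step per index, so every sample
        # is used exactly once per epoch (without-replacement sampling).
        rng = np.random.default_rng(seed)
        x = x0.copy()
        for _ in range(epochs):
            for i in rng.permutation(n):
                x = x - stepsize * grad_i(x, i)
        return x

    # Usage on a toy least-squares problem: f_i(x) = 0.5 * (a_i^T x - b_i)^2,
    # whose component gradient is a_i * (a_i^T x - b_i).
    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 5))
    b = A @ np.ones(5)
    x_hat = rr_sgd(lambda x, i: A[i] * (A[i] @ x - b[i]),
                   x0=np.zeros(5), n=100, stepsize=0.01, epochs=50)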

Bibliographic Details
Published in: arXiv.org
Main Authors: Malinovsky, Grigory; Sailanbayev, Alibek; Richtárik, Peter
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 19.04.2021
