Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization

The communication of gradients is a key bottleneck in distributed training of large-scale machine learning models. To reduce the communication cost, gradient compression (e.g., sparsification and quantization) and error compensation techniques are often used. In this paper, we propose and s...
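The abstract refers to gradient compression with error compensation. As a general illustration of that idea (not the specific methods proposed in the paper), here is a minimal sketch of a worker-side error-feedback step using top-k sparsification as the compressor; the function names and the choice of compressor are illustrative assumptions.

```python
import numpy as np

def topk_compress(v, k):
    """Illustrative compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def error_compensated_step(grad, error, k):
    """One error-feedback step: compress the error-corrected gradient and
    carry the residual (what compression discarded) into the next round."""
    corrected = grad + error          # add accumulated compression error
    compressed = topk_compress(corrected, k)
    new_error = corrected - compressed  # residual fed back next iteration
    return compressed, new_error

# Example: a worker sends only 3 of 10 coordinates; nothing is lost in the
# long run because the residual is re-injected at the next step.
rng = np.random.default_rng(0)
grad = rng.standard_normal(10)
compressed, residual = error_compensated_step(grad, np.zeros(10), k=3)
```

By construction, `compressed + residual` exactly recovers the error-corrected gradient, which is the invariant that error compensation relies on to preserve convergence despite lossy communication.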

Bibliographic Details
Main Authors: Qian, Xun; Dong, Hanze; Richtárik, Peter; Zhang, Tong
Format: Journal Article
Language: English
Published: 21.09.2021
