Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization
The communication of gradients is a key bottleneck in the distributed training of large-scale machine learning models. To reduce the communication cost, gradient compression (e.g., sparsification and quantization) and error compensation techniques are often used. In this paper, we propose and s...
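The error compensation mechanism the abstract refers to is simple to state: the part of a gradient lost to compression is stored locally and added back before the next compression step, so no information is discarded permanently. Below is a minimal Python sketch of this idea using top-k sparsification as the compressor; the function names and the plain SGD-style update are illustrative assumptions, not the specific error-compensated LSVRG, Quartz, or SDCA methods the paper itself develops.

```python
import numpy as np

def topk_compress(v, k):
    """Top-k sparsification: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def error_compensated_step(grad, error, k, lr, x):
    """One error-compensated update (illustrative, not the paper's algorithm):
    compress grad plus the carried error, communicate only the compressed
    vector, and keep the compression residual locally for the next round."""
    corrected = grad + error                   # add back previously lost signal
    compressed = topk_compress(corrected, k)   # what a worker would transmit
    new_error = corrected - compressed         # residual carried forward locally
    x_new = x - lr * compressed                # model update from the compressed vector
    return x_new, new_error

# Toy usage: a few rounds on a random gradient stream
rng = np.random.default_rng(0)
x, error = np.zeros(10), np.zeros(10)
for _ in range(5):
    grad = rng.normal(size=10)                 # stand-in for a stochastic gradient
    x, error = error_compensated_step(grad, error, k=3, lr=0.1, x=x)
```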
Format | Journal Article
Language | English
Published | 21.09.2021