MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 24.05.2024 |
Subjects | |
Summary: | We propose a new variant of the Adam optimizer called MicroAdam that
specifically minimizes memory overheads, while maintaining theoretical
convergence guarantees. We achieve this by compressing the gradient information
before it is fed into the optimizer state, thereby reducing its memory
footprint significantly. We control the resulting compression error via a novel
instance of the classical *error feedback* mechanism from distributed
optimization in which *the error correction information is itself compressed*
to allow for practical memory gains. We prove that the resulting approach
maintains theoretical convergence guarantees competitive with those of AMSGrad,
while providing good practical performance. Specifically, we show that
MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT)
and billion-scale (LLaMA) models, MicroAdam provides practical convergence
competitive with that of the uncompressed Adam baseline, with lower memory usage
and similar running time. Our code is available at
https://github.com/IST-DASLab/MicroAdam. |
---|---|
DOI: | 10.48550/arxiv.2405.15593 |
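
The summary describes two moving parts: the gradient is compressed before it enters the Adam-style optimizer state, and the resulting compression error is tracked via error feedback, with the error buffer itself kept in compressed form. Below is a minimal NumPy sketch of that idea only, not the authors' algorithm: the names `topk_compress`, `quantize`, and `microadam_like_step`, the choice of top-k sparsification, the 16-level quantization of the error buffer, and all hyperparameters are illustrative assumptions. The actual implementation is available at https://github.com/IST-DASLab/MicroAdam.

```python
# Minimal sketch of "compress the gradient before the optimizer state,
# with compressed error feedback". Everything here is an illustrative
# assumption, not the paper's algorithm.
import numpy as np


def topk_compress(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    mask = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    mask[idx] = 1.0
    return x * mask


def quantize(x, levels=16):
    """Crude uniform quantization standing in for a compressed error buffer."""
    scale = np.max(np.abs(x)) + 1e-12
    return np.round(x / scale * (levels - 1)) / (levels - 1) * scale


def microadam_like_step(params, grad, state, lr=1e-3, k=None,
                        beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-style step on a compressed gradient with compressed error feedback."""
    if k is None:
        k = max(1, params.size // 10)  # keep ~10% of entries (assumption)

    # Error feedback: add the residual left over from previous compressions.
    acc = grad + state["error"]
    # Only the compressed gradient is fed into the optimizer statistics.
    g = topk_compress(acc, k)
    # Quantize the new residual; a real implementation would pack it into
    # low-bit storage, which is where the memory savings come from.
    state["error"] = quantize(acc - g)

    # Standard Adam statistics and bias-corrected update on the compressed gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g * g
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return params - lr * m_hat / (np.sqrt(v_hat) + eps)


# Toy usage: minimize ||x||^2 for a 100-dimensional x (gradient is 2x).
x = np.random.randn(100)
state = {"m": np.zeros_like(x), "v": np.zeros_like(x),
         "error": np.zeros_like(x), "t": 0}
print(f"initial squared norm: {np.sum(x * x):.4f}")
for _ in range(500):
    x = microadam_like_step(x, 2 * x, state, lr=0.05)
print(f"final squared norm:   {np.sum(x * x):.4f}")
```

Even in this toy setting, the error-feedback buffer is what lets coordinates skipped by the top-k compressor accumulate their updates and eventually be applied, which is the role the summary attributes to the (compressed) error correction.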