Distributed Deep Learning in Open Collaborations
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 18.06.2021 |
Summary: Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated High-Performance Computing clusters, whose construction and maintenance are both environmentally costly and well beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid- or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with 40 participants.
DOI: 10.48550/arxiv.2106.10207
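
To make the paradigm described in the summary concrete, the sketch below illustrates the general idea of pooling compute across volunteers: each participant accumulates gradients on its own data and periodically averages them with whichever peers are reachable. This is a minimal toy illustration, not the paper's algorithm or API; `get_local_batch` and `average_with_peers` are hypothetical placeholders for a peer's data loader and a fault-tolerant gradient-averaging primitive.

```python
import torch

def collaborative_step(model, optimizer, loss_fn, get_local_batch,
                       average_with_peers, accumulation_steps=32):
    """Toy collaborative update: accumulate gradients locally, then average across peers.

    `get_local_batch` and `average_with_peers` are assumed, hypothetical callables
    supplied by the surrounding training setup.
    """
    model.train()
    optimizer.zero_grad()
    for _ in range(accumulation_steps):
        inputs, targets = get_local_batch()              # each volunteer trains on its own data shard
        loss = loss_fn(model(inputs), targets) / accumulation_steps
        loss.backward()                                  # gradients accumulate in .grad buffers
    # Exchange gradients with whichever peers are currently online; slow or
    # disconnected participants simply contribute fewer accumulated batches.
    for param in model.parameters():
        if param.grad is not None:
            param.grad = average_with_peers(param.grad)
    optimizer.step()
```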