Efficient Embedding of MPI Collectives in MXNET DAGs for scaling Deep Learning
Format: Journal Article
Language: English
Published: 19.02.2018
Summary: The availability of high-performance computing infrastructure, such as clusters of GPUs and CPUs, has fueled the growth of distributed learning systems. Deep learning frameworks express neural nets as DAGs and execute these DAGs on computation resources such as GPUs. In this paper, we propose efficient designs for embedding MPI collective operations into data-parallel DAGs; incorrect designs can easily lead to deadlocks or program crashes. In particular, we demonstrate three designs for using MPI collectives with DAGs: Funneled, Concurrent communication, and Dependency chaining. These designs automatically enable overlap of computation with communication by allowing the collectives to execute concurrently with other tasks. We implement these designs directly in the KVStore API of MXNet, which lets us leverage the rest of MXNet's infrastructure. Using the ImageNet and CIFAR datasets, we show the potential of our designs; in particular, they scale to 256 GPUs with epoch times as low as 50 seconds for the ImageNet-1K dataset.
DOI: 10.48550/arxiv.1802.06949
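
The record does not include implementation details, but as a rough illustration of the embedding idea the abstract describes, here is a minimal sketch of a Funneled-style data-parallel step: all MPI traffic is funneled through a single call site after the backward pass, while the framework's engine remains free to schedule other DAG nodes. This is an assumption-laden sketch using mpi4py and MXNet's NDArray API, not the paper's KVStore implementation; the helper `allreduce_gradients` is hypothetical.

```python
# Hedged sketch (not the paper's code): a Funneled-style data-parallel
# step in which gradients produced by the MXNet DAG are averaged across
# MPI ranks through a single Allreduce call site per gradient.
from mpi4py import MPI
import mxnet as mx

comm = MPI.COMM_WORLD

def allreduce_gradients(grads):
    """Hypothetical helper: average each gradient NDArray over all ranks."""
    for g in grads:
        buf = g.asnumpy()                      # device -> host copy
        comm.Allreduce(MPI.IN_PLACE, buf, op=MPI.SUM)
        g[:] = buf / comm.Get_size()           # averaged result, host -> device

# Possible use inside a training loop (params: list of gluon Parameters):
#   loss.backward()
#   allreduce_gradients([p.grad() for p in params])
#   trainer.step(batch_size)
```

In the paper's actual designs, this synchronization is embedded in the DAG itself (e.g., via dependency chaining in the MXNet engine) rather than called synchronously as above, which is what enables the reported overlap of communication with computation.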