DistSim: A performance model of large-scale hybrid distributed DNN training

Bibliographic Details
Main Authors: Lu, Guandong; Chen, Runzhe; Wang, Yakai; Zhou, Yangjie; Zhang, Rui; Hu, Zheng; Miao, Yanming; Cai, Zhifang; Li, Li; Leng, Jingwen; Guo, Minyi
Format: Journal Article
Language: English
Published: 14.06.2023
More Information
Summary: With the ever-increasing computational demand of DNN training workloads, distributed training has been widely adopted. A combination of data, model, and pipeline parallelism, called hybrid-parallel distributed training, has been introduced to tackle the problem of deploying large-scale models. However, evaluating a hybrid strategy and the utilization of each device remains a challenge, since existing works either profile on a real large-scale cluster at high time and monetary cost, or analyze only a single type of parallelism without considering hybrid parallelism. In this work, we propose DistSim, an event-based performance model that accurately analyzes each device's computation and communication activities with low profiling cost. DistSim breaks the model down into events according to the given distributed strategy, which can be profiled on only two nodes. DistSim then leverages the hierarchy of the different parallel strategies to generate the computation and communication event flow from the layer level to the model level, and finally the activity timeline of each device participating in training. Experiments show that DistSim achieves less than 4% error when predicting distributed training batch time and less than 5% error when predicting a single device's activity time under various hybrid strategy settings. We also provide a use case of DistSim that automatically evaluates and searches for the best distributed training strategy, finding a hybrid strategy with up to $7.37\times$ throughput improvement.
DOI: 10.48550/arxiv.2306.08423
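
The summary describes an event-based approach: per-layer computation and communication activities are profiled once on a small (two-node) setup and then composed into per-device timelines according to the chosen hybrid strategy. The Python sketch below illustrates that general idea only; the event names, durations, and the simplified GPipe-style pipeline estimate are hypothetical assumptions and do not reflect DistSim's actual API or cost model.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Event:
        """One profiled activity on a device: a layer's compute or a collective."""
        name: str
        duration_ms: float
        kind: str  # "compute" or "communication"

    def device_timeline(events: List[Event]) -> List[Tuple[str, float, float]]:
        """Lay events out back-to-back to get (name, start, end) per device."""
        timeline, t = [], 0.0
        for ev in events:
            timeline.append((ev.name, t, t + ev.duration_ms))
            t += ev.duration_ms
        return timeline

    def pipeline_batch_time(stage_events: List[List[Event]], num_microbatches: int) -> float:
        """Rough GPipe-style estimate: (stages + microbatches - 1) * slowest stage."""
        stage_times = [sum(ev.duration_ms for ev in evs) for evs in stage_events]
        return (len(stage_times) + num_microbatches - 1) * max(stage_times)

    # Hypothetical per-layer events, as if profiled on a small two-node setup.
    stage0 = [Event("embed.fwd+bwd", 3.1, "compute"),
              Event("allreduce.grad", 1.2, "communication")]
    stage1 = [Event("mlp.fwd+bwd", 2.8, "compute"),
              Event("allreduce.grad", 1.0, "communication")]

    print(device_timeline(stage0))
    print("predicted batch time (ms):", pipeline_batch_time([stage0, stage1], num_microbatches=4))

The point of the sketch is the composition step: once per-event costs are measured on a small cluster, batch-time prediction for a larger hybrid configuration reduces to assembling those events into per-device timelines, which is what makes strategy search cheap compared with profiling every candidate on the full cluster.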