Multi-Level Analysis of GPU Utilization in ML Training Workloads


Bibliographic Details
Published in: Proceedings - Design, Automation, and Test in Europe Conference and Exhibition, pp. 1-6
Main Authors: Delestrac, Paul; Battacharjee, Debjyoti; Yang, Simei; Moolchandani, Diksha; Catthoor, Francky; Torres, Lionel; Novo, David
Format: Conference Proceeding
Language: English
Published: EDAA, 25.03.2024
ISSN: 1558-1101
DOI: 10.23919/DATE58400.2024.10546769

Summary: Training time has become a critical bottleneck due to the recent proliferation of large-parameter ML models. GPUs continue to be the prevailing architecture for training ML models. However, the complex execution flow of ML frameworks makes it difficult to understand GPU computing resource utilization. Our main goal is to provide a better understanding of how efficiently ML training workloads use the computing resources of modern GPUs. To this end, we first describe an ideal reference execution of a GPU-accelerated ML training loop and identify relevant metrics that can be measured using existing profiling tools. Second, we produce a coherent integration of the traces obtained from each profiling tool. Third, we leverage the metrics within our integrated trace to analyze the impact of different software optimizations (e.g., mixed-precision, various ML frameworks, and execution modes) on the throughput and the associated utilization at multiple levels of hardware abstraction (i.e., whole GPU, SM subpartitions, issue slots, and tensor cores). In our results on two modern GPUs, we present seven takeaways and show that although close to 100% utilization is generally achieved at the GPU level, average utilization of the issue slots and tensor cores always remains below 50% and 5.2%, respectively.
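The trace-integration step described in the summary ultimately reduces to combining per-kernel profiler metrics into workload-level utilization figures. As a minimal sketch of that idea (not the authors' actual tooling; the kernel records, field names, and values below are invented for illustration), a duration-weighted average over profiled kernels could look like:

```python
# Illustrative only: combine per-kernel utilization percentages into a
# workload-level average, weighting by kernel duration so that
# long-running kernels dominate the result. All data here is made up.

def weighted_avg_utilization(kernels, metric):
    """Duration-weighted average of a per-kernel utilization metric (in %)."""
    total_time = sum(k["duration_us"] for k in kernels)
    return sum(k[metric] * k["duration_us"] for k in kernels) / total_time

kernels = [
    # duration in microseconds; utilization metrics in percent
    {"duration_us": 900.0, "issue_slot_pct": 42.0, "tensor_pct": 4.0},
    {"duration_us": 100.0, "issue_slot_pct": 10.0, "tensor_pct": 0.0},
]

print(weighted_avg_utilization(kernels, "issue_slot_pct"))  # 38.8
print(weighted_avg_utilization(kernels, "tensor_pct"))      # 3.6
```

This weighting matters because a short kernel with high utilization can otherwise mask a long, poorly utilized one; it is one plausible way such averages across hardware levels (issue slots, tensor cores) could be derived from an integrated trace.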