Compiler-Aided Type Correctness of Hybrid MPI-OpenMP Applications
Published in: IT Professional, Vol. 24, No. 2, pp. 45–51
Format: Journal Article
Language: English
Published: IEEE Computer Society, Washington, 01.03.2022
Summary: Hybrid MPI–OpenMP applications combine message-passing interface (MPI) process-level, distributed computation across many compute nodes with OpenMP shared-memory, thread-level parallelism to maximize computational efficiency. This poses challenges for the dynamic MPI correctness tool MUST and for TypeART, its memory-allocation-tracking sanitizer extension based on the LLVM compiler framework. In particular, at the thread-level granularity of a process, MPI calls and memory allocations, both of which are tracked for our analysis, can occur concurrently. To handle this situation correctly, we 1) extended our compiler extension to handle OpenMP and 2) introduced thread-safety mechanisms into our runtime libraries, thus keeping the tracked data consistent and avoiding data races. Our approach exhibits acceptable runtime and memory overheads, both typically below 30%.
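The abstract's second point, adding thread-safety mechanisms to the allocation-tracking runtime, can be illustrated with a minimal sketch. The following C++ fragment is not TypeART's actual interface; the class, function, and field names are invented for illustration. It shows one common way to keep a pointer-to-type map consistent when OpenMP threads register allocations while MPI wrapper code concurrently looks them up: a reader-writer lock around the map.

```cpp
#include <cstddef>
#include <mutex>
#include <shared_mutex>
#include <unordered_map>

// Hypothetical allocation-tracking runtime (illustrative only, not TypeART's API).
// Each tracked allocation maps its address to a type id and element count.
struct AllocInfo {
  int type_id;
  std::size_t count;
};

class AllocTracker {
 public:
  // Called from instrumented allocation sites; may run concurrently on
  // multiple OpenMP threads, so updates take an exclusive (writer) lock.
  void on_alloc(const void* addr, int type_id, std::size_t count) {
    std::unique_lock<std::shared_mutex> lock(mutex_);
    allocs_[addr] = AllocInfo{type_id, count};
  }

  // Called from instrumented deallocation sites.
  void on_free(const void* addr) {
    std::unique_lock<std::shared_mutex> lock(mutex_);
    allocs_.erase(addr);
  }

  // Called from MPI wrappers to check the type of a message buffer;
  // a shared (reader) lock allows concurrent lookups without data races.
  bool lookup(const void* addr, AllocInfo& out) const {
    std::shared_lock<std::shared_mutex> lock(mutex_);
    auto it = allocs_.find(addr);
    if (it == allocs_.end()) return false;
    out = it->second;
    return true;
  }

 private:
  mutable std::shared_mutex mutex_;
  std::unordered_map<const void*, AllocInfo> allocs_;
};
```

A reader-writer lock is only one possible mechanism; the paper states that thread-safety mechanisms were introduced but does not prescribe this particular design in the abstract.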
ISSN: 1520-9202, 1941-045X
DOI: 10.1109/MITP.2021.3093949