Process Distance-Aware Adaptive MPI Collective Communications

Bibliographic Details
Published in: 2011 IEEE International Conference on Cluster Computing, pp. 196 - 204
Main Authors: Teng Ma, Herault, T., Bosilca, G., Dongarra, J. J.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2011
Summary: Message Passing Interface (MPI) implementations provide great flexibility, allowing users to bind processes to computing cores arbitrarily in order to fully exploit clusters of multicore/many-core nodes. Intelligent process placement can optimize application performance for the underlying hardware architecture and the application's communication pattern. However, such static process placement optimization cannot help MPI collective communication, whose topology changes dynamically with the membership of each communicator. Instead, a mismatch between the collective communication topology, the underlying hardware architecture, and the process placement often arises because of MPI's limited ability to deal with complex environments. This paper proposes an adaptive collective communication framework that combines process distance, the underlying hardware topology, and the runtime communicator. From this information, an optimal communication topology is generated to guarantee maximum bandwidth for each MPI collective operation regardless of process placement. Based on this framework, two distance-aware adaptive intra-node collective operations (Broadcast and Allgather) are implemented as examples inside Open MPI's KNEM collective component. Awareness of process distance helps these two operations construct optimal runtime topologies and balance memory accesses across memory nodes. Experiments show that these two distance-aware collective operations deliver better and more stable performance than the current collectives in Open MPI, regardless of process placement.
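
To illustrate the general idea described in the summary (grouping ranks by process distance before building a collective topology), the C sketch below uses hwloc to determine each rank's NUMA node and then stages a broadcast in two levels: first across NUMA-node leaders, then within each NUMA node. This is only a rough illustration, not the paper's KNEM-based Open MPI implementation; the use of MPI_Comm_split_type (an MPI-3 call), the single-node assumption, and the two-stage pattern are illustrative choices.

/* Illustrative sketch only: distance-aware two-level broadcast.
 * Assumes all ranks run on one shared-memory node (the paper's
 * intra-node setting). Build with: mpicc bcast_sketch.c -lhwloc */
#include <mpi.h>
#include <hwloc.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Ranks sharing a node (MPI-3 feature, used here for illustration). */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Ask hwloc which NUMA node this process is bound to. */
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);
    hwloc_bitmap_t cpuset = hwloc_bitmap_alloc();
    hwloc_bitmap_t nodeset = hwloc_bitmap_alloc();
    hwloc_get_cpubind(topo, cpuset, HWLOC_CPUBIND_PROCESS);
    hwloc_cpuset_to_nodeset(topo, cpuset, nodeset);
    int numa_id = hwloc_bitmap_first(nodeset);
    if (numa_id < 0) numa_id = 0;   /* unbound or no NUMA info: one group */

    /* Group ranks by NUMA node so each group's traffic stays on local memory. */
    MPI_Comm numa_comm;
    MPI_Comm_split(node_comm, numa_id, node_rank, &numa_comm);
    int numa_rank;
    MPI_Comm_rank(numa_comm, &numa_rank);

    /* Leaders (rank 0 of each NUMA group) form their own communicator. */
    MPI_Comm leader_comm;
    MPI_Comm_split(node_comm, numa_rank == 0 ? 0 : MPI_UNDEFINED,
                   node_rank, &leader_comm);

    /* Stage 1: broadcast across NUMA-node leaders.
     * Stage 2: broadcast within each NUMA node. */
    int payload = (world_rank == 0) ? 42 : -1;
    if (leader_comm != MPI_COMM_NULL)
        MPI_Bcast(&payload, 1, MPI_INT, 0, leader_comm);
    MPI_Bcast(&payload, 1, MPI_INT, 0, numa_comm);

    printf("rank %d on NUMA node %d got %d\n", world_rank, numa_id, payload);

    hwloc_bitmap_free(cpuset);
    hwloc_bitmap_free(nodeset);
    hwloc_topology_destroy(topo);
    if (leader_comm != MPI_COMM_NULL) MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&numa_comm);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

The sketch only mirrors the grouping-by-memory-node idea; the paper's actual implementation operates inside Open MPI's KNEM collective component and additionally balances memory accesses across memory nodes.
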
ISBN: 9781457713552, 1457713551
ISSN: 1552-5244, 2168-9253
DOI: 10.1109/CLUSTER.2011.30