Productive Cluster Programming with OmpSs

Bibliographic Details
Published in: Euro-Par 2011 Parallel Processing, pp. 555–566
Main Authors: Bueno, Javier; Martinell, Luis; Duran, Alejandro; Farreras, Montse; Martorell, Xavier; Badia, Rosa M.; Ayguade, Eduard; Labarta, Jesús
Format: Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2011
Series: Lecture Notes in Computer Science
Summary: Clusters of SMPs are ubiquitous and have traditionally been programmed using MPI. However, the productivity of MPI programmers is low because of the complexity of expressing parallelism and communication and the difficulty of debugging. To ease the burden on the programmer, new programming models have tried to give the illusion of a global shared address space (e.g., UPC, Co-array Fortran). Unfortunately, these models do not support the increasingly common irregular forms of parallelism that require asynchronous task parallelism. Other models, such as X10 or Chapel, provide this asynchronous parallelism, but the programmer is required to rewrite the application entirely. We present the implementation of OmpSs for clusters, a variant of OpenMP extended to support asynchrony, heterogeneity, and data movement for task parallelism. Like OpenMP, it is based on decorating an existing serial version with compiler directives, which are translated into calls to a runtime system that manages parallelism extraction, data coherence, and data movement. Thus, the same OmpSs program can run on a regular SMP machine or on a cluster of SMPs, and the serial version can be used for debugging. The runtime uses the information provided by the programmer to distribute the work across the cluster while optimizing communication through affinity scheduling and data caching. We have evaluated our proposal with a set of kernels; the OmpSs versions achieve performance comparable, or even superior, to that of the equivalent MPI versions.
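
As a rough illustration of the directive-based style the summary describes (this sketch is not taken from the paper, and the array-section clause syntax may vary between OmpSs releases), a blocked vector scaling could be taskified as follows:

/*
 * Illustrative OmpSs-style sketch (not from the paper): a blocked vector
 * scaling where each block update is a task. The inout clause declares the
 * data each task touches, so the runtime can build the dependence graph and,
 * on a cluster, place tasks near cached copies of their blocks.
 * The a[start;length] array-section notation follows OmpSs conventions; if
 * the pragmas are ignored by a plain C compiler, the code simply runs as the
 * original serial version.
 */
#include <stdio.h>

#define N  1024
#define BS  128

static double a[N];

void scale_block(double *block, int n, double factor)
{
    for (int i = 0; i < n; i++)
        block[i] *= factor;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    for (int j = 0; j < N; j += BS) {
        /* One task per block; the runtime moves a[j;BS] to whichever node
           runs the task and keeps the copies coherent. */
        #pragma omp task inout(a[j;BS])
        scale_block(&a[j], BS, 2.0);
    }

    /* Wait for all tasks (and the data transfers back) before the serial
       code reads the results. */
    #pragma omp taskwait

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}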
ISBN: 9783642233999, 3642233996, 9783642297403, 3642297404
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-642-23400-2_52