Empirical performance model-driven data layout optimization and library call selection for tensor contraction expressions

Bibliographic Details
Published in: Journal of Parallel and Distributed Computing, Vol. 72, No. 3, pp. 338-352
Main Authors: Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram; Baumgartner, Gerald; Ramanujam, J.; Sadayappan, P.
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.03.2012

Summary: Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined in ATLAS by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to select library calls and choose data layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.

Highlights:
► Performance of tensor contraction code depends on layout and DGEMM parameters.
► Dynamic programming algorithm optimizes layout and selects library calls.
► Compile-time performance model uses empirically determined cost components.
► Measurements show this approach is effective on both clusters and multi-cores.
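The idea of combining empirically measured per-operation costs with a dynamic program over layout choices can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the layout set, the cost tables, and the uniform transpose cost below are all hypothetical stand-ins for empirical measurements of DGEMM variants and layout transformations.

```python
# Illustrative sketch of layout selection via dynamic programming.
# All numeric costs are hypothetical placeholders for empirical timings.

LAYOUTS = ["row", "col"]

# step_cost[i][layout]: assumed empirical cost of the best library call
# (e.g., a DGEMM variant) for contraction step i with inputs in `layout`.
step_cost = [
    {"row": 5.0, "col": 7.0},
    {"row": 6.0, "col": 3.0},
    {"row": 4.0, "col": 4.5},
]

# Assumed cost of converting data between layouts between steps.
TRANSPOSE_COST = 2.0


def optimize_layouts(step_cost, transpose_cost):
    """Return (minimum total cost, chosen layout per step).

    Dynamic program: the state after step i is the layout the data is in;
    moving between layouts pays `transpose_cost`.
    """
    # best[layout] = (cheapest cost so far, layout sequence) ending in `layout`
    best = {l: (step_cost[0][l], [l]) for l in LAYOUTS}
    for costs in step_cost[1:]:
        new_best = {}
        for l in LAYOUTS:
            candidates = []
            for prev, (cost_so_far, seq) in best.items():
                trans = 0.0 if prev == l else transpose_cost
                candidates.append((cost_so_far + trans + costs[l], seq + [l]))
            new_best[l] = min(candidates, key=lambda t: t[0])
        best = new_best
    return min(best.values(), key=lambda t: t[0])


cost, layouts = optimize_layouts(step_cost, TRANSPOSE_COST)
print(cost, layouts)
```

Because the cost of each step depends only on the chosen layout (and the previous one, through the transformation cost), the search over all layout sequences collapses to a per-step minimization, which is what makes a compile-time dynamic program tractable.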
ISSN: 0743-7315
EISSN: 1096-0848
DOI: 10.1016/j.jpdc.2011.09.006