A Tight I/O Lower Bound for Matrix Multiplication

Bibliographic Details
Published in: arXiv.org
Main Authors: Smith, Tyler Michael; Lowery, Bradley; Langou, Julien; van de Geijn, Robert A.
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 06.02.2019

Summary: A tight lower bound on the I/O required to compute an ordinary matrix-matrix multiplication (MMM) on a processor with two layers of memory is established. Prior work obtained weaker lower bounds by reasoning about the number of segments needed to perform \(C := AB\), for distinct matrices \(A\), \(B\), and \(C\), where each segment is a series of operations involving \(M\) reads from and writes to fast memory, and \(M\) is the size of fast memory. A lower bound on the number of segments was then determined by obtaining an upper bound on the number of elementary multiplications performed per segment. This paper follows the same high-level approach, but improves the lower bound by (1) transforming algorithms for MMM so that they perform all computation via fused multiply-add instructions (FMAs), which allows the argument to account only for the cost of reading the matrices, and (2) decoupling the per-segment I/O cost from the size of fast memory. For \(n \times n\) matrices, the leading-order term of the lower bound is \(2n^3/\sqrt{M}\). A theoretical algorithm whose leading term attains this bound is introduced, and the extent to which the state-of-the-art Goto's algorithm attains the lower bound is discussed.
ISSN: 2331-8422
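
As a rough accounting sketch that is not taken from the paper itself, the displayed calculation below shows why a classical blocked MMM that keeps a \(b \times b\) block of \(C\) resident in fast memory, with block size \(b \approx \sqrt{M}\) (an assumed simplification that ignores the space needed for the panels of \(A\) and \(B\)), reads about \(2n^3/\sqrt{M}\) elements of \(A\) and \(B\), matching the leading-order term quoted in the summary.

\[
\underbrace{\left(\tfrac{n}{b}\right)^{2}}_{\text{blocks of } C}
\times
\underbrace{2nb}_{\text{one panel of } A \text{ plus one panel of } B}
\;=\;
\frac{2n^{3}}{b}
\;\approx\;
\frac{2n^{3}}{\sqrt{M}},
\qquad b \approx \sqrt{M}.
\]

The \(2n^{2}\) reads and writes of \(C\) itself are lower order, which is consistent with the summary's emphasis on the cost of reading the input matrices once all computation is performed via FMAs.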