Solving "large" dense matrix problems on multi-core processors

Bibliographic Details
Published in 2009 IEEE International Symposium on Parallel & Distributed Processing, pp. 1-8
Main Authors Marques, M., Quintana-Orti, G., Quintana-Orti, E.S., van de Geijn, R.A.
Format Conference Proceeding
Language English
Published IEEE, 01.05.2009

Summary: Few realize that for large matrices dense matrix computations achieve nearly the same performance when the matrices are stored on disk as when they are stored in a very large main memory. Similarly, few realize that, given the right programming abstractions, coding Out-of-Core (OOC) implementations of dense linear algebra operations (where data resides on disk and has to be explicitly moved in and out of main memory) is no more difficult than programming high-performance implementations for the case where the matrix is in memory. Finally, few realize that on a contemporary eight-core architecture one can solve a 100,000 × 100,000 dense symmetric positive definite linear system in about an hour. Thus, for problems that used to be considered large, it is not necessary to utilize distributed-memory architectures with massive memories if one is willing to wait longer for the solution to be computed on a fast multithreaded architecture such as an SMP or multi-core computer. This paper provides evidence in support of these claims.
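
A dense symmetric positive definite system is typically solved via a Cholesky factorization, and the OOC idea in the abstract is to keep the matrix on disk while tiles are brought into memory, updated, and written back. The following is a minimal sketch of a blocked, disk-resident Cholesky factorization in Python; it is not the authors' implementation, and the file name, matrix order, and block size are illustrative assumptions, with numpy.memmap standing in for the explicit disk I/O an OOC library would manage.

import numpy as np

n, nb = 2048, 256                        # matrix order and block size (illustrative)
fname = "spd_matrix.dat"                 # hypothetical file holding A on disk

# Build a disk-resident SPD matrix A = B B^T + n I for demonstration purposes.
B = np.random.rand(n, n)
A = np.memmap(fname, dtype=np.float64, mode="w+", shape=(n, n))
A[:] = B @ B.T + n * np.eye(n)
A.flush()

# Blocked right-looking Cholesky, factoring A in place: on completion the
# lower triangle of the file holds L with A = L L^T.
for k in range(0, n, nb):
    kb = min(nb, n - k)
    # Read the diagonal block into memory, factor it, write it back.
    Lkk = np.linalg.cholesky(np.array(A[k:k+kb, k:k+kb]))
    A[k:k+kb, k:k+kb] = Lkk
    if k + kb < n:
        # Panel update: L21 = A21 * L11^{-T}.
        A21 = np.array(A[k+kb:, k:k+kb])
        L21 = np.linalg.solve(Lkk, A21.T).T
        A[k+kb:, k:k+kb] = L21
        # Trailing symmetric update: A22 := A22 - L21 * L21^T
        # (a true OOC code would apply this update tile by tile).
        A[k+kb:, k+kb:] -= L21 @ L21.T
    A.flush()

A production OOC solver of the kind the paper describes would keep only a few tiles in memory at a time and would follow the factorization with two blocked triangular solves to obtain the solution; the abstract's point is that, given suitable abstractions, such code is no harder to write than its in-memory counterpart.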
ISBN: 9781424437511, 1424437512
ISSN: 1530-2075
DOI: 10.1109/IPDPS.2009.5161162