Adaptive SpMV/SpMSpV on GPUs for Input Vectors of Varied Sparsity
Format: Journal Article
Language: English
Published: 30.06.2020
Summary: Despite numerous efforts to optimize the performance of Sparse Matrix-Vector Multiplication (SpMV) on modern hardware architectures, little work has been done on its sparse counterpart, Sparse Matrix-Sparse Vector Multiplication (SpMSpV), let alone on handling input vectors of varied sparsity. The key challenge is that the optimal choice of SpMV/SpMSpV kernel varies with the sparsity levels, the distribution of data, and the compute platform, so a static choice does not suffice. In this paper, we propose an adaptive SpMV/SpMSpV framework that automatically selects the appropriate SpMV/SpMSpV kernel on GPUs for any sparse matrix and vector at runtime. Based on a systematic analysis of key factors such as computing pattern, workload distribution, and write-back strategy, eight candidate SpMV/SpMSpV kernels are encapsulated into the framework to achieve high performance in a seamless manner. A comprehensive study of machine-learning-based kernel selection is performed, from both accuracy and overhead perspectives, to choose the kernel and adapt to variations in both the input and the hardware. Experiments demonstrate that the adaptive framework substantially outperforms the previous state of the art in real-world applications on NVIDIA Tesla K40m, P100, and V100 GPUs.
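To illustrate why the kernel choice depends on the input vector's sparsity, the following is a minimal, CPU-side Python sketch. It is not the paper's framework: the paper uses eight GPU kernels and a machine-learning selector, whereas here a single hypothetical density threshold dispatches between a row-wise SpMV over a CSR matrix (efficient for dense input vectors) and a column-wise SpMSpV over a CSC matrix (which touches only the columns where the input vector is nonzero). All function names and the `threshold` parameter are illustrative assumptions.

```python
def spmv_csr(n_rows, indptr, indices, data, x):
    # Row-wise SpMV over a CSR matrix with a dense input vector x:
    # every stored matrix entry is visited once.
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def spmspv_csc(n_rows, indptr, indices, data, x_idx, x_val):
    # Column-wise SpMSpV over a CSC matrix: only the columns matching
    # nonzero input-vector entries are visited, so work scales with
    # the vector's nonzero count rather than the whole matrix.
    y = [0.0] * n_rows
    for j, v in zip(x_idx, x_val):
        for k in range(indptr[j], indptr[j + 1]):
            y[indices[k]] += data[k] * v
    return y

def adaptive_multiply(csr, csc, x_idx, x_val, n_cols, threshold=0.1):
    # Toy selector: a fixed density cut-off stands in for the paper's
    # learned, per-platform kernel selector (hypothetical value).
    density = len(x_idx) / n_cols
    if density > threshold:
        x_dense = [0.0] * n_cols
        for j, v in zip(x_idx, x_val):
            x_dense[j] = v
        return spmv_csr(*csr, x_dense)
    return spmspv_csc(*csc, x_idx, x_val)
```

Both paths compute the same product; the selector only trades the cost of scanning the whole matrix against the cost of scattered, column-wise accumulation, which is the trade-off the adaptive framework automates on GPUs.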
DOI: 10.48550/arxiv.2006.16767