Using fixed memory blocks in GPUs to accelerate SpMV multiplication in probabilistic model checkers
Published in | Journal of Logical and Algebraic Methods in Programming, Vol. 147, p. 101073
---|---
Main Authors |
Format | Journal Article
Language | English
Published | Elsevier Inc., 01.09.2025
Summary: Probabilistic model checkers rely heavily on sparse matrix-vector multiplication (SpMV) to analyze a given probabilistic model. SpMV is a compute- and memory-intensive task and therefore adversely affects the scalability of probabilistic model checkers. Graphical processing units (GPUs) have been utilized to improve the speed of SpMV. The GPU-based SpMV compute time consists of two independent factors: (Factor 1) the host-to-GPU memory transfer and (Factor 2) the actual GPU-based SpMV multiplication. While many researchers have noted the importance of Factor 1, none have explored ways to minimize its impact on the overall SpMV computation time.
This paper proposes an approach that reduces memory-transfer latency by hiding the host-to-GPU data transfer inside the state-space exploration step of probabilistic model checking.
This is achieved in two steps: (1) reserve the complete coalesced memory block on the GPU, and (2) move chunks of the sparse matrix from the host into the reserved memory while state-space exploration is still running.
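As a rough illustration of these two steps, the sketch below reserves one contiguous device block up front and then fills it chunk by chunk with asynchronous copies that overlap the CPU-side exploration. All function names, the block layout, and the pinned-memory/stream setup are assumptions made for illustration; this is not the paper's actual Storm integration.

```cpp
// Hypothetical sketch of the two-step scheme (illustrative only).
#include <cuda_runtime.h>

// Step 1: reserve one contiguous device block up front, large enough to
// hold the CSR row offsets, column indices, and values of the full matrix.
char* reserve_device_block(size_t totalBytes) {
    char* d_block = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d_block), totalBytes);
    return d_block;
}

// Step 2: while the host is still exploring the state space, copy each
// completed chunk of the sparse matrix into the reserved block. With a
// dedicated stream and pinned host buffers, the copy overlaps with the
// CPU-side exploration instead of adding latency afterwards.
void stage_chunk_async(char* d_block, size_t offsetBytes,
                       const void* h_pinnedChunk, size_t chunkBytes,
                       cudaStream_t copyStream) {
    cudaMemcpyAsync(d_block + offsetBytes, h_pinnedChunk, chunkBytes,
                    cudaMemcpyHostToDevice, copyStream);
    // The call returns immediately; the host keeps exploring states.
    // A single cudaStreamSynchronize(copyStream) before the first SpMV
    // guarantees that all staged chunks have arrived on the device.
}
```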
We report on an open-source prototype implementation of our approach, built on the CUDA-based cuSPARSE API, in Storm, a prominent probabilistic model checker.
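For context, a minimal cuSPARSE SpMV call over CSR data that is assumed to have already been staged on the device might look as follows. This is a generic sketch of the cuSPARSE generic API; the function and variable names are hypothetical and it is not taken from the paper's or Storm's code.

```cpp
// Minimal cuSPARSE SpMV sketch over pre-staged CSR data (illustrative only).
#include <cuda_runtime.h>
#include <cusparse.h>

void spmv_once(cusparseHandle_t handle,
               int64_t rows, int64_t cols, int64_t nnz,
               int* d_rowOffsets, int* d_colIndices, double* d_values,
               double* d_x, double* d_y) {
    const double alpha = 1.0, beta = 0.0;

    // Wrap the pre-staged CSR arrays and the dense vectors in descriptors.
    cusparseSpMatDescr_t matA;
    cusparseDnVecDescr_t vecX, vecY;
    cusparseCreateCsr(&matA, rows, cols, nnz,
                      d_rowOffsets, d_colIndices, d_values,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_64F);
    cusparseCreateDnVec(&vecX, cols, d_x, CUDA_R_64F);
    cusparseCreateDnVec(&vecY, rows, d_y, CUDA_R_64F);

    // Query the workspace size, allocate it, and compute y = A * x.
    size_t bufferSize = 0;
    void* d_buffer = nullptr;
    cusparseSpMV_bufferSize(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, vecX, &beta, vecY,
                            CUDA_R_64F, CUSPARSE_SPMV_ALG_DEFAULT, &bufferSize);
    cudaMalloc(&d_buffer, bufferSize);
    cusparseSpMV(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, vecX, &beta, vecY,
                 CUDA_R_64F, CUSPARSE_SPMV_ALG_DEFAULT, d_buffer);

    // Release per-call resources (an iterative solver would reuse them).
    cudaFree(d_buffer);
    cusparseDestroySpMat(matA);
    cusparseDestroyDnVec(vecX);
    cusparseDestroyDnVec(vecY);
}
```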
We empirically demonstrate that our approach reduces memory-transfer latency by at least one order of magnitude. Additionally, for most of the benchmarks, our approach achieves computation times comparable to those of GPU-Prism, a prominent probabilistic model checker.
Highlights:
• Sparse matrix-vector multiplication (SpMV) is a computational bottleneck in probabilistic model checkers.
• The literature proposes using GPUs to accelerate SpMV.
• Host-to-GPU memory-transfer latency is an important factor in GPU-based SpMV.
• Transfers can be further optimized by introducing a fixed memory scheme that avoids a secondary copy inside the GPU.
ISSN: 2352-2208
DOI: 10.1016/j.jlamp.2025.101073