Automatic generation of ARM NEON micro-kernels for matrix multiplication
| Published in | The Journal of Supercomputing, Vol. 80, No. 10, pp. 13873–13899 |
|---|---|
| Main Authors | , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 2024; Springer Nature B.V. |
| Subjects | |
Summary: General matrix multiplication (gemm) is a fundamental kernel in scientific computing and in current deep-learning frameworks. Modern realisations of gemm are mostly written in C, on top of a small, highly tuned micro-kernel that is usually encoded in assembly. High-performance realisations of gemm in linear-algebra libraries generally include a single micro-kernel per architecture, usually implemented by an expert. In this paper, we explore two paths to automatically generate gemm micro-kernels: C++ templates with vector intrinsics, and high-level Python scripts that directly produce assembly code. Both solutions can integrate high-performance software techniques, such as loop unrolling and software pipelining, accommodate any data type, and easily generate micro-kernels of any requested dimension. The performance of this solution is tested on three ARM-based cores and compared with state-of-the-art libraries for these processors: BLIS, OpenBLAS and ArmPL. The experimental results show that the auto-generation approach is highly competitive, mainly due to the possibility of adapting the micro-kernel to the problem dimensions.
ISSN: 0920-8542; 1573-0484
DOI: 10.1007/s11227-024-05955-8