High Performance and Portable Convolution Operators for ARM-based Multicore Processors
Format: Journal Article
Language: English
Published: 13.05.2020
DOI: 10.48550/arxiv.2005.06410
Summary: The considerable impact of Convolutional Neural Networks on many Artificial Intelligence tasks has led to the development of various high performance algorithms for the convolution operator present in this type of network. One of these approaches leverages the IM2COL transform followed by a general matrix multiplication (GEMM) in order to take advantage of the highly optimized realizations of the GEMM kernel in many linear algebra libraries. The main problems of this approach are 1) the large memory workspace required to host the intermediate matrices generated by the IM2COL transform; and 2) the time to perform the IM2COL transform, which is not negligible for complex neural networks. This paper presents a portable high performance convolution algorithm, based on the BLIS realization of the GEMM kernel, that avoids the intermediate workspace by exploiting the internal structure of BLIS. In addition, the proposed algorithm eliminates the cost of the explicit IM2COL transform, while maintaining the portability and performance of the underlying realization of GEMM in BLIS.
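To illustrate the baseline approach the abstract criticizes, the following is a minimal sketch of IM2COL-plus-GEMM convolution. It is not the paper's algorithm (which fuses the transform into the BLIS GEMM packing routines to avoid this workspace); it uses NumPy's matrix multiply in place of a BLIS GEMM call, assumes stride 1 and no padding, and the function names `im2col` and `conv_via_gemm` are made up for this example. Note how `cols` materializes the large intermediate matrix that the paper's fused algorithm eliminates.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, Ho*Wo) matrix of patches."""
    C, H, W = x.shape
    Ho, Wo = H - kh + 1, W - kw + 1  # output size for stride 1, no padding
    cols = np.empty((C * kh * kw, Ho * Wo), dtype=x.dtype)
    row = 0
    for c in range(C):
        for i in range(kh):
            for j in range(kw):
                # All output positions see x[c, i0+i, j0+j]; gather them at once.
                cols[row] = x[c, i:i + Ho, j:j + Wo].reshape(-1)
                row += 1
    return cols

def conv_via_gemm(x, w):
    """Convolve (C, H, W) input with (K, C, kh, kw) filters via one GEMM."""
    K, C, kh, kw = w.shape
    cols = im2col(x, kh, kw)          # the large intermediate workspace
    out = w.reshape(K, -1) @ cols     # single GEMM: (K, C*kh*kw) x (C*kh*kw, Ho*Wo)
    Ho, Wo = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    return out.reshape(K, Ho, Wo)
```

The workspace `cols` holds `C*kh*kw*Ho*Wo` elements, i.e. each input pixel is replicated up to `kh*kw` times, which is the memory overhead (problem 1) the paper targets, and building it is the transform cost (problem 2).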