Energy-Efficient Convolution Architecture Based on Rescheduled Dataflow

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 65, No. 12, pp. 4196-4207
Main Authors: Jo, Jihyuck; Kim, Suchang; Park, In-Cheol
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2018
Summary: This paper presents a rescheduled dataflow for convolution and its hardware architecture that enhance energy efficiency. For convolution, which involves a large amount of computation and memory access, previous accelerators employed parallel processing elements to meet real-time constraints. Although these approaches succeeded in implementing complex convolution models, they load the same input features and filter weights from on-chip memories multiple times because of the iterative nature of convolution, and therefore suffer from high energy consumption. To mitigate such redundant memory accesses, a novel dataflow is proposed that computes convolution incrementally so as to reuse the loaded data as much as possible. In addition, several convolution accelerators supporting the rescheduled dataflow are investigated, and qualitative and quantitative analyses are performed to suggest a promising candidate for various convolution models. Simulation results show that the energy efficiency of the proposed accelerator significantly outperforms that of the previous accelerator.
ISSN: 1549-8328, 1558-0806
DOI: 10.1109/TCSI.2018.2840092
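
The reuse principle described in the summary, loading a value once and accumulating its partial products incrementally across all outputs it affects, can be sketched in a few lines of Python. The code below is a generic loop-reordering illustration under assumed names (conv2d_output_centric, conv2d_input_stationary); it is not the paper's rescheduled dataflow or its hardware architecture, only a software analogue of the idea of trading repeated fetches for incremental accumulation.

```python
# Illustrative sketch only: a generic "input-stationary" reordering of the
# 2-D convolution loops, NOT the paper's specific rescheduled dataflow.
import numpy as np

def conv2d_output_centric(x, w):
    """Naive schedule: every output gathers its own inputs, so each input
    pixel is re-read up to K*K times (once per overlapping window)."""
    H, W = x.shape
    K = w.shape[0]
    y = np.zeros((H - K + 1, W - K + 1))
    for oy in range(y.shape[0]):
        for ox in range(y.shape[1]):
            for ky in range(K):
                for kx in range(K):
                    y[oy, ox] += x[oy + ky, ox + kx] * w[ky, kx]
    return y

def conv2d_input_stationary(x, w):
    """Reordered schedule: each input pixel is loaded once and its partial
    products are accumulated incrementally into every affected output."""
    H, W = x.shape
    K = w.shape[0]
    y = np.zeros((H - K + 1, W - K + 1))
    for iy in range(H):
        for ix in range(W):
            px = x[iy, ix]                      # single load of this input
            for ky in range(K):
                for kx in range(K):
                    oy, ox = iy - ky, ix - kx   # outputs this pixel feeds
                    if 0 <= oy < y.shape[0] and 0 <= ox < y.shape[1]:
                        y[oy, ox] += px * w[ky, kx]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w = rng.standard_normal((8, 8)), rng.standard_normal((3, 3))
    assert np.allclose(conv2d_output_centric(x, w),
                       conv2d_input_stationary(x, w))
```

Both schedules produce identical results; they differ only in access order. In a hardware realization, such a reordering typically trades extra on-chip accumulator storage for fewer reads of the feature and weight memories, which is the energy trade-off the abstract targets.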