NetPU: Prototyping a Generic Reconfigurable Neural Network Accelerator Architecture
Published in: 2022 International Conference on Field-Programmable Technology (ICFPT), p. 1
Format: Conference Proceeding
Language: English
Published: IEEE, 05.12.2022
Summary: FPGA-based Neural Network (NN) accelerators are a rapidly advancing subject of recent research. Related works can be classified into two hardware architectures: i) the Heterogeneous Streaming Dataflow (HSD) architecture and ii) the Processing Element Matrix (PEM) architecture. The HSD architecture exploits the reconfigurability of FPGAs to customize and optimize the hardware design, implementing a complete network on the FPGA for one given trained model. The PEM architecture achieves relatively generic support for different network models by implementing neuron processing modules on the FPGA that are scheduled by a runtime software environment. In summary, the HSD architecture requires more resources but needs only simplified runtime software control. The PEM architecture consumes fewer resources than the HSD architecture; however, its runtime software environment can be a heavy payload for lightweight systems such as the low-power microcontrollers of IoT or edge devices.
DOI: 10.1109/ICFPT56656.2022.9974206
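As a rough, hypothetical illustration of the trade-off described in the summary (not code from the paper), the sketch below simulates a PEM-style design in plain C. A fixed PE array (the assumed function pe_array_run, with assumed dimensions PE_ROWS x PE_COLS) is reused for every layer, so a host-side runtime (the assumed function schedule_layer) must tile each layer's weights onto it. That per-layer scheduling loop is the kind of software payload that can burden a low-power microcontroller, whereas an HSD design would instantiate dedicated hardware per layer and need no such loop.

```c
/*
 * Hypothetical sketch only: illustrates why a PEM-style accelerator
 * needs a runtime scheduler. The PE array is simulated in software;
 * all names and dimensions are assumptions, not from the paper.
 */
#include <stddef.h>
#include <stdio.h>

#define PE_ROWS 4   /* assumed fixed PE-matrix height */
#define PE_COLS 4   /* assumed fixed PE-matrix width  */

/* One "hardware" invocation: the PE array accumulates a rows x cols
 * tile of a matrix-vector product into out_tile. */
static void pe_array_run(const float *w_tile, const float *in_tile,
                         float *out_tile, size_t rows, size_t cols)
{
    for (size_t r = 0; r < rows; ++r)
        for (size_t c = 0; c < cols; ++c)
            out_tile[r] += w_tile[r * cols + c] * in_tile[c];
}

/* Runtime scheduler: tiles an out_dim x in_dim fully connected layer
 * onto the fixed PE array, one tile per "hardware" call. This loop is
 * the software control a PEM architecture pushes onto the host. */
static void schedule_layer(const float *weights, const float *input,
                           float *output, size_t out_dim, size_t in_dim)
{
    for (size_t r0 = 0; r0 < out_dim; r0 += PE_ROWS) {
        size_t rows = (out_dim - r0 < PE_ROWS) ? out_dim - r0 : PE_ROWS;
        for (size_t c0 = 0; c0 < in_dim; c0 += PE_COLS) {
            size_t cols = (in_dim - c0 < PE_COLS) ? in_dim - c0 : PE_COLS;
            /* gather the strided weight tile before dispatching it */
            float w_tile[PE_ROWS * PE_COLS];
            for (size_t r = 0; r < rows; ++r)
                for (size_t c = 0; c < cols; ++c)
                    w_tile[r * cols + c] =
                        weights[(r0 + r) * in_dim + (c0 + c)];
            pe_array_run(w_tile, &input[c0], &output[r0], rows, cols);
        }
    }
}

int main(void)
{
    enum { IN = 6, OUT = 5 };
    float weights[OUT * IN], input[IN], output[OUT] = {0};
    for (size_t i = 0; i < OUT * IN; ++i) weights[i] = 0.01f * (float)i;
    for (size_t i = 0; i < IN; ++i) input[i] = 1.0f;

    schedule_layer(weights, input, output, OUT, IN);
    for (size_t i = 0; i < OUT; ++i)
        printf("out[%zu] = %.3f\n", i, output[i]);
    return 0;
}
```

In an HSD design, by contrast, each layer would be compiled into its own streaming pipeline for the given trained model, so no equivalent of schedule_layer is needed at run time, at the cost of more FPGA resources.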