RADAR: An Efficient FPGA-based ResNet Accelerator with Data-aware Reordering of Processing Sequences

Bibliographic Details
Published in: Journal of Semiconductor Technology and Science, Vol. 25, No. 4, pp. 451-458
Main Authors: Park, Juntae; Choi, Dahun; Kim, Hyun
Format: Journal Article
Language: English
Published: 대한전자공학회 (The Institute of Electronics and Information Engineers), 31.08.2025

More Information
Summary: The deployment of compact convolutional neural network (CNN) models with skip connections on edge devices through dedicated hardware accelerators is increasingly prevalent. However, optimizing the use of limited on-chip memory (OCM) across multiple CNN layers, especially those with skip connections, remains a challenge. In this paper, we propose a novel CNN accelerator technique that reorders the computation sequence for each layer to maximize data reuse within the OCM, thereby minimizing DRAM access and improving the utilization of both the OCM and the convolution processor. Additionally, we introduce a shared buffer design that efficiently manages OCM usage across different layers, particularly those involving skip connections. Finally, we present a ResNet-18 accelerator IP, RADAR, implemented with the proposed technique on a Xilinx ZCU102 FPGA. RADAR achieves 64.9 GOPS/W and 446.9 GOPS while maintaining high accuracy, demonstrating significant improvements over prior works in terms of the trade-off between throughput, hardware resource efficiency, and model accuracy.
KCI Citation Count: 0
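The sketch below is not the paper's design; it is a minimal, purely illustrative Python model of the two ideas named in the summary: choosing a per-layer processing order that keeps the most heavily reused tensor resident in OCM, and holding feature maps that feed a skip connection in a shared on-chip buffer so they need not be re-fetched from DRAM. The OCM budget, layer sizes, cost model, and all identifiers (Layer, dram_traffic, schedule, SKIP_BUFFER) are assumptions for illustration only.

```python
# Illustrative sketch (not the RADAR IP): per layer, pick the processing order that a
# toy cost model predicts will cause the least DRAM traffic, and keep skip-connection
# feature maps resident in a shared on-chip buffer. All sizes and the cost model are
# assumed values for illustration.
from dataclasses import dataclass
from itertools import permutations
from typing import Optional

OCM_BYTES = 512 * 1024              # assumed on-chip memory budget
SKIP_BUFFER: dict[str, int] = {}    # shared buffer: layer name -> bytes kept on chip

@dataclass
class Layer:
    name: str
    ifmap: int                      # input feature-map bytes
    weights: int                    # weight bytes
    ofmap: int                      # output feature-map bytes
    skip_src: Optional[str] = None  # layer whose output arrives via a skip connection

def dram_traffic(layer: Layer, order: tuple) -> int:
    """Toy model: the tensor placed outermost stays resident in OCM and is loaded
    once; the remaining tensors are assumed to be re-streamed from DRAM twice."""
    sizes = {"ifmap": layer.ifmap, "weights": layer.weights, "ofmap": layer.ofmap}
    resident = order[0]
    if sizes[resident] > OCM_BYTES:          # cannot hold it on chip: no reuse
        return 2 * sum(sizes.values())
    traffic = sizes[resident] + 2 * sum(sizes[t] for t in order[1:])
    if layer.skip_src and layer.skip_src not in SKIP_BUFFER:
        traffic += layer.ifmap               # skip path must be refetched from DRAM
    return traffic

def schedule(layers: list) -> list:
    """Return (layer name, chosen order, estimated DRAM bytes) for each layer."""
    plan = []
    for layer in layers:
        best = min(permutations(("ifmap", "weights", "ofmap")),
                   key=lambda order: dram_traffic(layer, order))
        # Keep this output on chip if a later layer consumes it over a skip connection.
        if any(nxt.skip_src == layer.name for nxt in layers) and layer.ofmap <= OCM_BYTES:
            SKIP_BUFFER[layer.name] = layer.ofmap
        plan.append((layer.name, best, dram_traffic(layer, best)))
    return plan

if __name__ == "__main__":
    # Hypothetical ResNet-style block: conv2b adds conv2a's output back in via the skip path.
    net = [
        Layer("conv1",  ifmap=150_528, weights=37_632,  ofmap=401_408),
        Layer("conv2a", ifmap=401_408, weights=147_456, ofmap=401_408),
        Layer("conv2b", ifmap=401_408, weights=147_456, ofmap=401_408, skip_src="conv2a"),
    ]
    for name, order, cost in schedule(net):
        print(f"{name}: resident tensor = {order[0]:7s} est. DRAM bytes = {cost}")
```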
ISSN: 1598-1657, 2233-4866
DOI: 10.5573/JSTS.2025.25.4.451