FPGA Realization of Stacked Auto-encoder with Three Fully Connected Layers


Bibliographic Details
Published in: 2021 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), pp. 997-1001
Main Authors: Li, Zerun; Zhu, Man; Zhu, Yufei; Yang, Sen; Shi, Hongfa; Jiang, Jingfei; Wang, Qinglin; Xing, Zuocheng
Format: Conference Proceeding
Language: English
Published: IEEE, 27.08.2021

Summary: To accelerate the hardware design and reduce resource requirements, this paper proposes a realization of a fully connected stacked auto-encoder (SAE) with fixed-point number representation on a field-programmable gate array (FPGA). The SAE processes an input feature space of spectral-based features and high-order cumulants to classify modulation types intelligently. A series of synthesizable Verilog modules was created and simulated with Xilinx Vivado. Matrix multiplication is implemented by a cyclic multiplication operation, and the activation function is realized by a piece-wise function approximation, both in Verilog. The same SAE model is also run on a GPU platform, and the paper compares the fixed-point FPGA implementation against the floating-point GPU implementation. Experimental results show that the SAE runs faster on the FPGA than on the GPU, while the FPGA's precision is lower than the GPU's, though within an acceptable range.
DOI:10.1109/AEECA52519.2021.9574428
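The abstract mentions two hardware techniques: fixed-point number representation and a piece-wise approximation of the activation function. The paper's actual Verilog coefficients and segment boundaries are not given here, so the following is a minimal Python sketch of the general idea only; the Q8 format, the |x| >= 4 saturation points, and the 0.125 segment slope are all illustrative assumptions, not the authors' design.

```python
import math

# Assumed Q-format for illustration: 8 fractional bits (not from the paper).
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS  # 256

def to_fixed(x: float) -> int:
    """Quantize a real number to a signed fixed-point integer (Q8)."""
    return int(round(x * SCALE))

def fixed_sigmoid(x_fx: int) -> int:
    """Piece-wise linear sigmoid on a fixed-point input, fixed-point output.

    Hypothetical segments: saturate to 0 or 1 outside |x| >= 4, and use a
    single linear segment of slope 0.125 through (0, 0.5) in between, so the
    curve meets the saturation values exactly at x = -4 and x = +4.
    """
    x = x_fx / SCALE
    if x <= -4.0:
        y = 0.0
    elif x >= 4.0:
        y = 1.0
    else:
        y = 0.5 + 0.125 * x
    return to_fixed(y)

# Compare the approximation against the exact sigmoid at a few points.
for v in (-5.0, -1.0, 0.0, 1.0, 5.0):
    approx = fixed_sigmoid(to_fixed(v)) / SCALE
    exact = 1.0 / (1.0 + math.exp(-v))
    print(f"x={v:+.1f}  approx={approx:.4f}  exact={exact:.4f}")
```

In hardware, a structure like this avoids exponentiation entirely: each segment needs only one multiply and one add on integer datapaths, which is what makes the piece-wise method attractive for an FPGA activation unit.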