Toward Large Scale All-Optical Spiking Neural Networks

Bibliographic Details
Published in: 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC), pp. 1-6
Main Authors: Eslaminia, Milad; Le Beux, Sebastien
Format: Conference Proceeding
Language: English
Published: IEEE, 03.10.2022

Summary: Silicon photonics is a promising technology for developing neuromorphic hardware accelerators. Most optical neural networks rely on wavelength division multiplexing (WDM), which calls for power-hungry calibration to compensate for fabrication process non-uniformity and thermal variations of microring resonators (MRRs). This imposes practical limits on neuromorphic photonic hardware, since only a small number of synaptic connections per neuron can be implemented. As a result, mapping neural networks (NNs) onto a hardware platform requires pruning of synaptic connections, which drastically affects accuracy. In this work, we propose a method to efficiently map pre-trained NNs onto an all-optical spiking neural network (SNN), with the aim of optimizing hardware utilization while minimizing accuracy loss. The method relies on weight partitioning and unrolling to reduce synaptic connections. The resulting neural networks are mapped onto an architecture we propose, allowing accuracy and power consumption to be estimated. Results show the capability of weight partitioning to implement a realistic NN while attaining a 58% reduction in energy consumption compared with unrolling.
ISSN: 2324-8440
DOI: 10.1109/VLSI-SoC54400.2022.9939647
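The abstract describes weight partitioning as a way to keep each neuron's synaptic connections within the small fan-in that the photonic hardware can implement. The paper's actual algorithm is not reproduced in this record, so the Python sketch below only illustrates the general idea under assumed names (partition_weights, fan_in_limit, partitioned_neuron_output are hypothetical): a neuron's weight vector is split into chunks no larger than a given hardware fan-in, each chunk is treated as a sub-neuron, and the partial sums are recombined.

```python
# Illustrative sketch only: the paper's weight-partitioning method is not given here.
# All function and parameter names (partition_weights, fan_in_limit, ...) are assumptions.
import numpy as np

def partition_weights(values, fan_in_limit):
    """Split one neuron's weight (or input) vector into chunks that each fit
    the assumed hardware limit on synaptic connections per neuron (fan-in)."""
    return [values[i:i + fan_in_limit]
            for i in range(0, len(values), fan_in_limit)]

def partitioned_neuron_output(weights, inputs, fan_in_limit):
    """Compute a neuron's pre-activation by summing the partial dot products
    of the partitioned sub-neurons (a hedged model of how partitions recombine)."""
    w_chunks = partition_weights(np.asarray(weights, dtype=float), fan_in_limit)
    x_chunks = partition_weights(np.asarray(inputs, dtype=float), fan_in_limit)
    return sum(float(w @ x) for w, x in zip(w_chunks, x_chunks))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=16)   # 16 synaptic weights for one neuron
    x = rng.normal(size=16)   # 16 input activations
    # A hypothetical hardware limit of 4 connections (e.g. 4 wavelengths/MRRs) per neuron
    y = partitioned_neuron_output(w, x, fan_in_limit=4)
    assert np.isclose(y, float(w @ x))  # partitioning preserves the full dot product
    print(y)
```

The assertion only checks that splitting a dot product into partial sums is lossless; how the actual architecture combines sub-neuron outputs, and the energy trade-off against unrolling reported in the abstract, are specific to the paper.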