Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks

Bibliographic Details
Published in: Frontiers in Neuroscience, Vol. 14, p. 439
Main Authors: Kugele, Alexander; Pfeil, Thomas; Pfeiffer, Michael; Chicca, Elisabetta
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 05.05.2020
Summary: Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
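To make the rollout idea in the abstract concrete, the following Python (PyTorch) snippet is a minimal sketch, not the authors' implementation: every connection, whether feed-forward or shortcut, contributes exactly one time step of delay, so the rolled-out ANN mimics the non-zero propagation delays of the target SNN, and readout paths of different depths are merged to form the output. The layer sizes, the two readout heads, the merge by summation, and all names are illustrative assumptions only.

import torch
import torch.nn as nn


class StreamingRolloutNet(nn.Module):
    def __init__(self, in_dim=1024, hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)   # input   -> layer 1
        self.fc2 = nn.Linear(hidden, hidden)   # layer 1 -> layer 2
        # A shortcut readout from layer 1 and a deep readout from layer 2
        # create paths of different total delay between input and output.
        self.head_short = nn.Linear(hidden, n_classes)
        self.head_deep = nn.Linear(hidden, n_classes)
        self.hidden = hidden

    def forward(self, frames):
        # frames: (T, batch, in_dim), e.g. a sequence of event-camera frames.
        T, B, _ = frames.shape
        h1 = frames.new_zeros(B, self.hidden)
        h2 = frames.new_zeros(B, self.hidden)
        outputs = []
        for t in range(T):
            # All layers update in parallel from previous-step activations,
            # so every connection adds exactly one step of delay.
            out = self.head_short(h1) + self.head_deep(h2)  # merge short and deep paths
            new_h1 = torch.relu(self.fc1(frames[t]))
            new_h2 = torch.relu(self.fc2(h1))
            h1, h2 = new_h1, new_h2
            outputs.append(out)
        # One prediction per time step: early outputs rely on the short,
        # low-latency path and are refined as activity arrives over the
        # deeper path, matching the behavior described in the abstract.
        return torch.stack(outputs)

Under these assumptions, a network trained in this rolled-out form could have its weights transferred to an SNN in which every synapse carries the same one-step propagation delay, which is the delay-matching property the abstract emphasizes.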
This article was submitted to Neuromorphic Engineering, a section of the journal Frontiers in Neuroscience
Reviewed by: Jim Harkin, Ulster University, United Kingdom; Guoqi Li, Tsinghua University, China
Edited by: Kaushik Roy, Purdue University, United States
ISSN: 1662-4548
EISSN: 1662-453X
DOI: 10.3389/fnins.2020.00439