Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
Main Authors | , ,
---|---
Format | Journal Article
Language | English
Published | 23.05.2020
Summary: The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance, efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks lack competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how a novel type of adaptive spiking recurrent neural network (SRNN) achieves state-of-the-art performance among spiking neural networks and approaches or exceeds the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this, we calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks. To achieve this, we model standard and adaptive multiple-timescale spiking neurons as self-recurrent neural units, and leverage surrogate gradients and auto-differentiation in the PyTorch deep learning framework to efficiently implement backpropagation-through-time, including learning the key spiking neuron parameters so that the neurons adapt to the task at hand.
DOI: 10.48550/arxiv.2005.11633
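The summary above mentions modeling adaptive multiple-timescale spiking neurons as self-recurrent units and training them with surrogate gradients and backpropagation-through-time (BPTT) via PyTorch auto-differentiation, but the record gives no further detail. The sketch below illustrates the general technique only, not the paper's actual model: an adaptive leaky integrate-and-fire layer whose per-neuron time constants (`tau_m`, `tau_adp`) are trained alongside the weights, with a fast-sigmoid surrogate derivative standing in for the non-differentiable spike. All names, initial ranges, and constants here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; smooth 'fast sigmoid'
    derivative in the backward pass (a common surrogate-gradient choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        slope = 10.0  # surrogate sharpness (assumed value)
        return grad_output / (slope * v.abs() + 1.0) ** 2

spike_fn = SurrogateSpike.apply

class AdaptiveSRNNLayer(nn.Module):
    """Self-recurrent layer of adaptive LIF neurons with per-neuron
    learnable membrane and adaptation time constants."""
    def __init__(self, n_in, n_rec):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_rec, bias=False)
        self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
        # Learnable timescales in ms (initial ranges are assumptions).
        self.tau_m = nn.Parameter(torch.empty(n_rec).uniform_(5.0, 25.0))
        self.tau_adp = nn.Parameter(torch.empty(n_rec).uniform_(30.0, 150.0))
        self.b0, self.beta = 0.1, 1.8  # baseline threshold, adaptation strength

    def forward(self, x):  # x: (time, batch, n_in)
        T, batch, _ = x.shape
        n = self.tau_m.numel()
        v = x.new_zeros(batch, n)   # membrane potential
        a = x.new_zeros(batch, n)   # threshold adaptation variable
        s = x.new_zeros(batch, n)   # spikes from the previous step
        out = []
        for t in range(T):
            alpha = torch.exp(-1.0 / self.tau_m)   # membrane decay, dt = 1 ms
            rho = torch.exp(-1.0 / self.tau_adp)   # adaptation decay
            a = rho * a + (1.0 - rho) * s          # spiking raises the threshold
            theta = self.b0 + self.beta * a        # dynamic firing threshold
            v = alpha * v + self.w_in(x[t]) + self.w_rec(s) - theta * s  # soft reset
            s = spike_fn(v - theta)
            out.append(s)
        return torch.stack(out)  # (time, batch, n_rec)

# Unrolling the loop builds the computation graph, so loss.backward()
# performs backpropagation-through-time over the weights AND the time constants.
layer = AdaptiveSRNNLayer(n_in=40, n_rec=128)
x = torch.randn(100, 8, 40)   # 100 time steps, batch of 8
spikes = layer(x)
loss = spikes.mean()          # placeholder loss for illustration
loss.backward()               # surrogate gradients flow through spike_fn
```

Registering the time constants as `nn.Parameter` objects is what lets the optimizer fit each neuron's timescale to the data, corresponding to the summary's "learning the key spiking neuron parameters"; the sparse binary activity in `spikes` is what an energy comparison against a dense RNN would build on.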