Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences

Bibliographic Details
Published in Neural networks Vol. 132; pp. 108–120
Main Authors He, Weihua; Wu, YuJie; Deng, Lei; Li, Guoqi; Wang, Haoyu; Tian, Yang; Ding, Wei; Wang, Wenhui; Xie, Yuan
Format Journal Article
Language English
Published United States: Elsevier Ltd, 01.12.2020
Subjects
Online Access Get full text
ISSN 0893-6080
EISSN 1879-2782
DOI 10.1016/j.neunet.2020.08.001

Abstract Neuromorphic data, which record frameless spike events, have attracted considerable attention for their spatiotemporal information content and event-driven processing style. Spiking neural networks (SNNs) are a family of event-driven models with spatiotemporal dynamics for neuromorphic computing and are widely benchmarked on neuromorphic data. Interestingly, researchers in the machine learning community can argue that recurrent (artificial) neural networks (RNNs) can also extract spatiotemporal features, although they are not event-driven. The question of what happens when these two kinds of models are benchmarked together on neuromorphic data therefore arises but remains open. In this work, we conduct a systematic comparison of SNNs and RNNs on neuromorphic data, taking vision datasets as a case study. First, we identify the similarities and differences between SNNs and RNNs (including vanilla RNNs and LSTM) from the modeling and learning perspectives. To improve comparability and fairness, we unify the supervised learning algorithm based on backpropagation through time (BPTT), the loss function exploiting the outputs at all timesteps, the network structure with stacked fully-connected or convolutional layers, and the hyper-parameters during training. In particular, we modify the mainstream loss function used in RNNs, inspired by the rate coding scheme, so that it approaches that of SNNs. Furthermore, we tune the temporal resolution of the datasets to test model robustness and generalization. Finally, a series of contrast experiments is conducted on two types of neuromorphic datasets: DVS-converted (N-MNIST) and DVS-captured (DVS Gesture). Extensive insights regarding recognition accuracy, feature extraction, temporal resolution and contrast, learning generalization, computational complexity, and parameter volume are provided, which are beneficial for model selection on different workloads and even for the invention of novel neural models in the future.
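For orientation, the "modeling perspective" comparison can be summarized by the standard discrete-time formulations of the two model families. The display below is a sketch in common notation (leak factor $\tau$, firing threshold $u_{\mathrm{th}}$, Heaviside step $\Theta$); it follows the usual iterative LIF description rather than the paper's exact equations.

```latex
% Discrete-time leaky integrate-and-fire (LIF) neuron, the typical SNN building block:
% the membrane potential leaks, integrates weighted input, and emits a binary spike
% (with reset) when it crosses the threshold. The vanilla RNN cell is a dense,
% non-spiking state update with explicit recurrent weights.
\begin{align}
  \text{SNN (LIF):} \qquad u^{t+1} &= \tau\, u^{t}\bigl(1 - o^{t}\bigr) + \mathbf{W} x^{t+1},
  \qquad o^{t+1} = \Theta\bigl(u^{t+1} - u_{\mathrm{th}}\bigr) \in \{0, 1\}, \\
  \text{Vanilla RNN:} \qquad h^{t+1} &= \tanh\bigl(\mathbf{W} x^{t+1} + \mathbf{U} h^{t} + b\bigr).
\end{align}
```

Both updates carry a recurrent temporal state, which is the similarity the abstract points to; the differences lie in the SNN's binary, event-driven output, its reset after firing, and the non-differentiable threshold, which BPTT training typically handles with a surrogate gradient.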
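The unified loss that exploits the outputs at all timesteps, with the RNN loss modified in a rate-coding-inspired way, can be pictured with a short PyTorch-style sketch. This is a minimal illustration under assumed names and tensor layout (`outputs` stacked over time as [T, batch, classes], a helper `rate_coded_loss`); it is not the authors' released code.

```python
# Illustrative sketch (assumed API, not the paper's code): a rate-coding-inspired loss
# that uses the readout at every timestep, applicable to both SNN and RNN outputs.
import torch
import torch.nn.functional as F


def rate_coded_loss(outputs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Average the per-timestep readout over time, then apply cross-entropy.

    outputs: float tensor of shape [T, batch, num_classes]
             (spike/membrane readouts for an SNN, hidden-state readouts for an RNN).
    labels:  int64 tensor of shape [batch].
    """
    mean_over_time = outputs.mean(dim=0)  # [batch, num_classes]; a firing-rate estimate for SNNs
    return F.cross_entropy(mean_over_time, labels)


# Toy usage: a random readout over T=10 timesteps for a batch of 4 samples and 11 classes
# (e.g., the 11 DVS Gesture categories).
if __name__ == "__main__":
    outputs = torch.randn(10, 4, 11, requires_grad=True)
    labels = torch.randint(0, 11, (4,))
    loss = rate_coded_loss(outputs, labels)
    loss.backward()  # gradients flow through every timestep, i.e., BPTT over the unrolled sequence
    print(loss.item())
```

Because the loss depends on every timestep rather than only the last one, the backward pass differentiates through the whole unrolled sequence for both model families, which is the common BPTT setting the study fixes for fairness.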
Author Li, Guoqi
He, Weihua
Deng, Lei
Wang, Wenhui
Wang, Haoyu
Xie, Yuan
Tian, Yang
Wu, YuJie
Ding, Wei
Author_xml – sequence: 1
  givenname: Weihua
  orcidid: 0000-0002-2704-9475
  surname: He
  fullname: He, Weihua
  email: hewh16@mails.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 2
  givenname: YuJie
  surname: Wu
  fullname: Wu, YuJie
  email: wu-yj16@mails.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 3
  givenname: Lei
  orcidid: 0000-0002-5172-9411
  surname: Deng
  fullname: Deng, Lei
  email: leideng@ucsb.edu
  organization: Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA
– sequence: 4
  givenname: Guoqi
  surname: Li
  fullname: Li, Guoqi
  email: liguoqi@mail.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 5
  givenname: Haoyu
  surname: Wang
  fullname: Wang, Haoyu
  email: haoyu-wa16@mails.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 6
  givenname: Yang
  surname: Tian
  fullname: Tian, Yang
  email: tianyang16@mails.tsinghua.edu.cn
  organization: Lab of Cognitive Neuroscience, THBI, Tsinghua University, Beijing 100084, China
– sequence: 7
  givenname: Wei
  surname: Ding
  fullname: Ding, Wei
  email: dingw17@mails.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 8
  givenname: Wenhui
  orcidid: 0000-0002-5884-6098
  surname: Wang
  fullname: Wang, Wenhui
  email: wwh@mail.tsinghua.edu.cn
  organization: Department of Precision Instrument, Tsinghua University, Beijing 100084, China
– sequence: 9
  givenname: Yuan
  surname: Xie
  fullname: Xie, Yuan
  email: yuanxie@ucsb.edu
  organization: Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/32866745 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1162_neco_a_01571
crossref_primary_10_3389_fnins_2023_1047008
crossref_primary_10_1016_j_dche_2022_100012
crossref_primary_10_1109_COMST_2023_3312221
crossref_primary_10_1016_j_neucom_2024_128819
crossref_primary_10_1109_TII_2024_3393007
crossref_primary_10_3389_fnins_2023_1225871
crossref_primary_10_1109_TVLSI_2023_3282239
crossref_primary_10_1109_TNNLS_2024_3355393
crossref_primary_10_1109_TNNLS_2023_3310263
crossref_primary_10_1109_TASLP_2022_3221011
crossref_primary_10_3389_fnins_2024_1412559
crossref_primary_10_1088_1674_1056_acb9f6
crossref_primary_10_1109_TCSII_2023_3282589
crossref_primary_10_1109_TETCI_2024_3359539
crossref_primary_10_3390_brainsci12070863
crossref_primary_10_3389_fncom_2021_665662
crossref_primary_10_1016_j_neucom_2023_02_026
crossref_primary_10_1088_2634_4386_ac889b
crossref_primary_10_1109_TCSII_2021_3108798
crossref_primary_10_1109_TCAD_2021_3138347
crossref_primary_10_1016_j_neucom_2021_05_020
crossref_primary_10_1038_s41598_021_96751_4
crossref_primary_10_1360_TB_2023_0775
crossref_primary_10_1088_2634_4386_ad6cef
crossref_primary_10_1007_s11063_023_11247_8
crossref_primary_10_1093_nsr_nwae037
crossref_primary_10_1002_adma_202403937
crossref_primary_10_1038_s41467_024_55094_0
crossref_primary_10_1109_TNNLS_2021_3105961
crossref_primary_10_1088_1741_2552_aceca3
crossref_primary_10_1007_s11390_021_1326_8
crossref_primary_10_1016_j_neucom_2023_126832
crossref_primary_10_1016_j_jmsy_2022_09_003
crossref_primary_10_1016_j_neucom_2022_06_036
crossref_primary_10_3390_electronics14040761
crossref_primary_10_3390_rs16142680
crossref_primary_10_1007_s11633_022_1340_5
crossref_primary_10_1109_TCDS_2024_3396431
crossref_primary_10_1007_s10489_024_05629_1
crossref_primary_10_1038_s41598_023_49579_z
crossref_primary_10_1016_j_engappai_2024_109415
crossref_primary_10_3390_electronics11244179
crossref_primary_10_3389_fnins_2021_608567
crossref_primary_10_3390_atmos14020315
crossref_primary_10_1002_adfm_202423548
crossref_primary_10_1109_JPROC_2024_3429360
crossref_primary_10_3390_s22166090
crossref_primary_10_1007_s11432_021_3510_6
crossref_primary_10_1109_TNNLS_2023_3278265
crossref_primary_10_1016_j_neunet_2023_07_008
crossref_primary_10_3390_biomimetics9070444
crossref_primary_10_1109_ACCESS_2024_3523411
crossref_primary_10_3390_electronics12173546
crossref_primary_10_1162_neco_a_01480
crossref_primary_10_1109_TNSRE_2023_3260301
crossref_primary_10_1016_j_neunet_2024_106677
crossref_primary_10_4103_REGENMED_REGENMED_D_24_00012
crossref_primary_10_1109_TKDE_2022_3178176
crossref_primary_10_1007_s11760_023_02569_0
crossref_primary_10_1109_TCDS_2023_3308347
crossref_primary_10_1007_s11571_024_10199_6
crossref_primary_10_1007_s11629_021_6824_1
crossref_primary_10_1038_s41467_024_51641_x
crossref_primary_10_1109_TCASAI_2024_3496837
crossref_primary_10_3389_fnins_2022_951164
crossref_primary_10_1016_j_neucom_2025_129804
crossref_primary_10_1038_s41467_024_47811_6
crossref_primary_10_3390_jmse11010051
crossref_primary_10_1007_s11571_024_10133_w
crossref_primary_10_1109_ACCESS_2022_3209671
crossref_primary_10_1007_s00521_024_10191_5
crossref_primary_10_1016_j_neunet_2022_07_010
crossref_primary_10_1016_j_neucom_2024_128173
ContentType Journal Article
Copyright 2020 Elsevier Ltd
Copyright © 2020 Elsevier Ltd. All rights reserved.
Copyright_xml – notice: 2020 Elsevier Ltd
– notice: Copyright © 2020 Elsevier Ltd. All rights reserved.
DOI 10.1016/j.neunet.2020.08.001
DatabaseName CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
PubMed
MEDLINE - Academic
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
MEDLINE - Academic
DatabaseTitleList MEDLINE
MEDLINE - Academic

Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: EIF
  name: MEDLINE
  url: https://www.webofscience.com/wos/medline/basic-search
  sourceTypes: Index Database
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1879-2782
EndPage 120
ExternalDocumentID 32866745
10_1016_j_neunet_2020_08_001
S0893608020302902
Genre Journal Article
Comparative Study
ISSN 0893-6080
1879-2782
IsPeerReviewed true
IsScholarly true
Keywords Long short-term memory
Recurrent neural networks
Spatiotemporal dynamics
Neuromorphic dataset
Spiking neural networks
Language English
License Copyright © 2020 Elsevier Ltd. All rights reserved.
LinkModel DirectLink
ORCID 0000-0002-5884-6098
0000-0002-2704-9475
0000-0002-5172-9411
PMID 32866745
PQID 2439621736
PQPubID 23479
PageCount 13
ParticipantIDs proquest_miscellaneous_2439621736
pubmed_primary_32866745
crossref_primary_10_1016_j_neunet_2020_08_001
crossref_citationtrail_10_1016_j_neunet_2020_08_001
elsevier_sciencedirect_doi_10_1016_j_neunet_2020_08_001
PublicationCentury 2000
PublicationDate December 2020
2020-12-00
2020-Dec
20201201
PublicationDateYYYYMMDD 2020-12-01
PublicationDate_xml – month: 12
  year: 2020
  text: December 2020
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle Neural networks
PublicationTitleAlternate Neural Netw
PublicationYear 2020
Publisher Elsevier Ltd
Publisher_xml – name: Elsevier Ltd
SourceID proquest
pubmed
crossref
elsevier
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 108
SubjectTerms Action Potentials - physiology
Algorithms
Databases, Factual
Humans
Long short-term memory
Machine Learning
Neural Networks, Computer
Neuromorphic dataset
Neurons - physiology
Recognition, Psychology - physiology
Recurrent neural networks
Spatiotemporal dynamics
Spiking neural networks
Vision, Ocular - physiology
Title Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences
URI https://dx.doi.org/10.1016/j.neunet.2020.08.001
https://www.ncbi.nlm.nih.gov/pubmed/32866745
https://www.proquest.com/docview/2439621736
Volume 132