STDP-based spiking deep convolutional neural networks for object recognition

Bibliographic Details
Published in: Neural Networks, Vol. 99, pp. 56-67
Main Authors: Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J.; Masquelier, Timothée
Format: Journal Article
Language: English
Published: United States, Elsevier Ltd, 01.03.2018
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2017.12.005


Abstract: Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated, using rate-based neural networks trained with back-propagation, that having many layers increases recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme in which the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required, and no labels were needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be key to understanding the way the primate visual system learns, its remarkable processing speed, and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions.
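The two mechanisms the abstract combines, intensity-to-latency coding (the most strongly activated neurons fire first) and a simplified, order-based STDP rule, can be sketched in a few lines. The snippet below is an illustrative approximation rather than the paper's exact model: the paper encodes DoG-filtered contrasts rather than raw pixel intensities, and the function names and parameter values here (`t_max`, `a_plus`, `a_minus`) are hypothetical placeholders.

```python
import numpy as np

def intensity_to_latency(image, t_max=10.0):
    """Latency coding sketch: stronger inputs fire earlier.

    The strongest input gets latency 0; weaker inputs fire later,
    and zero-intensity inputs never fire (latency = inf).
    """
    img = image.astype(float)
    lat = np.full(img.shape, np.inf)
    active = img > 0
    # Map max intensity -> latency 0, weaker intensities -> later times.
    lat[active] = t_max * (1.0 - img[active] / img.max())
    return lat

def simplified_stdp(w, pre_times, post_time, a_plus=0.004, a_minus=0.003):
    """Order-based STDP sketch: the sign of the weight change depends
    only on whether the presynaptic spike preceded the postsynaptic
    one, not on the exact time difference. The multiplicative term
    w * (1 - w) softly bounds weights to [0, 1].
    """
    w = w.copy()
    ltp = pre_times <= post_time  # pre fired before (or with) post
    w[ltp] += a_plus * w[ltp] * (1 - w[ltp])     # potentiation
    w[~ltp] -= a_minus * w[~ltp] * (1 - w[~ltp])  # depression
    return w
```

With an order-based rule like this, any synapse whose presynaptic spike precedes the postsynaptic spike is potentiated regardless of the exact delay, which is what makes the learning compatible with a rank-order latency code: inputs that never fire (infinite latency) are always depressed.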
Authors:
1. Saeed Reza Kheradpisheh (kheradpisheh@ut.ac.ir), Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
2. Mohammad Ganjtabesh (mgtabesh@ut.ac.ir), Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
3. Simon J. Thorpe (simon.thorpe@cnrs.fr), CERCO UMR 5549, CNRS, Université Toulouse 3, France
4. Timothée Masquelier (timothee.masquelier@cnrs.fr, ORCID 0000-0001-8629-9506), CERCO UMR 5549, CNRS, Université Toulouse 3, France
PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29328958
HAL: https://hal.science/hal-02341957
Copyright: © 2017 Elsevier Ltd. All rights reserved. Distributed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0).
Discipline: Computer Science
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
STDP
Object recognition
Spiking neural network
Temporal coding
Language English
License Copyright © 2017 Elsevier Ltd. All rights reserved.
Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c442t-db533f5eb7dae64dc1e380b9fe52209805d71b94e4f49d310b0586a9d5dc63083
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 23
ORCID 0000-0001-8629-9506
0000-0001-6168-4379
0000-0003-4997-3367
OpenAccessLink https://hal.science/hal-02341957
PMID 29328958
PQID 1989579835
PQPubID 23479
PageCount 12
ParticipantIDs hal_primary_oai_HAL_hal_02341957v1
proquest_miscellaneous_1989579835
pubmed_primary_29328958
crossref_citationtrail_10_1016_j_neunet_2017_12_005
crossref_primary_10_1016_j_neunet_2017_12_005
elsevier_sciencedirect_doi_10_1016_j_neunet_2017_12_005
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2018-03-01
PublicationDateYYYYMMDD 2018-03-01
PublicationDate_xml – month: 03
  year: 2018
  text: 2018-03-01
  day: 01
PublicationDecade 2010
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle Neural networks
PublicationTitleAlternate Neural Netw
PublicationYear 2018
Publisher Elsevier Ltd
Elsevier
Publisher_xml – name: Elsevier Ltd
– name: Elsevier
References DiCarlo, Zoccolan, Rust (b10) 2012; 73
Doya (b14) 2000; 10
Hung, Kreiman, Poggio, DiCarlo (b19) 2005; 310
Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. U., & Neftci, E. Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In
Panda, P., & Roy, K. (2016). Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In
Lake Tahoe, Nevada, USA (pp. 773–781).
Melbourne, VIC, Australia (pp. 2640–2643).
Kheradpisheh, Ghodrati, Ganjtabesh, Masquelier (b24) 2016; 6
Lee, Grosse, Ranganath, Ng (b31) 2009
LeCun, Bengio (b28) 1998
Bengio, Y., Lee, D.-H., Bornschein, J., & Lin, Z. (2015). Towards biologically plausible deep learning
Pinto, N., Barhomi, Y., Cox, D. D., & DiCarlo, J. J. (2011). Comparing state-of-the-art visual features on invariant object recognition tasks. In
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition
Zhao, Ding, Chen, Linares-Barranco, Tang (b59) 2015; 26
Diehl, Cook (b11) 2015; 9
Khaligh-Razavi, Kriegeskorte (b22) 2014; 10
Vancouver, Canada (pp. 1–8).
Lichtsteiner, Posch, Delbruck (b32) 2007; 43
Cadieu, Hong, Yamins, Pinto, Ardila, Solomon (b5) 2014; 10
Brader, Senn, Fusi (b3) 2007; 19
LeCun, Bottou, Bengio, Haffner (b30) 1998; 86
Sivilotti (b52) 1991
Zurich, Switzerland (pp. 818–833).
Cao, Chen, Khosla (b6) 2015; 113
Habenschuss, S., Bill, J., & Nessler, B. (2012). Homeostatic plasticity in Bayesian spiking networks as expectation maximization with posterior constraints. In
Wohrer, Kornprobst (b56) 2009; 26
Thorpe, Delorme, Van Rullen (b53) 2001; 14
Shoham, O’Connor, Segev (b50) 2006; 192
Masquelier, Thorpe (b36) 2007; 3
Rousselet, Thorpe, Fabre-Thorpe (b46) 2003; 7
Querlioz, Bichler, Dollfus, Gamrat (b44) 2013; 12
Burbank (b4) 2015; 11
Yousefzadeh, A., Serrano-Gotarredona, T., & Linares-Barranco, B. (2015). Fast pipeline 128×128 pixel spiking convolution core for event-driven vision processing in FPGAs. In
O’Connor, Neil, Liu, Delbruck, Pfeiffer (b39) 2013; 7
Martínez-Cañada, Morillas, Pino, Ros, Pelayo (b35) 2016; 26
Thorpe, Fize, Marlot (b54) 1996; 381
Fukushima (b15) 1980; 36
Kirchner, Thorpe (b26) 2006; 46
Kona, Hawaii, USA (pp. 463–470).
Ghodrati, Farzmahdi, Rajaei, Ebrahimpour, Khaligh-Razavi (b16) 2014; 8
Meliza, Dan (b38) 2006; 49
Cichy, Khosla, Pantazis, Torralba, Oliva (b7) 2016; 6
Kheradpisheh, Ganjtabesh, Masquelier (b23) 2016; 205
Kheradpisheh, Ghodrati, Ganjtabesh, Masquelier (b25) 2016; 10
San Diego, California, USA (pp. 1–8).
Lake Tahoe, Nevada, USA (pp. 1–9).
Hunsberger, E., & Eliasmith, C. (2015). Spiking deep networks with LIF neurons
DiCarlo, Cox (b9) 2007; 11
Serre, Oliva, Poggio (b48) 2007; 104
Diehl, P. U., Neil, D., Binas, J., Cook, M., Liu, S.-C., & Pfeiffer, M. (2015). Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In
Delorme, Perrinet, Thorpe (b8) 2001; 38
Liu, Agam, Madsen, Kreiman (b33) 2009; 62
Beyeler, Dutt, Krichmar (b2) 2013; 48
Rolls, Deco (b45) 2002
Hussain, S., Liu, S.-C., & Basu, A. (2014). Improved margin multi-class classification using dendritic neurons with morphological learning. In
Portelli, Barrett, Hilgen, Masquelier, Maccione, Di Marco (b43) 2016; 3
Van Rullen, Thorpe (b55) 2001; 13
Maass (b34) 2002; 8
Serrano-Gotarredona, Masquelier, Prodromakis, Indiveri, Linares-Barranco (b47) 2013; 7
LeCun, Bengio, Hinton (b29) 2015; 521
Pignatelli, Bonci (b41) 2015; 86
Serre, Wolf, Bileschi, Riesenhuber, Poggio (b49) 2007; 29
McMahon, Leopold (b37) 2012; 22
Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In
Huang, Rozas, Treviño, Contreras, Yang, Song (b18) 2014; 34
Killarney, Ireland (pp. 1–8).
Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In
Cadieu (10.1016/j.neunet.2017.12.005_b5) 2014; 10
Van Rullen (10.1016/j.neunet.2017.12.005_b55) 2001; 13
Liu (10.1016/j.neunet.2017.12.005_b33) 2009; 62
Kheradpisheh (10.1016/j.neunet.2017.12.005_b23) 2016; 205
Martínez-Cañada (10.1016/j.neunet.2017.12.005_b35) 2016; 26
Huang (10.1016/j.neunet.2017.12.005_b18) 2014; 34
Thorpe (10.1016/j.neunet.2017.12.005_b54) 1996; 381
Thorpe (10.1016/j.neunet.2017.12.005_b53) 2001; 14
Zhao (10.1016/j.neunet.2017.12.005_b59) 2015; 26
DiCarlo (10.1016/j.neunet.2017.12.005_b10) 2012; 73
McMahon (10.1016/j.neunet.2017.12.005_b37) 2012; 22
Pignatelli (10.1016/j.neunet.2017.12.005_b41) 2015; 86
Doya (10.1016/j.neunet.2017.12.005_b14) 2000; 10
Lee (10.1016/j.neunet.2017.12.005_b31) 2009
LeCun (10.1016/j.neunet.2017.12.005_b29) 2015; 521
Brader (10.1016/j.neunet.2017.12.005_b3) 2007; 19
Serre (10.1016/j.neunet.2017.12.005_b48) 2007; 104
10.1016/j.neunet.2017.12.005_b27
Lichtsteiner (10.1016/j.neunet.2017.12.005_b32) 2007; 43
Portelli (10.1016/j.neunet.2017.12.005_b43) 2016; 3
10.1016/j.neunet.2017.12.005_b1
LeCun (10.1016/j.neunet.2017.12.005_b28) 1998
Serre (10.1016/j.neunet.2017.12.005_b49) 2007; 29
10.1016/j.neunet.2017.12.005_b21
10.1016/j.neunet.2017.12.005_b20
Cao (10.1016/j.neunet.2017.12.005_b6) 2015; 113
Delorme (10.1016/j.neunet.2017.12.005_b8) 2001; 38
Khaligh-Razavi (10.1016/j.neunet.2017.12.005_b22) 2014; 10
Rousselet (10.1016/j.neunet.2017.12.005_b46) 2003; 7
Beyeler (10.1016/j.neunet.2017.12.005_b2) 2013; 48
Maass (10.1016/j.neunet.2017.12.005_b34) 2002; 8
10.1016/j.neunet.2017.12.005_b58
10.1016/j.neunet.2017.12.005_b13
Masquelier (10.1016/j.neunet.2017.12.005_b36) 2007; 3
10.1016/j.neunet.2017.12.005_b57
10.1016/j.neunet.2017.12.005_b17
Hung (10.1016/j.neunet.2017.12.005_b19) 2005; 310
10.1016/j.neunet.2017.12.005_b51
Shoham (10.1016/j.neunet.2017.12.005_b50) 2006; 192
10.1016/j.neunet.2017.12.005_b12
Rolls (10.1016/j.neunet.2017.12.005_b45) 2002
Kheradpisheh (10.1016/j.neunet.2017.12.005_b24) 2016; 6
Querlioz (10.1016/j.neunet.2017.12.005_b44) 2013; 12
Burbank (10.1016/j.neunet.2017.12.005_b4) 2015; 11
Diehl (10.1016/j.neunet.2017.12.005_b11) 2015; 9
DiCarlo (10.1016/j.neunet.2017.12.005_b9) 2007; 11
Kirchner (10.1016/j.neunet.2017.12.005_b26) 2006; 46
Wohrer (10.1016/j.neunet.2017.12.005_b56) 2009; 26
Ghodrati (10.1016/j.neunet.2017.12.005_b16) 2014; 8
Serrano-Gotarredona (10.1016/j.neunet.2017.12.005_b47) 2013; 7
Kheradpisheh (10.1016/j.neunet.2017.12.005_b25) 2016; 10
Meliza (10.1016/j.neunet.2017.12.005_b38) 2006; 49
Fukushima (10.1016/j.neunet.2017.12.005_b15) 1980; 36
LeCun (10.1016/j.neunet.2017.12.005_b30) 1998; 86
10.1016/j.neunet.2017.12.005_b40
Sivilotti (10.1016/j.neunet.2017.12.005_b52) 1991
O’Connor (10.1016/j.neunet.2017.12.005_b39) 2013; 7
Cichy (10.1016/j.neunet.2017.12.005_b7) 2016; 6
10.1016/j.neunet.2017.12.005_b42
References_xml – volume: 8
  start-page: 1
  year: 2014
  end-page: 17
  ident: b16
  article-title: Feedforward object-vision models only tolerate small image variations compared to human
  publication-title: Frontiers in Computational Neuroscience
– reference: Panda, P., & Roy, K. (2016). Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In
– volume: 73
  start-page: 415
  year: 2012
  end-page: 434
  ident: b10
  article-title: How does the brain solve visual object recognition?
  publication-title: Neuron
– reference: , Melbourne, VIC, Australia (pp. 2640–2643).
– volume: 29
  start-page: 411
  year: 2007
  end-page: 426
  ident: b49
  article-title: Robust object recognition with cortex-like mechanisms
  publication-title: IEEE Transactions on Pattern Analysis Machine Intelligence
– volume: 62
  start-page: 281
  year: 2009
  end-page: 290
  ident: b33
  article-title: Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex
  publication-title: Neuron
– volume: 34
  start-page: 7575
  year: 2014
  end-page: 7579
  ident: b18
  article-title: Associative Hebbian synaptic plasticity in primate visual cortex
  publication-title: The Journal of Neuroscience
– volume: 205
  start-page: 382
  year: 2016
  end-page: 392
  ident: b23
  article-title: Bio-inspired unsupervised learning of visual features leads to robust invariant object recognition
  publication-title: Neurocomputing
– reference: Yousefzadeh, A., Serrano-Gotarredona, T., & Linares-Barranco, B. (2015). Fast pipeline 128×128 pixel spiking convolution core for event-driven vision processing in FPGAs. In
– volume: 192
  start-page: 777
  year: 2006
  end-page: 784
  ident: b50
  article-title: How silent is the brain: is there a dark matter problem in neuroscience?
  publication-title: Journal of Comparative Physiology A
– volume: 86
  start-page: 1145
  year: 2015
  end-page: 1157
  ident: b41
  article-title: Role of dopamine neurons in reward and aversion: a synaptic plasticity perspective
  publication-title: Neuron
– volume: 7
  start-page: 99
  year: 2003
  end-page: 102
  ident: b46
  article-title: Taking the max from neuronal responses
  publication-title: Trends in Cognitive Sciences
– reference: Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition,
– reference: , Killarney, Ireland (pp. 1–8).
– reference: Hussain, S., Liu, S.-C., & Basu, A. (2014). Improved margin multi-class classification using dendritic neurons with morphological learning. In
– volume: 3
  year: 2016
  ident: b43
  article-title: Rank order coding: a retinal information decoding strategy revealed by large-scale multielectrode array retinal recordings
  publication-title: Eneuro
– volume: 7
  start-page: 2
  year: 2013
  ident: b47
  article-title: STDP and STDP variations with memristors for spiking neuromorphic learning systems
  publication-title: Frontiers in Neuroscience
– volume: 310
  start-page: 863
  year: 2005
  end-page: 866
  ident: b19
  article-title: Fast readout of object identity from macaque inferior temporal cortex
  publication-title: Science
– volume: 381
  start-page: 520
  year: 1996
  end-page: 522
  ident: b54
  article-title: Speed of processing in the human visual system
  publication-title: Nature
– volume: 10
  start-page: e1003963
  year: 2014
  ident: b5
  article-title: Deep neural networks rival the representation of primate IT cortex for core visual object recognition
  publication-title: PLoS Computational Biology
– volume: 521
  start-page: 436
  year: 2015
  end-page: 444
  ident: b29
  article-title: Deep learning
  publication-title: Nature
– reference: Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In
– reference: , Zurich, Switzerland (pp. 818–833).
– year: 2002
  ident: b45
  publication-title: Computational neuroscience of vision
– start-page: 255
  year: 1998
  end-page: 258
  ident: b28
  article-title: Convolutional networks for images, speech, and time series
  publication-title: The handbook of brain theory and neural networks
– volume: 38
  start-page: 539
  year: 2001
  end-page: 545
  ident: b8
  article-title: Networks of integrate-and-fire neurons using rank order coding B: Spike timing dependent plasticity and emergence of orientation selectivity
  publication-title: Neurocomputing
– volume: 8
  start-page: 32
  year: 2002
  end-page: 36
  ident: b34
  article-title: Computing with spikes
  publication-title: Special Issue on Foundations of Information Processing of TELEMATIK
– start-page: 1
  year: 2009
  end-page: 8
  ident: b31
  article-title: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
– volume: 26
  start-page: 219
  year: 2009
  end-page: 249
  ident: b56
  article-title: Virtual Retina: a biological retina model and simulator, with contrast gain control
  publication-title: Journal of Computational Neuroscience
– volume: 10
  start-page: 92
  year: 2016
  ident: b25
  article-title: Humans and deep networks largely agree on which kinds of variation make object recognition harder
  publication-title: Frontiers in Computational Neuroscience
– volume: 3
  start-page: e31
  year: 2007
  ident: b36
  article-title: Unsupervised learning of visual features through spike timing dependent plasticity
  publication-title: PLoS Computational Biology
– volume: 26
  start-page: 1963
  year: 2015
  end-page: 1978
  ident: b59
  article-title: Feedforward categorization on AER motion events using cortex-like features in a spiking neural network
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
– volume: 14
  start-page: 715
  year: 2001
  end-page: 725
  ident: b53
  article-title: Spike-based strategies for rapid processing
  publication-title: Neural Networks
– volume: 22
  start-page: 332
  year: 2012
  end-page: 337
  ident: b37
  article-title: Stimulus timing-dependent plasticity in high-level vision
  publication-title: Current Biology
– volume: 113
  start-page: 54
  year: 2015
  end-page: 66
  ident: b6
  article-title: Spiking deep convolutional neural networks for energy-efficient object recognition
  publication-title: International Journal of Computer Vision
– volume: 7
  start-page: 178
  year: 2013
  ident: b39
  article-title: Real-time classification and sensor fusion with a spiking deep belief network
  publication-title: Frontiers in Neuroscience
– reference: , San Diego, California, USA (pp. 1–8).
– volume: 48
  start-page: 109
  year: 2013
  end-page: 124
  ident: b2
  article-title: Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule
  publication-title: Neural Networks
– reference: , Lake Tahoe, Nevada, USA (pp. 773–781).
– volume: 46
  start-page: 1762
  year: 2006
  end-page: 1776
  ident: b26
  article-title: Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited
  publication-title: Vision Research
– reference: Bengio, Y., Lee, D.-H., Bornschein, J., & Lin, Z. (2015). Towards biologically plausible deep learning,
– volume: 12
  start-page: 288
  year: 2013
  end-page: 295
  ident: b44
  article-title: Immunity to device variations in a spiking neural network with memristive nanodevices
  publication-title: IEEE Transactions on Nanotechnology
– volume: 19
  start-page: 2881
  year: 2007
  end-page: 2912
  ident: b3
  article-title: Learning real-world stimuli in a neural network with spike-driven synaptic dynamics
  publication-title: Neural Computation
– volume: 13
  start-page: 1255
  year: 2001
  end-page: 1283
  ident: b55
  article-title: Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex
  publication-title: Neural Computation
– volume: 6
  year: 2016
  ident: b7
  article-title: Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence
  publication-title: Scientific Reports
– reference: Diehl, P. U., Neil, D., Binas, J., Cook, M., Liu, S.-C., & Pfeiffer, M. (2015). Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In
– volume: 104
  start-page: 6424
  year: 2007
  end-page: 6429
  ident: b48
  article-title: A feedforward architecture accounts for rapid categorization
  publication-title: Proceedings of the National Academy of Sciences
– reference: Hunsberger, E., & Eliasmith, C. (2015). Spiking deep networks with LIF neurons,
– volume: 10
  start-page: e1003915
  year: 2014
  ident: b22
  article-title: Deep supervised, but not unsupervised, models may explain IT cortical representation
  publication-title: PLoS Computational Biology
– reference: Diehl, P. U., Zarrella, G., Cassidy, A., Pedroni, B. U., & Neftci, E. Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In
– volume: 86
  start-page: 2278
  year: 1998
  end-page: 2324
  ident: b30
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proceedings of the IEEE
– reference: Pinto, N., Barhomi, Y., Cox, D. D., & DiCarlo, J. J. (2011). Comparing state-of-the-art visual features on invariant object recognition tasks. In
– volume: 43
  start-page: 566
  year: 2007
  end-page: 576
  ident: b32
  article-title: A 128×128 120 dB 15 µs latency temporal contrast vision sensor
  publication-title: IEEE Journal of Solid State Circuits
– reference: , Kona, Hawaii, USA (pp. 463–470).
– volume: 9
  start-page: 99
  year: 2015
  ident: b11
  article-title: Unsupervised learning of digit recognition using spike-timing-dependent plasticity
  publication-title: Frontiers in Computational Neuroscience
– year: 1991
  ident: b52
  publication-title: Wiring considerations in analog VLSI systems with application to field-programmable networks
– reference: , Vancouver, Canada (pp. 1–8).
– volume: 11
  start-page: e1004566
  year: 2015
  ident: b4
  article-title: Mirrored STDP implements autoencoder learning in a network of spiking neurons
  publication-title: PLoS Computational Biology
– volume: 36
  start-page: 193
  year: 1980
  end-page: 202
  ident: b15
  article-title: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
  publication-title: Biological Cybernetics
– reference: Habenschuss, S., Bill, J., & Nessler, B. (2012). Homeostatic plasticity in Bayesian spiking networks as expectation maximization with posterior constraints. In
– reference: , Lake Tahoe, Nevada, USA (pp. 1–9).
– volume: 49
  start-page: 183
  year: 2006
  end-page: 189
  ident: b38
  article-title: Receptive-field modification in rat visual cortex induced by paired visual stimulation and single-cell spiking
  publication-title: Neuron
– volume: 6
  start-page: 32672
  year: 2016
  ident: b24
  article-title: Deep networks resemble human feed-forward vision in invariant object recognition
  publication-title: Scientific Reports
– reference: Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In
– volume: 26
  start-page: 1650030
  year: 2016
  ident: b35
  article-title: A computational framework for realistic Retina modeling
  publication-title: International Journal of Neural Systems
– volume: 11
  start-page: 333
  year: 2007
  end-page: 341
  ident: b9
  article-title: Untangling invariant object recognition
  publication-title: Trends in Cognitive Sciences
– volume: 10
  start-page: 732
  year: 2000
  end-page: 739
  ident: b14
  article-title: Complementary roles of basal ganglia and cerebellum in learning and motor control
  publication-title: Current Opinion in Neurobiology
– volume: 13
  start-page: 1255
  issue: 6
  year: 2001
  ident: 10.1016/j.neunet.2017.12.005_b55
  article-title: Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex
  publication-title: Neural Computation
  doi: 10.1162/08997660152002852
– ident: 10.1016/j.neunet.2017.12.005_b1
– volume: 192
  start-page: 777
  issue: 8
  year: 2006
  ident: 10.1016/j.neunet.2017.12.005_b50
  article-title: How silent is the brain: is there a dark matter problem in neuroscience?
  publication-title: Journal of Comparative Physiology A
  doi: 10.1007/s00359-006-0117-6
– volume: 26
  start-page: 1963
  issue: 9
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b59
  article-title: Feedforward categorization on AER motion events using cortex-like features in a spiking neural network
  publication-title: IEEE Transactions on Neural Networks and Learning Systems
  doi: 10.1109/TNNLS.2014.2362542
– volume: 8
  start-page: 1
  issue: 74
  year: 2014
  ident: 10.1016/j.neunet.2017.12.005_b16
  article-title: Feedforward object-vision models only tolerate small image variations compared to human
  publication-title: Frontiers in Computational Neuroscience
– ident: 10.1016/j.neunet.2017.12.005_b20
– volume: 46
  start-page: 1762
  issue: 11
  year: 2006
  ident: 10.1016/j.neunet.2017.12.005_b26
  article-title: Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited
  publication-title: Vision Research
  doi: 10.1016/j.visres.2005.10.002
– volume: 38
  start-page: 539
  year: 2001
  ident: 10.1016/j.neunet.2017.12.005_b8
  article-title: Networks of integrate-and-fire neurons using rank order coding B: Spike timing dependent plasticity and emergence of orientation selectivity
  publication-title: Neurocomputing
  doi: 10.1016/S0925-2312(01)00403-9
– ident: 10.1016/j.neunet.2017.12.005_b57
  doi: 10.1109/EBCCSP.2015.7300698
– volume: 62
  start-page: 281
  issue: 2
  year: 2009
  ident: 10.1016/j.neunet.2017.12.005_b33
  article-title: Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex
  publication-title: Neuron
  doi: 10.1016/j.neuron.2009.02.025
– volume: 3
  issue: 3
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b43
  article-title: Rank order coding: a retinal information decoding strategy revealed by large-scale multielectrode array retinal recordings
  publication-title: Eneuro
  doi: 10.1523/ENEURO.0134-15.2016
– year: 2002
  ident: 10.1016/j.neunet.2017.12.005_b45
– volume: 34
  start-page: 7575
  issue: 22
  year: 2014
  ident: 10.1016/j.neunet.2017.12.005_b18
  article-title: Associative Hebbian synaptic plasticity in primate visual cortex
  publication-title: The Journal of Neuroscience
  doi: 10.1523/JNEUROSCI.0983-14.2014
– volume: 48
  start-page: 109
  year: 2013
  ident: 10.1016/j.neunet.2017.12.005_b2
  article-title: Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule
  publication-title: Neural Networks
  doi: 10.1016/j.neunet.2013.07.012
– volume: 9
  start-page: 99
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b11
  article-title: Unsupervised learning of digit recognition using spike-timing-dependent plasticity
  publication-title: Frontiers in Computational Neuroscience
  doi: 10.3389/fncom.2015.00099
– ident: 10.1016/j.neunet.2017.12.005_b40
  doi: 10.1109/IJCNN.2016.7727212
– ident: 10.1016/j.neunet.2017.12.005_b42
  doi: 10.1109/WACV.2011.5711540
– volume: 8
  start-page: 32
  issue: 1
  year: 2002
  ident: 10.1016/j.neunet.2017.12.005_b34
  article-title: Computing with spikes
  publication-title: Special Issue on Foundations of Information Processing of TELEMATIK
– volume: 6
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b7
  article-title: Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence
  publication-title: Scientific Reports
  doi: 10.1038/srep27755
– volume: 86
  start-page: 2278
  issue: 11
  year: 1998
  ident: 10.1016/j.neunet.2017.12.005_b30
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proceedings of the IEEE
  doi: 10.1109/5.726791
– volume: 6
  start-page: 32672
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b24
  article-title: Deep networks resemble human feed-forward vision in invariant object recognition
  publication-title: Scientific Reports
  doi: 10.1038/srep32672
– start-page: 1
  year: 2009
  ident: 10.1016/j.neunet.2017.12.005_b31
  article-title: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
– volume: 104
  start-page: 6424
  issue: 15
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b48
  article-title: A feedforward architecture accounts for rapid categorization
  publication-title: Proceedings of the National Academy of Sciences
  doi: 10.1073/pnas.0700622104
– volume: 310
  start-page: 863
  issue: 5749
  year: 2005
  ident: 10.1016/j.neunet.2017.12.005_b19
  article-title: Fast readout of object identity from macaque inferior temporal cortex
  publication-title: Science
  doi: 10.1126/science.1117593
– volume: 12
  start-page: 288
  issue: 3
  year: 2013
  ident: 10.1016/j.neunet.2017.12.005_b44
  article-title: Immunity to device variations in a spiking neural network with memristive nanodevices
  publication-title: IEEE Transactions on Nanotechnology
  doi: 10.1109/TNANO.2013.2250995
– volume: 22
  start-page: 332
  issue: 4
  year: 2012
  ident: 10.1016/j.neunet.2017.12.005_b37
  article-title: Stimulus timing-dependent plasticity in high-level vision
  publication-title: Current Biology
  doi: 10.1016/j.cub.2012.01.003
– volume: 113
  start-page: 54
  issue: 1
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b6
  article-title: Spiking deep convolutional neural networks for energy-efficient object recognition
  publication-title: International Journal of Computer Vision
  doi: 10.1007/s11263-014-0788-3
– volume: 10
  start-page: 92
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b25
  article-title: Humans and deep networks largely agree on which kinds of variation make object recognition harder
  publication-title: Frontiers in Computational Neuroscience
  doi: 10.3389/fncom.2016.00092
– volume: 10
  start-page: e1003963
  issue: 12
  year: 2014
  ident: 10.1016/j.neunet.2017.12.005_b5
  article-title: Deep neural networks rival the representation of primate IT cortex for core visual object recognition
  publication-title: PLoS Computational Biology
  doi: 10.1371/journal.pcbi.1003963
– volume: 19
  start-page: 2881
  issue: 11
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b3
  article-title: Learning real-world stimuli in a neural network with spike-driven synaptic dynamics
  publication-title: Neural Computation
  doi: 10.1162/neco.2007.19.11.2881
– volume: 10
  start-page: 732
  issue: 6
  year: 2000
  ident: 10.1016/j.neunet.2017.12.005_b14
  article-title: Complementary roles of basal ganglia and cerebellum in learning and motor control
  publication-title: Current Opinion in Neurobiology
  doi: 10.1016/S0959-4388(00)00153-7
– ident: 10.1016/j.neunet.2017.12.005_b58
  doi: 10.1007/978-3-319-10590-1_53
– volume: 7
  start-page: 2
  issue: February
  year: 2013
  ident: 10.1016/j.neunet.2017.12.005_b47
  article-title: STDP and STDP variations with memristors for spiking neuromorphic learning systems
  publication-title: Frontiers in Neuroscience
– volume: 43
  start-page: 566
  issue: 2
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b32
  article-title: A 128×128 120 dB 15 µs latency temporal contrast vision sensor
  publication-title: IEEE Journal of Solid State Circuits
  doi: 10.1109/JSSC.2007.914337
– volume: 11
  start-page: e1004566
  issue: 12
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b4
  article-title: Mirrored STDP implements autoencoder learning in a network of spiking neurons
  publication-title: PLoS Computational Biology
  doi: 10.1371/journal.pcbi.1004566
– volume: 7
  start-page: 99
  issue: 3
  year: 2003
  ident: 10.1016/j.neunet.2017.12.005_b46
  article-title: Taking the max from neuronal responses
  publication-title: Trends in Cognitive Sciences
  doi: 10.1016/S1364-6613(03)00023-8
– volume: 26
  start-page: 219
  issue: 2
  year: 2009
  ident: 10.1016/j.neunet.2017.12.005_b56
  article-title: Virtual Retina: a biological retina model and simulator, with contrast gain control
  publication-title: Journal of Computational Neuroscience
  doi: 10.1007/s10827-008-0108-4
– volume: 205
  start-page: 382
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b23
  article-title: Bio-inspired unsupervised learning of visual features leads to robust invariant object recognition
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2016.04.029
– volume: 26
  start-page: 1650030
  issue: 7
  year: 2016
  ident: 10.1016/j.neunet.2017.12.005_b35
  article-title: A computational framework for realistic Retina modeling
  publication-title: International Journal of Neural Systems
  doi: 10.1142/S0129065716500301
– year: 1991
  ident: 10.1016/j.neunet.2017.12.005_b52
– volume: 36
  start-page: 193
  issue: 4
  year: 1980
  ident: 10.1016/j.neunet.2017.12.005_b15
  article-title: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
  publication-title: Biological Cybernetics
  doi: 10.1007/BF00344251
– volume: 3
  start-page: e31
  issue: 2
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b36
  article-title: Unsupervised learning of visual features through spike timing dependent plasticity
  publication-title: PLoS Computational Biology
  doi: 10.1371/journal.pcbi.0030031
– ident: 10.1016/j.neunet.2017.12.005_b13
  doi: 10.1109/ICRC.2016.7738691
– ident: 10.1016/j.neunet.2017.12.005_b51
– volume: 11
  start-page: 333
  issue: 8
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b9
  article-title: Untangling invariant object recognition
  publication-title: Trends in Cognitive Sciences
  doi: 10.1016/j.tics.2007.06.010
– ident: 10.1016/j.neunet.2017.12.005_b17
– ident: 10.1016/j.neunet.2017.12.005_b27
– volume: 10
  start-page: e1003915
  issue: 11
  year: 2014
  ident: 10.1016/j.neunet.2017.12.005_b22
  article-title: Deep supervised, but not unsupervised, models may explain IT cortical representation
  publication-title: PLoS Computational Biology
  doi: 10.1371/journal.pcbi.1003915
– start-page: 255
  year: 1998
  ident: 10.1016/j.neunet.2017.12.005_b28
  article-title: Convolutional networks for images, speech, and time series
– volume: 381
  start-page: 520
  issue: 6582
  year: 1996
  ident: 10.1016/j.neunet.2017.12.005_b54
  article-title: Speed of processing in the human visual system
  publication-title: Nature
  doi: 10.1038/381520a0
– ident: 10.1016/j.neunet.2017.12.005_b21
  doi: 10.1109/ISCAS.2014.6865715
– volume: 73
  start-page: 415
  issue: 3
  year: 2012
  ident: 10.1016/j.neunet.2017.12.005_b10
  article-title: How does the brain solve visual object recognition?
  publication-title: Neuron
  doi: 10.1016/j.neuron.2012.01.010
– volume: 86
  start-page: 1145
  issue: 5
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b41
  article-title: Role of dopamine neurons in reward and aversion: a synaptic plasticity perspective
  publication-title: Neuron
  doi: 10.1016/j.neuron.2015.04.015
– volume: 49
  start-page: 183
  issue: 2
  year: 2006
  ident: 10.1016/j.neunet.2017.12.005_b38
  article-title: Receptive-field modification in rat visual cortex induced by paired visual stimulation and single-cell spiking
  publication-title: Neuron
  doi: 10.1016/j.neuron.2005.12.009
– ident: 10.1016/j.neunet.2017.12.005_b12
  doi: 10.1109/IJCNN.2015.7280696
– volume: 29
  start-page: 411
  issue: 3
  year: 2007
  ident: 10.1016/j.neunet.2017.12.005_b49
  article-title: Robust object recognition with cortex-like mechanisms
  publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence
  doi: 10.1109/TPAMI.2007.56
– volume: 7
  start-page: 178
  year: 2013
  ident: 10.1016/j.neunet.2017.12.005_b39
  article-title: Real-time classification and sensor fusion with a spiking deep belief network
  publication-title: Frontiers in Neuroscience
– volume: 14
  start-page: 715
  issue: 6
  year: 2001
  ident: 10.1016/j.neunet.2017.12.005_b53
  article-title: Spike-based strategies for rapid processing
  publication-title: Neural Networks
  doi: 10.1016/S0893-6080(01)00083-1
– volume: 521
  start-page: 436
  issue: 7553
  year: 2015
  ident: 10.1016/j.neunet.2017.12.005_b29
  article-title: Deep learning
  publication-title: Nature
  doi: 10.1038/nature14539
SSID ssj0006843
Score 2.674693
Snippet Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or...
SourceID hal
proquest
pubmed
crossref
elsevier
SourceType Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 56
SubjectTerms Action Potentials - physiology
Animals
Cognitive science
Computer Simulation - trends
Deep learning
Humans
Learning - physiology
Models, Neurological
Neural Networks (Computer)
Neuronal Plasticity - physiology
Neurons - physiology
Neuroscience
Object recognition
Pattern Recognition, Visual - physiology
Photic Stimulation - methods
Spiking neural network
STDP
Temporal coding
Visual Perception - physiology
Title STDP-based spiking deep convolutional neural networks for object recognition
URI https://dx.doi.org/10.1016/j.neunet.2017.12.005
https://www.ncbi.nlm.nih.gov/pubmed/29328958
https://www.proquest.com/docview/1989579835
https://hal.science/hal-02341957
Volume 99
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider Elsevier
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=STDP-based+spiking+deep+convolutional+neural+networks+for+object+recognition&rft.jtitle=Neural+networks&rft.au=Kheradpisheh%2C+Saeed+Reza&rft.au=Ganjtabesh%2C+Mohammad&rft.au=Thorpe%2C+Simon+J.&rft.au=Masquelier%2C+Timoth%C3%A9e&rft.date=2018-03-01&rft.issn=0893-6080&rft.volume=99&rft.spage=56&rft.epage=67&rft_id=info:doi/10.1016%2Fj.neunet.2017.12.005&rft.externalDBID=n%2Fa&rft.externalDocID=10_1016_j_neunet_2017_12_005