Spike timing-based unsupervised learning of orientation, disparity, and motion representations in a spiking neural network

Bibliographic Details
Published in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1377-1386
Main Authors: Barbier, Thomas; Teuliere, Celine; Triesch, Jochen
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2021
Summary: Neuromorphic vision sensors present unique advantages over their frame-based counterparts. However, unsupervised learning of efficient visual representations from their asynchronous output is still a challenge, requiring a rethinking of traditional image and video processing methods. Here we present a network of leaky integrate-and-fire neurons that learns representations similar to those of simple and complex cells in the primary visual cortex of mammals from the input of two event-based vision sensors. Through the combination of spike-timing-dependent plasticity and homeostatic mechanisms, the network learns visual feature detectors for orientation, disparity, and motion in a fully unsupervised fashion. We validate our approach on a mobile robotic platform.
ISSN: 2160-7516
DOI: 10.1109/CVPRW53098.2021.00152
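
As a rough illustration of the ingredients named in the summary (leaky integrate-and-fire dynamics, spike-timing-dependent plasticity, and a homeostatic mechanism), the following minimal Python/NumPy sketch simulates a single LIF neuron driven by synthetic event-like input and adapts its weights with a pair-based, trace-implemented STDP rule plus a homeostatic firing threshold. This is not the authors' implementation: the input statistics, the particular STDP variant, and every parameter value below are assumptions chosen for illustration only.

import numpy as np

rng = np.random.default_rng(0)

n_inputs = 64        # assumed number of presynaptic event-pixel inputs
n_steps = 2000       # simulation length in time steps
dt = 1.0             # ms per step (assumption)

tau_m = 20.0         # membrane time constant (ms)
tau_pre = 20.0       # presynaptic STDP trace time constant (ms)
tau_post = 20.0      # postsynaptic STDP trace time constant (ms)
a_plus, a_minus = 0.01, 0.012   # potentiation / depression amplitudes
theta = 1.0          # firing threshold, adapted homeostatically below
target_rate = 0.02   # target firing probability per step (assumption)
theta_gain = 0.001   # homeostatic adaptation rate

w = rng.uniform(0.0, 0.1, n_inputs)  # synaptic weights
v = 0.0                              # membrane potential
x_pre = np.zeros(n_inputs)           # presynaptic eligibility traces
y_post = 0.0                         # postsynaptic trace

for _ in range(n_steps):
    # Synthetic stand-in for asynchronous event-camera input: sparse binary spikes.
    pre_spikes = (rng.random(n_inputs) < 0.05).astype(float)

    # Leaky integration of the weighted input spikes.
    v += dt * (-v / tau_m) + w @ pre_spikes

    # Decay the STDP traces, then add the current presynaptic spikes.
    x_pre += dt * (-x_pre / tau_pre) + pre_spikes
    y_post += dt * (-y_post / tau_post)

    if v >= theta:                 # postsynaptic spike
        v = 0.0                    # reset the membrane potential
        w += a_plus * x_pre        # potentiate pre-before-post pairings
        y_post += 1.0
        fired = 1.0
    else:
        fired = 0.0

    # Depress post-before-pre pairings whenever a presynaptic spike arrives.
    w -= a_minus * y_post * pre_spikes

    # Homeostasis: nudge the threshold so the firing rate tracks the target.
    theta += theta_gain * (fired - target_rate)

    np.clip(w, 0.0, 1.0, out=w)    # keep weights bounded

print(f"final threshold: {theta:.3f}  mean weight: {w.mean():.3f}")

Running the script shows the threshold settling near a value that yields roughly the target firing rate while the weight distribution differentiates, which is the qualitative behavior the combination of STDP and homeostasis is meant to produce; learning orientation, disparity, and motion selectivity as in the paper would additionally require structured stereo event input and a population of competing simple and complex cells.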