Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition

Bibliographic Details
Published in: 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1 - 8
Main Authors: Ranzato, M.A., Huang, F.J., Boureau, Y.-L., LeCun, Y.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2007
ISBN: 9781424411795, 1424411793
ISSN: 1063-6919
DOI: 10.1109/CVPR.2007.383157

Summary:We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.
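To make the feature-extraction pipeline described in the summary concrete, below is a minimal sketch in PyTorch (an assumption; the original work predates this library and trained its filters with its own layer-wise unsupervised procedure, which is not shown here). Filter counts, kernel sizes, and the input resolution are illustrative placeholders, not the paper's settings; each level applies convolution filters, takes the max of each filter output within adjacent windows, and passes the result through a point-wise sigmoid, and two such levels are stacked so the second level sees patches of first-level features.

# Minimal sketch of the two-level invariant feature extractor (assumed PyTorch).
import torch
import torch.nn as nn

class InvariantFeatureLayer(nn.Module):
    """One level of the hierarchy: convolutions -> max-pooling -> sigmoid."""
    def __init__(self, in_channels, num_filters, kernel_size=9, pool_size=2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_filters, kernel_size)  # convolution filters
        self.pool = nn.MaxPool2d(pool_size)   # max of each filter output within adjacent windows
        self.act = nn.Sigmoid()               # point-wise non-linearity

    def forward(self, x):
        return self.act(self.pool(self.conv(x)))

# Second level is larger and more invariant because it operates on pooled
# first-level feature maps. Channel counts here are arbitrary examples.
extractor = nn.Sequential(
    InvariantFeatureLayer(in_channels=1, num_filters=16),
    InvariantFeatureLayer(in_channels=16, num_filters=64),
)

features = extractor(torch.randn(1, 1, 96, 96))  # dummy grayscale image
print(features.shape)  # e.g. torch.Size([1, 64, 18, 18])

A supervised classifier (e.g., a linear layer) would then be trained on these pooled features; in the paper the filters themselves are learned without labels, which is what allows good performance with very few labeled training samples.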