Transfer Sparse Coding for Robust Image Representation

Bibliographic Details
Published in: 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 407 - 414
Main Authors: Mingsheng Long, Guiguang Ding, Jianmin Wang, Jiaguang Sun, Yuchen Guo, Philip S. Yu
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2013
Summary: Sparse coding learns a set of basis functions such that each input signal can be well approximated by a linear combination of just a few of the bases. It has attracted increasing interest due to its state-of-the-art performance in BoW-based image representation. However, when labeled and unlabeled images are sampled from different distributions, they may be quantized into different visual words of the codebook and encoded with different representations, which may severely degrade classification performance. In this paper, we propose a Transfer Sparse Coding (TSC) approach to construct robust sparse representations for classifying cross-distribution images accurately. Specifically, we aim to minimize the distribution divergence between the labeled and unlabeled images, and incorporate this criterion into the objective function of sparse coding to make the new representations robust to the distribution difference. Experiments show that TSC can significantly outperform state-of-the-art methods on three types of computer vision datasets.
ISSN: 1063-6919
DOI: 10.1109/CVPR.2013.59
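
The summary describes augmenting the standard sparse coding objective with a penalty on the distribution divergence between labeled (source) and unlabeled (target) images. A minimal sketch of how such a combined objective could be written, assuming the divergence is measured by an empirical Maximum Mean Discrepancy (MMD) over the sparse codes (the symbols D, S, M and the weights lambda, mu are illustrative, not taken from the paper):

\[
\min_{D,\,S} \; \|X - D S\|_F^2 \;+\; \lambda \sum_{i=1}^{n} \|s_i\|_1 \;+\; \mu \, \mathrm{tr}\!\left(S M S^{\top}\right)
\]

Here X = [x_1, ..., x_n] stacks the features of both labeled and unlabeled images, D is the learned codebook, and S = [s_1, ..., s_n] holds the sparse codes. Under the MMD assumption, M is the matrix with M_{ij} = 1/n_s^2 if x_i and x_j are both labeled, 1/n_t^2 if both are unlabeled, and -1/(n_s n_t) otherwise, so that tr(S M S^T) equals the squared distance between the mean sparse codes of the two sets; minimizing it encodes images from the two distributions with similar representations, as the summary describes.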