A kernel-based sparsity preserving method for semi-supervised classification

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 139, pp. 345–356
Main Authors: Gu, Nannan; Wang, Di; Fan, Mingyu; Meng, Deyu
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 02.09.2014

Summary: In this paper, we propose an effective approach to semi-supervised classification through kernel-based sparse representation. The new method computes the sparse representation of the data in the feature space, and the learner is then subject to a cost function that aims to preserve the sparse representation coefficients. By mapping the data into the feature space, the so-called “l2-norm problem” that may arise when sparse representations are applied directly to non-image data classification is naturally alleviated; at the same time, the label of a data point can be reconstructed more precisely from the labels of other data points using the sparse representation coefficients. Inherited from sparse representation, our method adaptively establishes relationships between data points and has high discriminative ability. Furthermore, the new method has a natural multi-class explicit expression for new samples. Experimental results on several benchmark data sets demonstrate the effectiveness of the method.
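The abstract describes a two-step pipeline: sparsely code each point by the other points in a kernel-induced feature space, then infer labels so that each point's soft label is reproducible from the same sparse coefficients. The Python sketch below illustrates one plausible reading of that pipeline, not the authors' implementation: it assumes an RBF kernel, solves the kernelized sparse coding with scikit-learn's Lasso applied to a Cholesky (empirical) feature map of the Gram matrix, and performs a quadratic label-propagation step. All function names and parameters (gamma, alpha, mu) are illustrative assumptions.

```python
# Illustrative sketch only: a simplified reading of the abstract's pipeline,
# not the authors' exact algorithm. The RBF kernel, the Lasso-based sparse
# coding, and the parameters gamma, alpha, mu are assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel


def kernel_sparse_coefficients(X, gamma=1.0, alpha=0.01):
    """Sparsely code each point by the other points in the RBF feature space.

    The Cholesky factor of the Gram matrix serves as an empirical feature map,
    so an ordinary Lasso solves the kernelized sparse-coding problem:
    ||z_i - Z_{-i} s||^2 equals K_ii - 2 k_i^T s + s^T K s.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma=gamma)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(n))   # K ~= L @ L.T (jitter for stability)
    Z = L                                          # rows z_i satisfy z_i . z_j = K_ij
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)           # exclude the point itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Z[idx].T, Z[i])                  # columns = feature maps of the other points
        S[i, idx] = lasso.coef_
    return S


def sparsity_preserving_labels(S, y, labeled_mask, mu=10.0):
    """Infer soft labels F that are reproducible from the sparse codes (F ~= S F)
    while fitting the given labels on the labeled points."""
    n = S.shape[0]
    classes = np.unique(y[labeled_mask])
    Y = np.zeros((n, classes.size))
    for c, cls in enumerate(classes):
        Y[labeled_mask & (y == cls), c] = 1.0
    M = np.eye(n) - S
    D = np.diag(labeled_mask.astype(float))
    F = np.linalg.solve(M.T @ M + mu * D, mu * D @ Y)
    return classes[np.argmax(F, axis=1)]


if __name__ == "__main__":
    # Toy example: two Gaussian clusters with only six labeled points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    labeled = np.zeros(100, dtype=bool)
    labeled[[0, 1, 2, 50, 51, 52]] = True
    S = kernel_sparse_coefficients(X, gamma=0.5, alpha=0.01)
    pred = sparsity_preserving_labels(S, y, labeled, mu=10.0)
    print("accuracy on unlabeled points:", np.mean(pred[~labeled] == y[~labeled]))
```

The Cholesky factorization is only one convenient way to kernelize the Lasso step; any solver that minimizes s^T K s - 2 k_i^T s + alpha ||s||_1 directly would serve the same purpose.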
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2014.02.022