PCA-based dictionary building for accurate facial expression recognition via sparse representation
| Published in | Journal of Visual Communication and Image Representation, Vol. 25, No. 5, pp. 1082–1092 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Amsterdam: Elsevier Inc., 01.07.2014 |
Summary:

•We propose a new dictionary-building algorithm based on Principal Component Analysis (PCA).
•The new dictionary-building algorithm is used for facial expression recognition.
•The PCA-based dictionary is less dependent on identity and more reliable.
•State-of-the-art sparse solvers, including BP, MP, and SL0, are compared.
•The generalization power of the proposed algorithm across datasets is studied.

Sparse representation is a recent approach that has received significant attention for image classification and recognition. This paper presents a PCA-based dictionary-building method for sparse representation and classification of universal facial expressions. In our method, expressive facial images of each subject are subtracted from a neutral facial image of the same subject. PCA is then applied to these difference images to model the variations within each class of facial expressions, and the learned principal components are used as the atoms of the dictionary. In the classification step, a given test image is sparsely represented as a linear combination of the principal components of the six basic facial expressions. Extensive experiments on several publicly available face datasets (CK+, MMI, and Bosphorus) show that our framework improves the recognition rate of state-of-the-art techniques by about 6%. This approach is promising and can further be applied to visual object recognition.
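The summary above outlines the full pipeline: per-class difference images, PCA-learned dictionary atoms, sparse coding, and residual-based classification. Below is a minimal sketch of how such a pipeline might look, assuming vectorized difference images, scikit-learn's PCA, and orthogonal matching pursuit as a stand-in for the BP/MP/SL0 solvers compared in the paper; the function names, number of atoms per class, and sparsity level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import orthogonal_mp


def build_dictionary(diff_images_by_class, n_atoms_per_class=10):
    """diff_images_by_class: dict mapping an expression label to an
    (n_samples, n_pixels) array of vectorized expressive-minus-neutral
    difference images for that expression (n_atoms_per_class is an
    assumed choice, not a value from the paper)."""
    atoms, labels = [], []
    for label, diffs in diff_images_by_class.items():
        pca = PCA(n_components=n_atoms_per_class)
        pca.fit(diffs)                     # model the within-class variation
        atoms.append(pca.components_)      # principal components become atoms
        labels.extend([label] * n_atoms_per_class)
    D = np.vstack(atoms).T                 # columns of D are dictionary atoms
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    return D, np.array(labels)


def classify(test_diff, D, atom_labels, sparsity=8):
    """Sparse-code a vectorized test difference image over D and return the
    expression whose atoms yield the smallest reconstruction residual."""
    x = orthogonal_mp(D, test_diff, n_nonzero_coefs=sparsity)
    best_label, best_residual = None, np.inf
    for label in np.unique(atom_labels):
        mask = atom_labels == label
        residual = np.linalg.norm(test_diff - D[:, mask] @ x[mask])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```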
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2014.03.006