3D facial expression recognition using SIFT descriptors of automatically detected keypoints

Bibliographic Details
Published in: The Visual Computer, Vol. 27, No. 11, pp. 1021–1036
Main Authors: Berretti, Stefano; Ben Amor, Boulbaba; Daoudi, Mohamed; del Bimbo, Alberto
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer-Verlag, 01.11.2011

Summary: Methods to recognize human facial expressions have been proposed mainly for 2D still images and videos. In this paper, the problem of person-independent facial expression recognition is addressed using 3D geometry information extracted from the 3D shape of the face. To this end, a completely automatic approach is proposed that relies on identifying a set of facial keypoints, computing SIFT feature descriptors of depth images of the face around sample points defined starting from the facial keypoints, and selecting the subset of features with maximum relevance. By training a Support Vector Machine (SVM) for each facial expression to be recognized and combining them into a multi-class classifier, an average recognition rate of 78.43% has been obtained on the BU-3DFE database. Comparison with competing approaches using a common experimental setting on the BU-3DFE database shows that the proposed solution achieves state-of-the-art results. The same 3D face representation framework and testing database have also been used to perform 3D facial expression retrieval (i.e., retrieving 3D scans showing the same facial expression as a target subject), with results proving the viability of the proposed solution.
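
The summary describes a concrete pipeline: facial keypoints, SIFT descriptors computed on depth images around sample points, maximum-relevance feature selection, and one SVM per expression combined into a multi-class classifier. The sketch below is only an illustrative approximation of such a pipeline using OpenCV and scikit-learn, not the authors' implementation: the helper names (face_descriptor, train_expression_classifier), the fixed patch scale, the mutual-information selector used as a stand-in for the paper's maximum-relevance criterion, and the SVM parameters are all assumptions.

```python
# Illustrative sketch (not the authors' code): SIFT descriptors at given
# sample points of a facial depth image, relevance-based feature selection,
# and per-expression SVMs combined one-vs-rest.
import cv2
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def face_descriptor(depth_image, sample_points, patch_scale=8.0):
    """Concatenate SIFT descriptors computed at (x, y) sample points of a
    single-channel 8-bit depth image of the face (patch_scale is assumed)."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_scale)
                 for (x, y) in sample_points]
    _, descriptors = sift.compute(depth_image, keypoints)
    return descriptors.reshape(-1)  # one feature vector per face scan


def train_expression_classifier(X, y, n_features=300):
    """X: stacked face descriptors, y: expression labels (e.g., the six
    prototypical BU-3DFE expressions encoded as integers)."""
    model = make_pipeline(
        StandardScaler(),
        # Mutual-information ranking as a stand-in for maximum-relevance selection.
        SelectKBest(mutual_info_classif, k=n_features),
        # One SVM per expression, combined into a multi-class classifier.
        OneVsRestClassifier(SVC(kernel="rbf", C=10.0)),
    )
    return model.fit(X, y)
```

The one-vs-rest wrapper mirrors the idea of training one SVM per expression and combining the outputs into a multi-class decision; automatic keypoint detection on the 3D scans, which the paper performs upstream, is assumed to be available and is not shown here.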
ISSN: 0178-2789; 1432-2315; 1432-8726
DOI: 10.1007/s00371-011-0611-x