Cross-Domain Facial Expression Recognition Using Supervised Kernel Mean Matching

Bibliographic Details
Published in: 2012 Eleventh International Conference on Machine Learning and Applications, Vol. 2, pp. 326-332
Main Authors: Yun-Qian Miao, Araujo, R., Kamel, M. S.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2012
ISBN: 1467346519, 9781467346511
DOI: 10.1109/ICMLA.2012.178

Summary: Even though facial expressions carry universal meaning in communication, their appearance varies widely due to many factors, such as different image acquisition setups, ages, genders, and cultural backgrounds. Since collecting a sufficient number of annotated samples for each target domain is impractical, this paper investigates facial expression recognition in the more challenging situation where the training and testing samples are drawn from different domains. To address this problem, after observing the unsatisfactory performance of the Kernel Mean Matching (KMM) algorithm, we propose a supervised extension, called Supervised Kernel Mean Matching (SKMM), that matches the distributions in a class-to-class manner. The new approach stands out by simultaneously matching the distributions and preserving the discriminative information between classes. Extensive experimental studies on four cross-dataset facial expression recognition tasks show promising improvements from the proposed method, in which a small number of labeled samples guide the matching process.
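
As a rough illustration of the technique the summary describes, the sketch below implements standard (unsupervised) Kernel Mean Matching as a quadratic program over source-sample importance weights, and then applies it class by class in the spirit of SKMM's class-to-class matching. This is an assumption-laden sketch, not the paper's implementation: the RBF kernel, the SLSQP solver, the bound B, the tolerance eps, and the helper names (rbf_kernel, kmm_weights, skmm_weights) are illustrative choices, and the exact SKMM formulation in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kmm_weights(X_src, X_tgt, gamma=1.0, B=10.0, eps=None):
    """Standard (unsupervised) KMM: estimate importance weights beta for the
    source samples so that the weighted source mean matches the target mean
    in the RKHS induced by the kernel (Huang et al.-style QP)."""
    n_s, n_t = len(X_src), len(X_tgt)
    eps = eps if eps is not None else B / np.sqrt(n_s)   # common heuristic
    K = rbf_kernel(X_src, X_src, gamma)                  # n_s x n_s
    kappa = (n_s / n_t) * rbf_kernel(X_src, X_tgt, gamma).sum(axis=1)

    # Quadratic objective 0.5 * beta^T K beta - kappa^T beta and its gradient.
    obj = lambda b: 0.5 * b @ K @ b - kappa @ b
    grad = lambda b: K @ b - kappa

    cons = (  # |sum(beta) - n_s| <= n_s * eps, written as two inequalities
        {"type": "ineq", "fun": lambda b: n_s * (1 + eps) - b.sum()},
        {"type": "ineq", "fun": lambda b: b.sum() - n_s * (1 - eps)},
    )
    res = minimize(obj, np.ones(n_s), jac=grad, bounds=[(0.0, B)] * n_s,
                   constraints=cons, method="SLSQP")
    return res.x

def skmm_weights(X_src, y_src, X_tgt, y_tgt, **kw):
    """Class-to-class matching in the spirit of SKMM: run KMM separately per
    expression class, guided by the few labeled target samples. Classes with
    no labeled target samples keep uniform weights. (Illustrative only.)"""
    beta = np.ones(len(X_src))
    for c in np.unique(y_src):
        s_idx, t_idx = np.where(y_src == c)[0], np.where(y_tgt == c)[0]
        if len(t_idx):
            beta[s_idx] = kmm_weights(X_src[s_idx], X_tgt[t_idx], **kw)
    return beta
```

In a typical covariate-shift pipeline (not necessarily the paper's exact setup), the returned weights would be passed as per-sample weights to a classifier trained on the source features, e.g. a weighted SVM, before evaluating on the target-domain expression data.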