Implicit video emotion tagging from audiences’ facial expression

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 74, No. 13, pp. 4679–4706
Main Authors: Wang, Shangfei; Liu, Zhilei; Zhu, Yachen; He, Menghua; Chen, Xiaoping; Ji, Qiang
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.06.2015

Summary: In this paper, we propose a novel implicit video emotion tagging approach that explores the relationships between videos' common emotions, subjects' individualized emotions, and subjects' outer facial expressions. First, head motion and face appearance features are extracted. Then, the subjects' spontaneous facial expressions are recognized by Bayesian networks. Next, the relationships among the outer facial expressions, the inner individualized emotions, and the videos' common emotions are captured by another Bayesian network, which is then used to infer the emotional tags of videos. To validate the effectiveness of our approach, an emotion tagging experiment is conducted on the NVIE database. The experimental results show that head motion features improve the performance of both facial expression recognition and emotion tagging, and that the captured relations between the outer facial expressions, the inner individualized emotions, and the common emotions improve the performance of both common and individualized emotion tagging.
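The two-stage pipeline in the summary (facial expression recognized from features, then the video's emotional tag inferred from the expression) can be sketched as a chain of discrete conditional probability tables. This is an illustrative sketch only, not the authors' code: the variable names, states, and all probability values are invented toy numbers, and the real approach uses learned Bayesian networks over head motion and face appearance features.

```python
# Illustrative sketch (NOT the authors' implementation): two-stage
# discrete inference mimicking the pipeline feature -> expression -> tag.
# All states and probabilities below are hypothetical toy values.

# Stage 1: P(expression | observed feature) -- toy conditional table.
p_expr_given_feat = {
    "smile_motion": {"happy": 0.80, "neutral": 0.15, "sad": 0.05},
    "still_face":   {"happy": 0.10, "neutral": 0.70, "sad": 0.20},
}

# Stage 2: P(video common emotion | subject expression) -- toy table.
p_tag_given_expr = {
    "happy":   {"amusing": 0.85, "boring": 0.15},
    "neutral": {"amusing": 0.35, "boring": 0.65},
    "sad":     {"amusing": 0.10, "boring": 0.90},
}

def infer_tag(feature: str) -> dict:
    """Marginalize out the expression: P(tag | feature) =
    sum over expr of P(tag | expr) * P(expr | feature)."""
    tag_probs: dict = {}
    for expr, p_expr in p_expr_given_feat[feature].items():
        for tag, p_tag in p_tag_given_expr[expr].items():
            tag_probs[tag] = tag_probs.get(tag, 0.0) + p_expr * p_tag
    return tag_probs

probs = infer_tag("smile_motion")
print(max(probs, key=probs.get))  # prints "amusing" for these toy numbers
```

In the paper the second-stage network additionally links the subject's inner individualized emotion as an intermediate variable between expression and common tag; the sketch collapses that chain into a single table for brevity.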
ISSN: 1380-7501 (print); 1573-7721 (electronic)
DOI: 10.1007/s11042-013-1830-0