Video classification and recommendation based on affective analysis of viewers

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 119, pp. 101-110
Main Authors: Zhao, Sicheng; Yao, Hongxun; Sun, Xiaoshuai
Format: Journal Article
Language: English
Published: Elsevier B.V., 07.11.2013
Summary: Most previous work on video classification and recommendation has been based only on video content, without considering affective analysis of the viewers. In this paper, we present a novel method to classify and recommend videos based on affective analysis, chiefly facial expression recognition of viewers, by fusing spatio-temporal features. For spatial features, we integrate Haar-like features into compositional ones according to the features' correlation and train a mid-level classifier; this process is then embedded into an improved AdaBoost learning algorithm to obtain the spatial features. For temporal feature fusion, we adopt hidden dynamic conditional random fields (HDCRFs), which extend hidden conditional random fields (HCRFs) by introducing a time-dimension variable. The spatial features are embedded into the HDCRFs to recognize facial expressions. Experiments on the Cohn-Kanade database show that the proposed method achieves promising performance. Viewers' changing facial expressions are then collected frame by frame from a camera while they watch videos. Finally, we draw affective curves that depict how each viewer's affect changes over time. Using these curves, we segment each video into affective sections, classify videos into categories, and compute recommendation scores. Experimental results on our collected database show that most subjects are satisfied with the classification and recommendation results.
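
The affective-curve step summarized above can be illustrated with a short sketch. The Python code below is a hypothetical, minimal illustration only: it assumes per-frame expression probabilities (for example, from the HDCRF recognizer), an illustrative expression-to-valence mapping, and simple moving-average smoothing and thresholding; the paper's actual mapping, smoothing, segmentation, and scoring rules may differ.

import numpy as np

# Illustrative valence weights for basic expression classes (assumed values,
# not taken from the paper).
EXPRESSION_VALENCE = {
    "happiness": 1.0,
    "surprise": 0.5,
    "neutral": 0.0,
    "sadness": -0.6,
    "fear": -0.8,
    "anger": -0.9,
    "disgust": -1.0,
}

def affective_curve(frame_probs, window=15):
    """Turn per-frame expression probabilities into a smoothed affective curve.

    frame_probs: one dict per video frame, mapping expression name -> probability
                 (e.g. the output of the facial expression recognizer).
    window:      moving-average width in frames used to smooth the raw curve.
    """
    raw = np.array([
        sum(EXPRESSION_VALENCE.get(e, 0.0) * p for e, p in probs.items())
        for probs in frame_probs
    ])
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")

def segment_sections(curve, threshold=0.3):
    """Split the curve into affective sections where |affect| exceeds a threshold."""
    sections, start = [], None
    for i, v in enumerate(curve):
        if abs(v) >= threshold and start is None:
            start = i
        elif abs(v) < threshold and start is not None:
            sections.append((start, i))
            start = None
    if start is not None:
        sections.append((start, len(curve)))
    return sections

def recommendation_score(curve):
    """A simple illustrative score: mean affect over the video, rescaled to 0-100."""
    return float(np.clip(curve.mean(), -1.0, 1.0) + 1.0) * 50.0

In this sketch, segment_sections yields the affective sections used to partition a video, and recommendation_score reduces a whole curve to a single number that could be ranked across videos; both the 0.3 threshold and the 0-100 scaling are arbitrary placeholders.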
ISSN: 0925-2312
EISSN: 1872-8286
DOI: 10.1016/j.neucom.2012.04.042