Real-time classification of evoked emotions using facial feature tracking and physiological responses

Bibliographic Details
Published in: International Journal of Human-Computer Studies, Vol. 66, No. 5, pp. 303-317
Main Authors: Bailenson, Jeremy N.; Pontikakis, Emmanuel D.; Mauss, Iris B.; Gross, James J.; Jabon, Maria E.; Hutcherson, Cendri A.C.; Nass, Clifford; John, Oliver
Format: Journal Article
Language: English
Published: London: Elsevier Ltd, 01.05.2008

Summary: We present automated, real-time models built with machine learning algorithms that use video of subjects' faces in conjunction with physiological measurements to predict rated emotion (trained coders' second-by-second assessments of sadness or amusement). Input consisted of videotapes of 41 subjects watching emotionally evocative films, along with measures of their cardiovascular activity, somatic activity, and electrodermal responding. We built algorithms based on points extracted from the subjects' faces as well as their physiological responses. Strengths of the current approach are that (1) we assess the real behavior of subjects watching emotional videos rather than actors making facial poses, (2) the training data allow us to predict both emotion type (amusement versus sadness) and the intensity of each emotion, and (3) we provide a direct comparison between person-specific, gender-specific, and general models. Results demonstrated good fits for the models overall, with better performance for emotion categories than for emotion intensity, for amusement ratings than for sadness ratings, for a full model using both physiological measures and facial tracking than for either cue alone, and for person-specific models than for gender-specific or general models.
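The fusion strategy the abstract describes (concatenating facial-tracking features with physiological channels before classification) can be illustrated with a minimal sketch. All feature names, dimensions, and the synthetic data below are assumptions for illustration only; the paper's actual models and features are not reproduced here. A simple nearest-centroid classifier stands in for the machine learning algorithms the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, offset):
    """Hypothetical per-second feature vectors (not the paper's real features):
    tracked facial-point displacements fused with physiological channels
    (e.g. cardiovascular, somatic, electrodermal)."""
    facial = rng.normal(offset, 1.0, size=(n, 4))   # facial-tracking features
    physio = rng.normal(offset, 1.0, size=(n, 3))   # physiological features
    return np.hstack([facial, physio])              # fused feature vector

# Synthetic training data for the two emotion categories.
X_amuse = make_samples(50, offset=1.0)
X_sad = make_samples(50, offset=-1.0)

# Nearest-centroid classifier: one mean feature vector per emotion category.
centroids = {
    "amusement": X_amuse.mean(axis=0),
    "sadness": X_sad.mean(axis=0),
}

def classify(x):
    """Assign the emotion label whose centroid is closest in feature space."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(make_samples(1, offset=1.0)[0]))
```

A person-specific model, in this sketch, would simply fit the centroids on one subject's data; a general model would pool all subjects before computing them.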
ISSN: 1071-5819; 1095-9300
DOI: 10.1016/j.ijhcs.2007.10.011