Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

Bibliographic Details
Published in: Journal of Vision (Charlottesville, Va.), Vol. 13, No. 4, p. 13
Main Authors: Du, S.; Martinez, A. M.
Format: Journal Article
Language: English
Published: United States, The Association for Research in Vision and Ophthalmology, 18.03.2013
ISSN: 1534-7362
DOI: 10.1167/13.4.13

Summary: Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10-20 ms), even at low resolutions. Fear and anger are recognized the slowest (100-250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70-200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues, and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models.