Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey

Bibliographic Details
Published in: The Journal of Neuroscience, Vol. 32, No. 45, pp. 15952–15962
Main Authors: Furl, Nicholas; Hadj-Bouziane, Fadila; Liu, Ning; Averbeck, Bruno B.; Ungerleider, Leslie G.
Format: Journal Article
Language: English
Published: United States, Society for Neuroscience, 07.11.2012

More Information
Summary: Humans adeptly use visual motion to recognize socially relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We used functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas [motion in faces (Mf) areas], which responded more to dynamic faces compared with static faces, and face-selective areas, which responded selectively to faces compared with objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and nonconfusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in fundus of the superior temporal and middle superior temporal polysensory/lower superior temporal areas, confirming their already well established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than to static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion.
PMCID: PMC3539420
B.A. and L.G.U. contributed equally and are co-senior authors.
Author contributions: F.H.-B. and L.G.U. designed research; F.H.-B. and N.L. performed research; N.F. and F.H.-B. analyzed data; N.F., B.A., and L.G.U. wrote the paper.
N.F. and F.H.-B. contributed equally and are co-first authors.
ISSN: 0270-6474; 1529-2401
DOI: 10.1523/jneurosci.1992-12.2012