Face neurons encode nonsemantic features

The primate inferior temporal cortex contains neurons that respond more strongly to faces than to other objects. Termed “face neurons,” these neurons are thought to be selective for faces as a semantic category. However, face neurons also partly respond to clocks, fruits, and single eyes, raising the question of whether face neurons are better described as selective for visual features related to faces but dissociable from them. We used a recently described algorithm, XDream, to evolve stimuli that strongly activated face neurons. XDream leverages a generative neural network that is not limited to realistic objects. Human participants assessed images evolved for face neurons and for nonface neurons and natural images depicting faces, cars, fruits, etc. Evolved images were consistently judged to be distinct from real faces. Images evolved for face neurons were rated as slightly more similar to faces than images evolved for nonface neurons. There was a correlation among natural images between face neuron activity and subjective “faceness” ratings, but this relationship did not hold for face neuron–evolved images, which triggered high activity but were rated low in faceness. Our results suggest that so-called face neurons are better described as tuned to visual features rather than semantic categories.
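The method at the heart of the abstract is an evolutionary image search: latent codes of a generative network are iteratively selected, recombined, and mutated so that the decoded images drive a target neuron's firing rate as high as possible. The sketch below illustrates that general scheme only; `generator` (latent code to image) and `measure_response` (image to firing rate) are hypothetical placeholders, and the population size, survivor count, and mutation scale are illustrative values, not the parameters of the authors' XDream implementation.

```python
# Minimal sketch of an XDream-style evolution loop (hypothetical API).
# `generator` and `measure_response` are assumed callables, not the
# authors' actual code.
import numpy as np

rng = np.random.default_rng(0)

def evolve(generator, measure_response, latent_dim=4096,
           pop_size=40, n_survivors=10, n_generations=100,
           mutation_scale=0.1):
    """Evolve latent codes whose decoded images maximize a neuron's response."""
    population = rng.standard_normal((pop_size, latent_dim))
    for _ in range(n_generations):
        # Decode each latent code into an image and score it by the
        # target neuron's response.
        images = [generator(z) for z in population]
        fitness = np.array([measure_response(img) for img in images])
        # Keep the top-scoring codes as parents.
        parents = population[np.argsort(fitness)[-n_survivors:]]
        # Recombine random parent pairs element-wise, then add
        # Gaussian mutation to form the next generation.
        idx_a = rng.integers(n_survivors, size=pop_size)
        idx_b = rng.integers(n_survivors, size=pop_size)
        mask = rng.random((pop_size, latent_dim)) < 0.5
        children = np.where(mask, parents[idx_a], parents[idx_b])
        population = children + mutation_scale * rng.standard_normal(
            (pop_size, latent_dim))
    return population
```

In the experiments summarized above, the fitness signal came from recorded neuronal firing rates; for a purely offline check, `measure_response` could instead be the activation of a unit in a pretrained vision model.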

Bibliographic Details
Published in: Proceedings of the National Academy of Sciences - PNAS, Vol. 119, no. 16, p. e2118705119
Main Authors: Bardon, Alexandra; Xiao, Will; Ponce, Carlos R.; Livingstone, Margaret S.; Kreiman, Gabriel
Format: Journal Article
Language: English
Published: United States: National Academy of Sciences, 19.04.2022
Author contributions: A.B., W.X., C.R.P., M.S.L., and G.K. designed research; A.B., W.X., C.R.P., and M.S.L. performed research; A.B., W.X., and G.K. analyzed data; and A.B., W.X., C.R.P., M.S.L., and G.K. wrote the paper.
Contributed by Margaret S. Livingstone; received October 13, 2021; accepted February 17, 2022; reviewed by Marlene Behrmann and Sabine Kastner
A.B. and W.X. contributed equally to this work.
ISSN: 0027-8424
EISSN: 1091-6490
DOI: 10.1073/pnas.2118705119