Experience transforms crossmodal object representations in the anterior temporal lobes

Bibliographic Details
Published in: eLife, Vol. 13
Main Authors: Li, Aedan Yue; Ladyka-Wojcik, Natalia; Qazilbash, Heba; Golestani, Ali; Walther, Dirk B; Martin, Chris B; Barense, Morgan D
Format: Journal Article
Language: English
Published: England: eLife Sciences Publications, Ltd, 22.04.2024
Summary: Combining information from multiple senses is essential to object recognition, core to the ability to learn concepts, make new inferences, and generalize across distinct entities. Yet how the mind combines sensory input into coherent crossmodal representations - the crossmodal binding problem - remains poorly understood. Here, we applied multi-echo fMRI across a 4-day paradigm, in which participants learned three-dimensional crossmodal representations created from well-characterized unimodal visual shape and sound features. Our novel paradigm decoupled the learned crossmodal object representations from their baseline unimodal shapes and sounds, thus allowing us to track the emergence of crossmodal object representations as they were learned by healthy adults. Critically, we found that two anterior temporal lobe structures - temporal pole and perirhinal cortex - differentiated learned from non-learned crossmodal objects, even when controlling for the unimodal features that composed those objects. These results provide evidence for integrated crossmodal object representations in the anterior temporal lobes that differed from the representations of the unimodal features. Furthermore, we found that perirhinal cortex representations were by default biased toward visual shape, but this initial visual bias was attenuated by crossmodal learning. Thus, crossmodal learning transformed perirhinal representations such that they were no longer predominantly grounded in the visual modality, which may be a mechanism by which object concepts gain their abstraction.
ISSN: 2050-084X
DOI: 10.7554/eLife.83382