The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality

Bibliographic Details
Published in: The Journal of Neuroscience, Vol. 39, No. 39, pp. 7722-7736
Main Authors: Deniz, Fatma; Nunez-Elizalde, Anwar O.; Huth, Alexander G.; Gallant, Jack L.
Format: Journal Article
Language: English
Published: Society for Neuroscience, United States, 25.09.2019
Summary: An integral part of human language is the capacity to extract meaning from spoken and written words, but the precise relationship between brain representations of information perceived by listening versus reading is unclear. Prior neuroimaging studies have shown that semantic information in spoken language is represented in multiple regions of the human cerebral cortex, while amodal semantic information appears to be represented in a few broad brain regions. However, previous studies were too insensitive to determine whether semantic representations were shared at a fine level of detail rather than merely at a coarse scale. We used fMRI to record brain activity in two separate experiments while participants listened to or read several hours of the same narrative stories, and then created voxelwise encoding models to characterize semantic selectivity in each voxel and in each individual participant. We find that semantic tuning during listening and reading is highly correlated in most semantically selective regions of cortex, and that models estimated using one modality accurately predict voxel responses in the other modality. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.

Significance Statement: Humans can comprehend the meaning of words from both spoken and written language. It is therefore important to understand the relationship between the brain representations of spoken and written language. Here, we show that although the representation of semantic information in the human brain is quite complex, the semantic representations evoked by listening versus reading are almost identical. These results suggest that the representation of language semantics is independent of the sensory modality through which the semantic information is received.
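The voxelwise encoding approach described in the summary amounts to fitting a regularized linear regression per voxel from stimulus features (semantic features of the story words) to that voxel's BOLD time course, then testing cross-modal transfer by fitting on one modality and predicting the other. Below is a minimal sketch of that idea using scikit-learn's Ridge and random placeholder data; the array shapes, the alpha value, and the helper names (fit_encoding_model, voxelwise_corr) are illustrative assumptions, not the paper's actual pipeline, which among other things models hemodynamic delays and cross-validates the regularization strength.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder dimensions (illustrative, not the study's): T fMRI timepoints,
# F semantic features per timepoint, V cortical voxels.
T, F, V = 600, 200, 1000

# Stand-ins for real data: feature matrices derived from the story text
# (e.g., word embeddings pooled within each TR) and measured BOLD responses.
X_listen = rng.standard_normal((T, F))
Y_listen = rng.standard_normal((T, V))
X_read = rng.standard_normal((T, F))
Y_read = rng.standard_normal((T, V))

def fit_encoding_model(X, Y, alpha=100.0):
    """Fit one ridge regression per voxel (multi-output Ridge fits all at once)."""
    return Ridge(alpha=alpha).fit(X, Y)

def voxelwise_corr(A, B):
    """Pearson correlation between corresponding columns of A and B."""
    A = (A - A.mean(0)) / A.std(0)
    B = (B - B.mean(0)) / B.std(0)
    return (A * B).mean(0)

# Fit an encoding model within each modality.
m_listen = fit_encoding_model(X_listen, Y_listen)
m_read = fit_encoding_model(X_read, Y_read)

# Cross-modal prediction: listening-trained weights predicting reading data.
r_cross = voxelwise_corr(m_listen.predict(X_read), Y_read)

# Tuning similarity: correlate each voxel's semantic weight vector across
# modalities (coef_ has shape (V, F), so columns of coef_.T are per-voxel
# weight vectors).
r_tuning = voxelwise_corr(m_listen.coef_.T, m_read.coef_.T)

print(f"median cross-modal prediction r: {np.median(r_cross):.3f}")
print(f"median tuning correlation:       {np.median(r_tuning):.3f}")
```

With random data both medians hover near zero; the paper's finding corresponds to both statistics being high in semantically selective cortex when real stimuli and responses are used.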
Author contributions: F.D., A.G.H., and J.L.G. designed research; F.D. and A.G.H. performed research; F.D. and A.O.N.-E. analyzed data; F.D. wrote the first draft of the paper; F.D., A.O.N.-E., A.G.H., and J.L.G. edited the paper; F.D. wrote the paper.
A.G. Huth's present address: Departments of Computer Science and Neuroscience, The University of Texas at Austin, Austin, TX 78712.
ISSN: 0270-6474
EISSN: 1529-2401
DOI: 10.1523/JNEUROSCI.0675-19.2019