Reading the mind's eye: Decoding category information during mental imagery


Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 50, no. 2, pp. 818-825
Main Authors: Reddy, Leila; Tsuchiya, Naotsugu; Serre, Thomas
Format: Journal Article
Language: English
Published: United States: Elsevier Inc, 01.04.2010

Summary: Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral–temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom–up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral–temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of “diagnostic voxels” (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral–temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom–up input, cortical back projections can selectively re-activate specific patterns of neural activity.
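As a rough illustration of the decoding logic described in the summary (not the authors' actual analysis pipeline), the sketch below shows within-condition and cross-condition multi-voxel pattern classification with scikit-learn. All variable names (X_viewed, X_imagery, etc.) and the random arrays standing in for ventral–temporal voxel patterns are hypothetical placeholders; the classifier choice, preprocessing, and cross-validation scheme are illustrative assumptions rather than a reconstruction of the published methods.

```python
# Minimal sketch of within- and cross-condition MVPA decoding.
# Assumption: voxel patterns are already extracted as (trials x voxels) arrays;
# random data stands in for real fMRI patterns here.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical data: trials x voxels, labels for four categories
# (0=food, 1=tools, 2=faces, 3=buildings).
n_trials, n_voxels = 80, 300
X_viewed = rng.standard_normal((n_trials, n_voxels))
y_viewed = rng.integers(0, 4, size=n_trials)
X_imagery = rng.standard_normal((n_trials, n_voxels))
y_imagery = rng.integers(0, 4, size=n_trials)

clf = make_pipeline(StandardScaler(), LinearSVC())

# Within-condition decoding: cross-validated accuracy on viewed trials.
within_acc = cross_val_score(clf, X_viewed, y_viewed, cv=5).mean()

# Cross-condition decoding: train on viewed patterns, test on imagery patterns.
clf.fit(X_viewed, y_viewed)
cross_acc = clf.score(X_imagery, y_imagery)

# Rough analogue of comparing "diagnostic voxel" maps: correlate the per-voxel
# weights of classifiers trained separately on each condition.
w_viewed = LinearSVC().fit(StandardScaler().fit_transform(X_viewed), y_viewed).coef_
w_imagery = LinearSVC().fit(StandardScaler().fit_transform(X_imagery), y_imagery).coef_
weight_corr = np.corrcoef(w_viewed.ravel(), w_imagery.ravel())[0, 1]

print(f"within-viewing accuracy:   {within_acc:.2f}")
print(f"viewing->imagery accuracy: {cross_acc:.2f}")
print(f"weight-map correlation:    {weight_corr:.2f}")
```

Training on the viewed condition and testing on imagery (or vice versa) mirrors the cross-decoding comparison described in the summary, and correlating the two sets of classifier weights loosely parallels the comparison of perception and imagery "diagnostic voxel" maps.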
PMCID: PMC2823980
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2009.11.084