Human Scene-Selective Areas Represent 3D Configurations of Surfaces

Bibliographic Details
Published in: Neuron (Cambridge, Mass.), Vol. 101, No. 1, pp. 178–192.e7
Main Authors: Lescroart, Mark D.; Gallant, Jack L.
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 02.01.2019

Summary:
It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.

Highlights:
• A model of 3D structure explains unique response variance in scene-selective areas
• Individual voxels in these areas represent distances and orientations of 3D surfaces
• The principal dimensions of tuning in scene-selective areas are distance and openness
• The model can reconstruct 3D scene backgrounds from brain activity in these areas

In Brief:
This paper uses voxelwise modeling to show that individual voxels in human scene-selective areas are tuned for the orientations and distances of surfaces. Simple 2D features cannot explain this tuning, and the model can reconstruct 3D scenes from fMRI activity.
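The summary describes a voxelwise encoding-model comparison: two feature spaces (3D scene structure vs. low-level 2D features) are regressed onto each voxel's fMRI response, prediction accuracy is compared on held-out data, and PCA on the 3D-model weights reveals the dominant tuning dimensions. The sketch below shows the general shape of such an analysis in Python with scikit-learn; the use of plain ridge regression, all array sizes, and all names (fit_and_score, X3d_train, etc.) are illustrative assumptions, not the authors' actual pipeline, and random arrays stand in for real stimulus features and brain data so the sketch runs end to end.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_train, n_test, n_voxels = 1000, 200, 2000   # time points and voxels (assumed sizes)
n_feat_3d, n_feat_2d = 36, 500                # feature counts (assumed)

# Stimulus features and voxel responses would come from the experiment;
# random data stands in here.
X3d_train = rng.normal(size=(n_train, n_feat_3d))
X3d_test = rng.normal(size=(n_test, n_feat_3d))
X2d_train = rng.normal(size=(n_train, n_feat_2d))
X2d_test = rng.normal(size=(n_test, n_feat_2d))
Y_train = rng.normal(size=(n_train, n_voxels))
Y_test = rng.normal(size=(n_test, n_voxels))

def fit_and_score(X_train, Y_train, X_test, Y_test, alpha=1.0):
    """Ridge-regress features onto every voxel at once; return the
    per-voxel correlation between predicted and held-out responses,
    plus the fitted weights (one row of weights per voxel)."""
    model = Ridge(alpha=alpha).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)
    # Pearson r computed column-wise, i.e., one value per voxel.
    zp = (Y_pred - Y_pred.mean(0)) / Y_pred.std(0)
    zt = (Y_test - Y_test.mean(0)) / Y_test.std(0)
    return (zp * zt).mean(0), model.coef_

r_3d, W_3d = fit_and_score(X3d_train, Y_train, X3d_test, Y_test)
r_2d, _ = fit_and_score(X2d_train, Y_train, X2d_test, Y_test)
print(f"voxels better predicted by the 3D model: {(r_3d > r_2d).mean():.1%}")

# PCA across voxels' 3D-model weight vectors: with real data the leading
# components would correspond to the dominant tuning dimensions the paper
# reports (distance and openness).
pca = PCA(n_components=2).fit(W_3d)
print("variance explained by first two weight PCs:", pca.explained_variance_ratio_)
```

In this framing, "explains unique variance" would be tested by comparing held-out prediction accuracy of the two models (or of joint vs. single-feature-space models) voxel by voxel, rather than by comparing weights directly.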
ISSN: 0896-6273
EISSN: 1097-4199
DOI: 10.1016/j.neuron.2018.11.004