Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex


Bibliographic Details
Published in: eNeuro, Vol. 6, No. 4, p. ENEURO.0362-18.2019
Main Authors: Henderson, Margaret; Vo, Vy; Chunharas, Chaipat; Sprague, Thomas; Serences, John
Format: Journal Article
Language: English
Published: United States: Society for Neuroscience, 01.07.2019
ISSN: 2373-2822
DOI: 10.1523/ENEURO.0362-18.2019

More Information
Summary: Navigating through natural environments requires localizing objects along three distinct spatial axes. Information about position along the horizontal and vertical axes is available from an object’s position on the retina, while position along the depth axis must be inferred from second-order cues such as the disparity between the images cast on the two retinae. Past work has revealed that object position in two-dimensional (2D) retinotopic space is robustly represented in visual cortex and can be predicted using a multivariate encoding model, in which an explicit axis is modeled for each spatial dimension. However, no study to date has used an encoding model to estimate a representation of stimulus position in depth. Here, we recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth (z) and horizontal (x) axes, and the stimuli were presented across a wider range of disparities (out to ∼40 arcmin) than in previous neuroimaging studies. In addition to performing decoding analyses for comparison with previous work, we built encoding models for depth position and for horizontal position, allowing us to directly compare encoding between these dimensions. Our results validate this method of recovering depth representations from retinotopic cortex. Furthermore, we find convergent evidence that depth is encoded most strongly in dorsal area V3A.
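The multivariate encoding approach described in the summary models an explicit axis per spatial dimension: a set of tuned channels tiles positions along that axis, voxel-wise channel weights are fit on training data, and the fit is inverted on held-out data to reconstruct channel responses (and hence stimulus position). The sketch below illustrates this general channel-encoding/inversion scheme on synthetic data; all channel counts, tuning widths, noise levels, and the data itself are illustrative assumptions, not the authors' actual parameters or analysis pipeline.

```python
# Hedged sketch of a channel-based (inverted) encoding model for stimulus
# position along one axis (e.g., depth in arcmin of disparity).
# Synthetic data only; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

positions = np.linspace(-40, 40, 9)           # possible stimulus positions
n_trials, n_voxels, n_channels = 90, 50, 6

# Basis set: Gaussian tuning channels tiling the position axis.
centers = np.linspace(-40, 40, n_channels)
width = 15.0

def channel_responses(pos):
    """Idealized channel activations for stimuli at `pos` (trials x channels)."""
    return np.exp(-(pos[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

train_pos = rng.choice(positions, n_trials)
C_train = channel_responses(train_pos)        # trials x channels

# Synthetic voxel data: each voxel is a random mixture of channels plus noise.
W_true = rng.normal(size=(n_channels, n_voxels))
B_train = C_train @ W_true + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Step 1: estimate voxel weights by ordinary least squares (C_train @ W = B).
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2 (inversion): recover channel responses for held-out trials.
test_pos = rng.choice(positions, 30)
B_test = channel_responses(test_pos) @ W_true + 0.5 * rng.normal(size=(30, n_voxels))
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
C_hat = C_hat.T                               # trials x channels

# Read out position as the channel-weighted centroid of the reconstruction.
C_pos = np.clip(C_hat, 0, None)
decoded = (C_pos @ centers) / C_pos.sum(axis=1)
print(np.corrcoef(decoded, test_pos)[0, 1])   # should be strongly positive
```

The same two-step fit/invert structure applies whether the channels tile the horizontal (x) axis or the depth (z) axis, which is what makes a direct comparison of encoding across the two dimensions possible.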
The authors declare no competing financial interests.
This work was supported by National Eye Institute Grants R01-EY025872 (to J.S.) and F32-EY028438 (to T.S.), Thai Red Cross Society funding (C.C.), and the National Science Foundation Graduate Research Fellowships Program (V.V.).
M.H. and V.V. contributed equally to this work.
Author contributions: M.H., V.V., C.C., T.S., and J.S. designed research; M.H., V.V., C.C., and T.S. performed research; M.H. and V.V. analyzed data; M.H. and V.V. wrote the paper.