Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex
Published in | eNeuro Vol. 6; no. 4; p. ENEURO.0362-18.2019 |
Main Authors | Henderson, Margaret; Vo, Vy; Chunharas, Chaipat; Sprague, Thomas; Serences, John |
Format | Journal Article |
Language | English |
Published | United States: Society for Neuroscience, 01.07.2019 |
Subjects | Brain Mapping; Depth Perception; Magnetic Resonance Imaging; Models, Neurological; Multivariate Analysis; Parietal Lobe; Photic Stimulation; Support Vector Machine; Visual Cortex |
ISSN | 2373-2822 |
DOI | 10.1523/ENEURO.0362-18.2019 |
Abstract | Navigating through natural environments requires localizing objects along three distinct spatial axes. Information about position along the horizontal and vertical axes is available from an object’s position on the retina, while position along the depth axis must be inferred based on second-order cues such as the disparity between the images cast on the two retinae. Past work has revealed that object position in two-dimensional (2D) retinotopic space is robustly represented in visual cortex and can be robustly predicted using a multivariate encoding model, in which an explicit axis is modeled for each spatial dimension. However, no study to date has used an encoding model to estimate a representation of stimulus position in depth. Here, we recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth (z) and the horizontal (x) axes, and the stimuli were presented across a wider range of disparities (out to ∼40 arcmin) compared to previous neuroimaging studies. In addition to performing decoding analyses for comparison to previous work, we built encoding models for depth position and for horizontal position, allowing us to directly compare encoding between these dimensions. Our results validate this method of recovering depth representations from retinotopic cortex. Furthermore, we find convergent evidence that depth is encoded most strongly in dorsal area V3A. |
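As a sketch of the abstract's central method, the snippet below illustrates a channel-based encoding model for stimulus position along a single axis, which is then inverted to recover position from held-out activation patterns. This is a minimal sketch under stated assumptions: the Gaussian channel tuning, the eight channel centers, the ±20 arcmin range, and the simulated voxel data are illustrative choices, not parameters taken from the paper.

import numpy as np

# Idealized Gaussian tuning curves: one hypothetical channel per modeled position.
def channel_responses(positions, centers, fwhm):
    sigma = fwhm / 2.355
    return np.exp(-0.5 * ((positions[:, None] - centers[None, :]) / sigma) ** 2)

rng = np.random.default_rng(0)
centers = np.linspace(-20, 20, 8)              # assumed channel centers (arcmin of disparity)
train_pos = rng.uniform(-20, 20, 80)           # stimulus positions on training trials
test_pos = rng.uniform(-20, 20, 20)            # stimulus positions on test trials

# Simulate voxel patterns as noisy linear combinations of channel responses.
W_true = rng.normal(size=(len(centers), 500))  # channels x voxels (hypothetical)
C_train = channel_responses(train_pos, centers, fwhm=15.0)
C_test = channel_responses(test_pos, centers, fwhm=15.0)
B_train = C_train @ W_true + rng.normal(scale=0.5, size=(80, 500))
B_test = C_test @ W_true + rng.normal(scale=0.5, size=(20, 500))

# Step 1: estimate channel-to-voxel weights from training data (least squares).
W_hat = np.linalg.pinv(C_train) @ B_train
# Step 2: invert the fitted model to reconstruct channel responses for test trials.
C_hat = B_test @ np.linalg.pinv(W_hat)
# Step 3: read out the predicted position as the best-matching channel center.
pred_pos = centers[C_hat.argmax(axis=1)]
print("correlation(predicted, true):", np.corrcoef(pred_pos, test_pos)[0, 1])

In the paper's terms, the same construction would be fit separately for the depth (z) and horizontal (x) axes, which is what allows encoding strength to be compared directly between the two dimensions.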
Author | Chunharas, Chaipat; Vo, Vy; Serences, John; Sprague, Thomas; Henderson, Margaret |
Author_xml | – sequence: 1 givenname: Margaret orcidid: 0000-0001-9375-6680 surname: Henderson fullname: Henderson, Margaret – sequence: 2 givenname: Vy orcidid: 0000-0001-9601-1297 surname: Vo fullname: Vo, Vy – sequence: 3 givenname: Chaipat orcidid: 0000-0003-1074-0160 surname: Chunharas fullname: Chunharas, Chaipat – sequence: 4 givenname: Thomas orcidid: 0000-0001-9530-2463 surname: Sprague fullname: Sprague, Thomas – sequence: 5 givenname: John orcidid: 0000-0002-8551-5147 surname: Serences fullname: Serences, John |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/31285275 (View this record in MEDLINE/PubMed) |
CitedBy_id | crossref_primary_10_1523_ENEURO_0411_19_2019 crossref_primary_10_7554_eLife_78712 crossref_primary_10_1016_j_visres_2022_108082 crossref_primary_10_1146_annurev_vision_111022_123857 crossref_primary_10_1016_j_nicl_2022_103005 |
ContentType | Journal Article |
Copyright | Copyright © 2019 Henderson et al. |
Discipline | Medicine |
DocumentTitleAlternate | Multivariate Representations of Stimulus Depth |
EISSN | 2373-2822 |
ExternalDocumentID | PMC6709213 31285275 10_1523_ENEURO_0362_18_2019 |
Genre | Research Support, Non-U.S. Gov't Journal Article Research Support, N.I.H., Extramural |
GrantInformation_xml | – fundername: NEI NIH HHS grantid: R01 EY025872 – fundername: National Eye Institute grantid: F32-EY028438; R01-EY025872 – fundername: NSF GRFP – fundername: Thai Red Cross Society |
ISSN | 2373-2822 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Keywords | intraparietal sulcus; encoding model; fMRI; vision; depth; MVPA |
Language | English |
License | https://creativecommons.org/licenses/by-nc-sa/4.0 Copyright © 2019 Henderson et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed. |
Notes | The authors declare no competing financial interests. This work was supported by National Eye Institute Grants R01-EY025872 (to J.S.) and F32-EY028438 (to T.S.), Thai Red Cross Society funding (C.C.), and the National Science Foundation Graduate Research Fellowship Program (V.V.). M.H. and V.V. contributed equally to this work. Author contributions: M.H., V.V., C.C., T.S., and J.S. designed research; M.H., V.V., C.C., and T.S. performed research; M.H. and V.V. analyzed data; M.H. and V.V. wrote the paper. |
ORCID | 0000-0001-9601-1297 0000-0001-9530-2463 0000-0003-1074-0160 0000-0001-9375-6680 0000-0002-8551-5147 |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.1523/ENEURO.0362-18.2019 |
PMID | 31285275 |
PQID | 2254510068 |
PQPubID | 23479 |
PublicationCentury | 2000 |
PublicationDate | 2019-07-01 |
PublicationDateYYYYMMDD | 2019-07-01 |
PublicationDate_xml | – month: 07 year: 2019 text: 2019-07-01 day: 01 |
PublicationDecade | 2010 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States |
PublicationTitle | eNeuro |
PublicationTitleAlternate | eNeuro |
PublicationYear | 2019 |
Publisher | Society for Neuroscience |
Publisher_xml | – name: Society for Neuroscience |
SourceID | pubmedcentral proquest pubmed crossref |
SourceType | Open Access Repository Aggregation Database Index Database Enrichment Source |
StartPage | ENEURO.0362-18.2019 |
SubjectTerms | Brain Mapping; Depth Perception - physiology; Female; Humans; Magnetic Resonance Imaging; Male; Models, Neurological; Multivariate Analysis; New Research; Parietal Lobe - physiology; Photic Stimulation - methods; Support Vector Machine; Visual Cortex - physiology |
Title | Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex |
URI | https://www.ncbi.nlm.nih.gov/pubmed/31285275 https://www.proquest.com/docview/2254510068 https://pubmed.ncbi.nlm.nih.gov/PMC6709213 |
Volume | 6 |