Interpreting the retinal neural code for natural scenes: From computations to neurons

Bibliographic Details
Published in Neuron (Cambridge, Mass.) Vol. 111; no. 17; pp. 2742-2755.e4
Main Authors Maheswaranathan, Niru, McIntosh, Lane T., Tanaka, Hidenori, Grant, Satchel, Kastner, David B., Melander, Joshua B., Nayebi, Aran, Brezovec, Luke E., Wang, Julia H., Ganguli, Surya, Baccus, Stephen A.
Format Journal Article
Language English
Published United States: Elsevier Inc, 06.09.2023
Summary: Understanding the circuit mechanisms of the visual code for natural scenes is a central goal of sensory neuroscience. We show that a three-layer network model predicts retinal natural scene responses with an accuracy nearing experimental limits. The model’s internal structure is interpretable, as interneurons recorded separately and not modeled directly are highly correlated with model interneurons. Models fitted only to natural scenes reproduce a diverse set of phenomena related to motion encoding, adaptation, and predictive coding, establishing their ethological relevance to natural visual computation. A new approach decomposes the computations of model ganglion cells into the contributions of model interneurons, allowing automatic generation of new hypotheses for how interneurons with different spatiotemporal responses are combined to generate retinal computations, including predictive phenomena currently lacking an explanation. Our results demonstrate a unified and general approach to study the circuit mechanisms of ethological retinal computations under natural visual scenes.

•A three-layer model captures retinal natural scene responses and phenomena
•Models have internal units highly correlated with interneuron recordings
•A general approach reveals how model interneuron pathways encode any stimulus
•Model analysis yields new automatic circuit hypotheses for neural computations

Maheswaranathan et al. create a three-layer network model that captures retinal encoding of natural scenes and many ethological phenomena. The model’s structure is interpretable in terms of the actions of real interneurons. A new computational approach automatically generates hypotheses for how interneurons generate ethological computations under natural visual scenes.
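The summary describes a three-layer network model of retinal ganglion cell responses to natural scenes. As an illustration only, the following is a minimal sketch of such a model in PyTorch, assuming two convolutional "interneuron" layers followed by a fully connected ganglion-cell readout with softplus nonlinearities; the filter counts, kernel sizes, and input dimensions are placeholders, not parameters taken from the paper.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Three-layer encoding model sketch: conv -> conv -> fully connected readout."""

    def __init__(self, n_cells: int = 4, n_lags: int = 40):
        super().__init__()
        # The stimulus movie enters with its time lags as input channels.
        self.layer1 = nn.Sequential(nn.Conv2d(n_lags, 8, kernel_size=15), nn.Softplus())
        self.layer2 = nn.Sequential(nn.Conv2d(8, 8, kernel_size=11), nn.Softplus())
        # One readout unit per recorded ganglion cell; softplus keeps rates nonnegative.
        self.readout = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_cells), nn.Softplus())

    def forward(self, stimulus: torch.Tensor) -> torch.Tensor:
        x = self.layer1(stimulus)   # model "interneuron" layer 1
        x = self.layer2(x)          # model "interneuron" layer 2
        return self.readout(x)      # predicted firing rates, shape (batch, n_cells)

# Example: predicted rates for 4 cells from two 50x50-pixel, 40-frame clips.
model = RetinaCNN(n_cells=4, n_lags=40)
rates = model(torch.randn(2, 40, 50, 50))
print(rates.shape)  # torch.Size([2, 4])
```

In this sketch the intermediate convolutional units play the role of model interneurons, and each readout unit produces a nonnegative firing-rate prediction for one recorded ganglion cell.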
AUTHOR CONTRIBUTIONS
All authors participated in the overall design of the study. N.M., L.T.M., D.B.K., J.B.M., S. Ganguli, and S.A.B. participated in the design of experiments. N.M., L.T.M., D.B.K., and J.B.M. performed biological experiments. N.M., L.T.M., S. Grant, J.B.M., A.N., and J.H.W. participated in the development and fitting of computational models. N.M., L.T.M., H.T., S. Grant, J.B.M., L.E.B., and J.H.W. performed in silico experiments. N.M., L.T.M., H.T., S. Grant, J.B.M., L.E.B., J.H.W., and S.A.B. participated in computational analyses. H.T. designed and performed integrated gradients analyses. N.M., L.T.M., H.T., S. Grant, S. Ganguli, and S.A.B. contributed to the writing of the paper.
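The contributions mention integrated gradients analyses, an attribution method consistent with the summary's description of decomposing model ganglion cell computations into contributions of model interneurons. Below is a hypothetical sketch of integrated gradients applied to a hidden layer of a model like the one sketched above; the function signature and the `readout` interface are illustrative assumptions, not the paper's code.

```python
import torch

def integrated_gradients(readout, hidden, baseline, cell_idx, n_steps=50):
    """Attribute readout(hidden)[cell_idx] to each hidden-layer unit.

    readout  : callable mapping a batch of hidden activations to output rates
    hidden   : hidden-layer activation for the stimulus of interest (no batch dim)
    baseline : reference activation of the same shape (e.g., zeros or a mean)
    """
    total_grad = torch.zeros_like(hidden)
    for alpha in torch.linspace(0.0, 1.0, n_steps):
        # Gradient at a point interpolated between baseline and actual activation.
        point = (baseline + alpha * (hidden - baseline)).detach().requires_grad_(True)
        out = readout(point.unsqueeze(0))[0, cell_idx]
        grad, = torch.autograd.grad(out, point)
        total_grad += grad / n_steps
    # Per-unit contributions, scaled by the displacement from the baseline.
    return (hidden - baseline) * total_grad

# Example usage with the sketch model above (names are illustrative):
# hidden = model.layer2(model.layer1(clip)).squeeze(0)
# contrib = integrated_gradients(model.readout, hidden,
#                                torch.zeros_like(hidden), cell_idx=0)
```

By the completeness property of integrated gradients, the returned per-unit contributions approximately sum to the change in the selected cell's output between the baseline and the actual hidden activation, which is what allows a model ganglion cell's response to be decomposed across model interneurons.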
ISSN:0896-6273
1097-4199
DOI:10.1016/j.neuron.2023.06.007