Jointly efficient encoding and decoding in neural populations


Bibliographic Details
Published in: bioRxiv
Main Authors: Malerba, Simone Blanco; Micheli, Aurora; Woodford, Michael; da Silveira, Rava Azeredo
Format: Paper
Language: English
Published: Cold Spring Harbor Laboratory, 04.05.2024
Edition: 1.2

Summary: The efficient coding approach proposes that neural systems represent as much sensory information as biological constraints allow; it aims to formalize encoding as a constrained optimal process. A different approach, which aims to formalize decoding, proposes that neural systems instantiate a generative model of the sensory world. Here, we put forth a normative framework that characterizes neural systems as jointly optimizing encoding and decoding. It takes the form of a variational autoencoder: sensory stimuli are encoded in the noisy activity of neurons, to be interpreted by a flexible decoder; encoding must allow for an accurate stimulus reconstruction from neural activity. Jointly, neural activity is required to represent the statistics of latent features which are mapped by the decoder into distributions over sensory stimuli; decoding correspondingly optimizes the accuracy of the generative model. This framework yields a family of encoding-decoding models, all resulting in equally accurate generative models, indexed by a measure of the stimulus-induced deviation of neural activity from the marginal distribution over neural activity. Each member of this family predicts a specific relation between properties of the sensory neurons—such as the arrangement of the tuning curve means (preferred stimuli) and widths (degrees of selectivity) in the population—and the statistics of the sensory world. Our approach thus generalizes the efficient coding approach. Notably, here, the form of the constraint on the optimization derives from the requirement of an accurate generative model, whereas it is arbitrary in efficient coding models. Moreover, solutions do not require knowledge of the stimulus distribution but are learned from data samples; the constraint further acts as a regularizer, allowing the model to generalize beyond the training data.
Finally, we characterize the family of models we obtain through alternative measures of performance, such as the error in stimulus reconstruction. We find that a range of models achieves comparable performance; in particular, a population of sensory neurons with broad tuning curves, as observed experimentally, yields both a low stimulus reconstruction error and an accurate generative model that generalizes robustly to unseen data.

Our brain represents the sensory world in the activity of populations of neurons. Two theories have addressed the nature of these representations. The first theory—efficient coding—posits that neurons encode as much information as possible about sensory stimuli, subject to resource constraints such as limits on energy consumption. The second—generative modeling—focuses on decoding and is organized around the idea that neural activity plays the role of a latent variable from which sensory stimuli can be simulated. Our work subsumes the two approaches in a unifying framework based on the mathematics of variational autoencoders. Unlike efficient coding, which assumes full knowledge of stimulus statistics, here representations are learned from examples, in a joint optimization of encoding and decoding. This new framework yields a range of optimal representations, corresponding to different models of neural selectivity and reconstruction performance, depending on the resource constraint. The form of the constraint is not arbitrary but derives from the optimization framework, and its strength tunes the ability of the model to generalize beyond the training examples. Central to the approach, and to the nature of the representations it implies, is the interplay of encoding and decoding, itself central to brain processing.
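The encoding-decoding loop described in the summary can be illustrated with a minimal sketch, which is not the authors' model: a population of neurons with Gaussian tuning curves encodes a scalar stimulus in noisy responses, and a decoder learned from data samples reconstructs the stimulus. All names and parameter values here (`encode`, the tuning-curve `centers` and `width`, the noise level, the linear least-squares readout) are illustrative assumptions; the paper's framework uses a flexible variational decoder rather than a linear one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensory population: N neurons with Gaussian tuning curves
# tiling the stimulus range [-1, 1].
N = 12
centers = np.linspace(-1.0, 1.0, N)   # preferred stimuli (tuning-curve means)
width = 0.3                           # tuning-curve width (degree of selectivity)
noise_sd = 0.1                        # encoding-noise standard deviation

def encode(s):
    """Noisy population response to a scalar stimulus s."""
    mean = np.exp(-(s - centers) ** 2 / (2 * width ** 2))
    return mean + noise_sd * rng.normal(size=N)

def fit_linear_decoder(stimuli):
    """Learn a least-squares readout from responses back to stimuli,
    using only data samples (no knowledge of the stimulus distribution)."""
    R = np.stack([encode(s) for s in stimuli])        # (samples, N) responses
    w, *_ = np.linalg.lstsq(R, stimuli, rcond=None)   # readout weights
    return w

# Train the decoder on sampled stimuli, then evaluate on held-out samples.
train_s = rng.uniform(-1.0, 1.0, 500)
w = fit_linear_decoder(train_s)

test_s = rng.uniform(-1.0, 1.0, 200)
R_test = np.stack([encode(s) for s in test_s])
mse = float(np.mean((R_test @ w - test_s) ** 2))
print(f"held-out stimulus reconstruction MSE: {mse:.4f}")
```

Broadening or narrowing `width` in this toy setup changes the overlap between tuning curves and hence the reconstruction error on held-out stimuli, loosely mirroring the trade-off between selectivity and generalization discussed in the summary.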
Bibliography: Competing Interest Statement: The authors have declared no competing interest.
ISSN: 2692-8205
DOI: 10.1101/2023.06.21.545848