Inter-individual deep image reconstruction via hierarchical neural code conversion

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 271, p. 120007
Main Authors: Ho, Jun Kai; Horikawa, Tomoyasu; Majima, Kei; Cheng, Fan; Kamitani, Yukiyasu
Format: Journal Article
Language: English
Published: United States, Elsevier Inc, 01.05.2023
Summary:
•Neural code converters, which are trained to predict one individual's brain activity patterns from another's when both are presented with the same stimulus, automatically learn the hierarchical correspondence of visual areas.
•Converted brain activity patterns can be decoded into hierarchical DNN features to reconstruct visual images, even when the converter is trained on a limited number of data samples.
•The information of hierarchical and fine-scale visual features is preserved under functional alignment, capturing the richness of visual perception.

The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, brain activity measured for identical input exhibits substantially different patterns across individuals. Although anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual content. In this study, we trained a functional alignment method called a neural code converter, which predicts a target subject's brain activity pattern from a source subject's pattern given the same stimulus, and analyzed the converted patterns by decoding hierarchical visual features and reconstructing perceived images. The converters were trained on fMRI responses to identical sets of natural images presented to pairs of individuals, using voxels in the visual cortex spanning V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network using decoders pre-trained on the target subject, and then reconstructed images from the decoded features.
Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas at the same levels. Deep neural network feature decoding at each layer showed higher decoding accuracies from corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. The visual images were reconstructed with recognizable object silhouettes, even with relatively small amounts of converter training data. Decoders trained on data pooled from multiple individuals through conversion performed slightly better than those trained on a single individual. These results demonstrate that hierarchical and fine-grained representations can be converted by functional alignment while preserving sufficient visual information for inter-individual visual image reconstruction.
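The core idea of a neural code converter, a voxel-to-voxel predictive mapping trained on paired responses to shared stimuli, can be sketched as ridge regression. This is a minimal illustration on synthetic data, not the paper's implementation: the array shapes, noise level, and regularization strength are assumptions, and real use would substitute the two subjects' fMRI voxel patterns for the random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for fMRI responses of two subjects to the same stimuli
# (assumption: rows are stimuli, columns are voxels; sizes are arbitrary).
n_src, n_tgt = 100, 120            # voxel counts: source / target subject
n_train, n_test = 300, 50
X = rng.standard_normal((n_train + n_test, n_src))
W_true = rng.standard_normal((n_src, n_tgt)) / np.sqrt(n_src)
Y = X @ W_true + 0.1 * rng.standard_normal((n_train + n_test, n_tgt))

X_tr, X_te = X[:n_train], X[n_train:]
Y_tr, Y_te = Y[:n_train], Y[n_train:]

# Converter: predict every target voxel from the source pattern with a
# ridge-regularized linear map, W = (X'X + alpha*I)^-1 X'Y.
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_src), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# Conversion accuracy: voxel-wise correlation on held-out stimuli.
corrs = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_tgt)]
print(f"mean voxel-wise correlation: {np.mean(corrs):.3f}")
```

In the study's pipeline, the converted pattern `Y_pred` would then be fed to DNN feature decoders pre-trained on the target subject; the converter itself needs no labels of visual areas, which is what allows the hierarchical correspondence to emerge from the data.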
ISSN: 1053-8119, 1095-9572
DOI: 10.1016/j.neuroimage.2023.120007