Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 198, pp. 125-136
Main Authors Han, Kuan, Wen, Haiguang, Shi, Junxing, Lu, Kun-Han, Zhang, Yizhen, Fu, Di, Liu, Zhongming
Format Journal Article
Language: English
Published: United States, Elsevier Inc., 01.09.2019

Summary: Goal-driven, feedforward-only convolutional neural networks (CNNs) have been shown to predict and decode cortical responses to natural images or videos. Here, we explored an alternative deep neural network, the variational autoencoder (VAE), as a computational model of the visual cortex. We trained a VAE with a five-layer encoder and a five-layer decoder to learn visual representations from a diverse set of unlabeled images. Using the trained VAE, we predicted and decoded cortical activity observed with functional magnetic resonance imaging (fMRI) from three human subjects passively watching natural videos. Compared to the CNN, the VAE predicted video-evoked cortical responses with comparable accuracy in early visual areas but lower accuracy in higher-order visual areas. This difference in encoding performance was attributable primarily to the models' different learning objectives rather than to their architectures or numbers of parameters. Despite its lower encoding accuracy, the VAE offered a more convenient strategy for decoding fMRI activity to reconstruct the video input: first convert the fMRI activity to the VAE's latent variables, then convert the latent variables to reconstructed video frames through the VAE's decoder. This strategy was more advantageous than alternative decoding methods, e.g., partial least squares regression, in reconstructing both the spatial structure and the color of the visual input. These findings highlight the VAE as an unsupervised model for learning visual representations, as well as its potential and limitations for explaining cortical responses and reconstructing naturalistic, diverse visual experiences.
•Variational auto-encoder implements an unsupervised model of the "Bayesian brain".
•Variational auto-encoder explains and predicts fMRI responses to natural videos.
•Variational auto-encoder decodes fMRI responses to directly reconstruct the visual input.
•Convolutional neural networks trained for image classification predict fMRI responses better than variational auto-encoders trained for image reconstruction.
ISSN: 1053-8119
1095-9572
DOI: 10.1016/j.neuroimage.2019.05.039