A Convolutional Autoencoder for Multi-Subject fMRI Data Aggregation
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 16.08.2016 |
| DOI | 10.48550/arxiv.1608.04846 |
Summary: Finding the most effective way to aggregate multi-subject fMRI data is a long-standing and challenging problem. It is of increasing interest in contemporary fMRI studies of human cognition due to the scarcity of data per subject and the variability of brain anatomy and functional response across subjects. Recent work on latent factor models shows promising results in this task, but this approach does not preserve spatial locality in the brain. We examine two ways to combine the ideas of a factor model and a searchlight-based analysis to aggregate multi-subject fMRI data while preserving spatial locality. We first do this directly by combining a recent factor method known as a shared response model with searchlight analysis. Then we design a multi-view convolutional autoencoder for the same task. Both approaches preserve spatial locality and have competitive or better performance compared with standard searchlight analysis and the shared response model applied across the whole brain. We also report a system design to handle the computational challenge of training the convolutional autoencoder.
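
The shared response model referenced in the summary is a latent factor method: each subject's data matrix X_i (voxels × time) is factored as W_i S, where W_i has orthonormal columns and S is a response shared across subjects. Below is a minimal NumPy sketch of the deterministic alternating updates commonly used to fit this model; the function name, parameter defaults, and initialization are illustrative, not the paper's exact procedure. In the searchlight variant the summary describes, the same fit would be run within each local sphere of voxels rather than over the whole brain.

```python
import numpy as np

def srm(X_list, k=50, n_iter=10, seed=0):
    """Deterministic shared response model (SRM), fit by
    alternating minimization of sum_i ||X_i - W_i @ S||_F^2
    subject to each W_i having orthonormal columns.

    X_list : list of (voxels_i, time) arrays, one per subject,
             all with the same number of time points.
    Returns per-subject bases W_i (voxels_i, k) and the shared
    response S (k, time).
    """
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of each subject's basis
    # (assumes voxels_i >= k).
    W = [np.linalg.qr(rng.standard_normal((X.shape[0], k)))[0]
         for X in X_list]
    for _ in range(n_iter):
        # Shared response: average of the back-projected data.
        S = np.mean([Wi.T @ Xi for Wi, Xi in zip(W, X_list)], axis=0)
        # Orthogonal Procrustes update of each subject's basis.
        for i, Xi in enumerate(X_list):
            U, _, Vt = np.linalg.svd(Xi @ S.T, full_matrices=False)
            W[i] = U @ Vt
    return W, S

# Example: three synthetic subjects with different voxel counts.
rng = np.random.default_rng(1)
data = [rng.standard_normal((v, 200)) for v in (480, 500, 520)]
W, S = srm(data, k=20)
```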
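The multi-view convolutional autoencoder can be sketched in the same spirit: one convolutional encoder and decoder per subject ("view"), tied together through a shared latent volume that plays the role of the shared response, with convolutions preserving spatial locality. The PyTorch sketch below shows this general structure only; the layer counts, channel sizes, and mean-pooled latent are assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class MultiViewConvAE(nn.Module):
    """Sketch of a multi-view convolutional autoencoder: each
    subject has its own 3-D conv encoder/decoder pair, and all
    subjects share one spatial latent volume."""
    def __init__(self, n_subjects, hidden=8, latent=4):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(1, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv3d(hidden, latent, kernel_size=3, padding=1))
            for _ in range(n_subjects)])
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(latent, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv3d(hidden, 1, kernel_size=3, padding=1))
            for _ in range(n_subjects)])

    def forward(self, volumes):
        # volumes: list of per-subject tensors, each (batch, 1, D, H, W).
        # Shared latent = mean of the per-subject encodings.
        z = torch.stack(
            [enc(v) for enc, v in zip(self.encoders, volumes)]).mean(dim=0)
        # Each decoder reconstructs its own subject from the shared latent.
        return [dec(z) for dec in self.decoders]

# Example: two subjects, 16x16x16 volumes, summed reconstruction loss.
vols = [torch.randn(4, 1, 16, 16, 16) for _ in range(2)]
model = MultiViewConvAE(n_subjects=2)
recons = model(vols)
loss = sum(nn.functional.mse_loss(r, v) for r, v in zip(recons, vols))
loss.backward()
```

Routing every subject through a single spatial latent is what lets the convolutional weights aggregate across subjects while still preserving locality, mirroring the role the shared response S plays in the factor-model sketch above.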