Generalising the HCP multimodal cortical parcellation to UK Biobank
Published in | bioRxiv |
---|---|
Main Authors | , , , , , , , , , , , |
Format | Paper |
Language | English |
Published | Cold Spring Harbor Laboratory, 14.03.2023 |
Edition | 1.1 |
ISSN | 2692-8205 |
DOI | 10.1101/2023.03.14.532531 |
Summary: The Human Connectome Project Multimodal Parcellation (HCP_MMP1.0) provides a robust in vivo map of the cerebral cortex, which demonstrates variability in structure and function that cannot be captured through diffeomorphic image registration alone. The HCP successfully employed a fully-connected neural network architecture to capture this variation; however, it is unclear whether this approach generalises to other datasets with less rich imaging protocols. In this paper we propose and validate a novel geometric deep learning framework for generating individualised HCP_MMP1.0 parcellations in UK Biobank (UKB), an extremely rich resource that has led to numerous breakthroughs in neuroscience. To address substantial differences in image acquisition (for example, 6 minutes of resting-state fMRI per subject for UKB vs. 60 minutes per subject for HCP), we introduce a multi-step learning procedure that includes pretraining with a convolutional autoencoder. Compared to a fully-connected baseline, the proposed framework improved the average detection rate across all areas by 10.4 percentage points (99.1% vs. 88.7%) and detection of the worst-performing area by 51.0 percentage points (80.9% vs. 29.9%). Importantly, this was not a result of the framework predicting one consistent parcellation across subjects: visual inspection indicated that the method was sensitive to atypical cortical topographies. Code and trained models will be made available at https://www.github.com.
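To make the multi-step procedure described in the summary concrete, below is a minimal PyTorch sketch of a pretrain-then-finetune pipeline for cortical surface parcellation. It is an illustrative assumption, not the authors' released code: the `GraphConv`, `SurfaceAutoencoder`, and `ParcellationHead` modules, the hidden size, and the feature/label shapes are hypothetical stand-ins for the paper's geometric deep learning framework, which operates on registered cortical surface meshes with multimodal per-vertex features.

```python
# Hypothetical sketch: pretrain a surface autoencoder without labels, then reuse its
# encoder in a per-vertex classifier for HCP_MMP1.0-style parcellation. All names,
# shapes and hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Toy mesh convolution: a linear map followed by neighbourhood averaging."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.lin = nn.Linear(in_ch, out_ch)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (V, in_ch) per-vertex features (e.g. myelin, curvature, rfMRI maps)
        # adj: (V, V) row-normalised adjacency of the cortical mesh
        return torch.relu(adj @ self.lin(x))


class SurfaceAutoencoder(nn.Module):
    """Step 1 (pretraining): reconstruct the multimodal feature maps without labels."""

    def __init__(self, in_ch: int, hidden: int = 64):
        super().__init__()
        self.enc1 = GraphConv(in_ch, hidden)
        self.enc2 = GraphConv(hidden, hidden)
        self.dec = nn.Linear(hidden, in_ch)

    def forward(self, x, adj):
        h = self.enc2(self.enc1(x, adj), adj)   # per-vertex embedding
        return self.dec(h), h                   # reconstruction, embedding


class ParcellationHead(nn.Module):
    """Step 2 (fine-tuning): classify each vertex into 180 areas plus non-cortex."""

    def __init__(self, encoder: SurfaceAutoencoder, hidden: int = 64, n_areas: int = 181):
        super().__init__()
        self.encoder = encoder                  # initialised from the pretrained weights
        self.classifier = nn.Linear(hidden, n_areas)

    def forward(self, x, adj):
        _, h = self.encoder(x, adj)
        return self.classifier(h)               # (V, n_areas) per-vertex logits


# Usage sketch with random stand-in data; a real pipeline would load per-subject surface
# features, a mesh adjacency matrix and HCP_MMP1.0 labels (~32k vertices per hemisphere).
V, C = 1024, 10                                 # toy vertex count, feature channels
x, adj = torch.randn(V, C), torch.eye(V)        # identity adjacency as a placeholder
labels = torch.randint(0, 181, (V,))

ae = SurfaceAutoencoder(C)
recon, _ = ae(x, adj)
pretrain_loss = nn.functional.mse_loss(recon, x)            # unsupervised objective

model = ParcellationHead(ae)
logits = model(x, adj)
finetune_loss = nn.functional.cross_entropy(logits, labels) # supervised objective
```

The sketch mirrors the two-step strategy described above: the autoencoder is first trained to reconstruct the multimodal feature maps, which is useful when labelled data or fMRI acquisition time is limited (as in UKB), and its encoder is then reused and fine-tuned with a per-vertex cross-entropy loss against HCP_MMP1.0 labels.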
Bibliography: Competing Interest Statement: The authors have declared no competing interest.