Multimodal integration of neuroimaging and genetic data for the diagnosis of mood disorders based on computer vision models

Bibliographic Details
Published in: Journal of Psychiatric Research, Vol. 172, pp. 144–155
Main Authors: Lee, Seungeun; Cho, Yongwon; Ji, Yuyoung; Jeon, Minhyek; Kim, Aram; Ham, Byung-Joo; Joo, Yoonjung Yoonie
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.04.2024

Summary: Mood disorders, particularly major depressive disorder (MDD) and bipolar disorder (BD), are often underdiagnosed, leading to substantial morbidity. Harnessing the potential of emerging methodologies, we propose a novel multimodal fusion approach that integrates patient-oriented brain structural magnetic resonance imaging (sMRI) scans with DNA whole-exome sequencing (WES) data. Multimodal data fusion aims to improve the detection of mood disorders by employing established deep-learning architectures for computer vision together with machine-learning strategies. We analyzed brain imaging genetic data of 321 East Asian individuals, including 147 patients with MDD, 78 patients with BD, and 96 healthy controls. We developed and evaluated six fusion models by leveraging common computer vision models for image classification: Vision Transformer (ViT), Inception-V3, and ResNet50, in conjunction with advanced machine-learning techniques (XGBoost and LightGBM) known for high-dimensional data analysis. Model validation was performed using 10-fold cross-validation. Our ViT ⊕ XGBoost fusion model with MRI scans, genomic single-nucleotide polymorphism (SNP) data, and an unweighted polygenic risk score (PRS) outperformed the baseline models, achieving incremental area under the curve (AUC) gains of 0.2162 (+32.03%) and 0.0675 (+8.19%) and incremental accuracy gains of 0.1455 (+25.14%) and 0.0849 (+13.28%) over the SNP-only and image-only baselines, respectively. Our findings highlight the opportunity to refine mood disorder diagnostics by demonstrating the transformative potential of integrating diverse, yet complementary, data modalities and methodologies.

Highlights:
• Mood disorders are prevalent globally; however, their diagnosis remains challenging due to significant heterogeneity and the absence of direct diagnostic measures.
• Our study evaluated a multimodal fusion approach for classifying mood disorders by integrating patient-specific brain structural MRI (sMRI) scans with DNA whole-exome sequencing (WES) data and the corresponding unweighted polygenic risk scores.
• By utilizing established computer vision models and machine-learning strategies, our model outperformed single-modality methods in classifying patients with mood disorders.
• The improved classification accuracy highlights the transformative potential of combining diverse, yet complementary, data sources and methodologies to advance psychiatric diagnostics.
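The fusion design described in the summary (a ViT image branch whose output is combined with SNP and PRS features in a gradient-boosted tree classifier, evaluated by 10-fold cross-validation) can be illustrated with a minimal Python sketch. Everything below is an assumption made for the example rather than the authors' exact pipeline: the timm model name, feature-level concatenation as the reading of the ⊕ fusion rule, and the synthetic data shapes are all illustrative.

import numpy as np
import torch
import timm
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_snps = 60, 200                                  # toy cohort size / SNP count
images = torch.randn(n, 3, 224, 224)                 # stand-in for preprocessed sMRI slices
snps = rng.integers(0, 3, (n, n_snps)).astype(np.float32)  # 0/1/2 risk-allele dosages
prs = snps.sum(axis=1, keepdims=True)                # toy unweighted PRS: sum of dosages
labels = np.array([0, 1] * (n // 2))                 # 1 = mood disorder, 0 = control

# ViT as a frozen feature extractor; num_classes=0 makes timm return the
# pooled embedding. A real pipeline would start from pretrained weights.
vit = timm.create_model("vit_tiny_patch16_224", pretrained=False, num_classes=0)
vit.eval()
with torch.no_grad():
    img_feats = vit(images).numpy()

# Feature-level fusion: concatenate image embeddings, SNP dosages, and PRS.
fused = np.hstack([img_feats, snps, prs])

# 10-fold cross-validation, mirroring the paper's validation scheme.
aucs = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(fused, labels):
    clf = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
    clf.fit(fused[tr], labels[tr])
    aucs.append(roc_auc_score(labels[te], clf.predict_proba(fused[te])[:, 1]))
print(f"mean cross-validated AUC: {np.mean(aucs):.3f}")

Concatenation is only one plausible reading of the ⊕ operator; given the abstract alone, score-level fusion (e.g., averaging class probabilities from a fine-tuned ViT and an XGBoost model) would be an equally reasonable alternative.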
ISSN: 0022-3956, 1879-1379
DOI: 10.1016/j.jpsychires.2024.02.036