Self-Supervised Pre-training Tasks for an fMRI Time-series Transformer in Autism Detection

Bibliographic Details
Published in: arXiv.org
Main Authors: Zhou, Yinchi; Duan, Peiyu; Du, Yuexi; Dvornek, Nicha C.
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.09.2024

More Information
Summary: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that encompasses a wide variety of symptoms and degrees of impairment, which makes diagnosis and treatment challenging. Functional magnetic resonance imaging (fMRI) has been extensively used to study brain activity in ASD, and machine learning methods have been applied to analyze resting-state fMRI (rs-fMRI) data. However, fewer studies have explored recent transformer-based models on rs-fMRI data. Given the superiority of transformer models in capturing long-range dependencies in sequence data, we have developed a transformer-based self-supervised framework that directly analyzes time-series fMRI data without computing functional connectivity. To address over-fitting on small datasets and enhance model performance, we propose self-supervised pre-training tasks that reconstruct randomly masked fMRI time-series data, investigating the effects of various masking strategies. We then fine-tune the model for the ASD classification task and evaluate it on two public datasets using five-fold cross-validation with different amounts of training data. The experiments show that randomly masking entire ROIs gives better model performance than randomly masking time points in the pre-training step, resulting in an average improvement of 10.8% for AUC and 9.3% for subject accuracy compared with the transformer model trained from scratch across different levels of training data availability. Our code is available on GitHub.
ISSN: 2331-8422
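
The summary contrasts two pre-training masking strategies: masking random time points versus masking entire ROI time series. The sketch below illustrates that difference on a single subject's ROI-by-time fMRI matrix; it is only a reading aid, not the authors' code, and the function names, masking ratio, and array shapes are hypothetical.

# Illustrative sketch only: function names, masking ratio, and shapes are
# hypothetical stand-ins for the two masking strategies compared in the paper.
import numpy as np

def mask_time_points(x, ratio=0.15, rng=None):
    """Randomly zero out whole time points (columns) across all ROIs.

    x: array of shape (n_rois, n_timepoints) with one subject's
       ROI-averaged fMRI time series.
    Returns the masked series and a boolean mask of masked time points.
    """
    rng = rng or np.random.default_rng()
    n_rois, n_time = x.shape
    masked_cols = rng.random(n_time) < ratio
    x_masked = x.copy()
    x_masked[:, masked_cols] = 0.0
    return x_masked, masked_cols

def mask_rois(x, ratio=0.15, rng=None):
    """Randomly zero out entire ROI time series (rows); this is the strategy
    the summary reports as performing better."""
    rng = rng or np.random.default_rng()
    n_rois, n_time = x.shape
    masked_rows = rng.random(n_rois) < ratio
    x_masked = x.copy()
    x_masked[masked_rows, :] = 0.0
    return x_masked, masked_rows

# Pre-training would feed x_masked to the transformer and minimize a
# reconstruction loss (e.g., MSE) on the masked entries only.
x = np.random.randn(200, 120)          # e.g., 200 ROIs x 120 time points
x_roi_masked, roi_mask = mask_rois(x)
recon_target = x[roi_mask, :]          # values the model must reconstruct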