BrainMAE: A Region-aware Self-supervised Learning Framework for Brain Signals
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 24.06.2024 |
Summary: The human brain is a complex, dynamic network, commonly studied using
functional magnetic resonance imaging (fMRI) and modeled as a network of
regions of interest (ROIs) to understand various brain functions. Recent
studies use deep learning to learn brain network representations from the
functional connectivity (FC) profile, and broadly fall into two categories.
Fixed-FC approaches rely on a single FC profile, which captures only the
linear temporal relations within the brain network and therefore misses
informative temporal dynamics. Dynamic-FC approaches, in contrast, model the
evolving FC profile over time, but often perform less well because of the
inherently noisy nature of fMRI data.

To address these challenges, we propose the Brain Masked Auto-Encoder
(BrainMAE) for learning representations directly from fMRI time-series data.
Our approach incorporates two essential components: a region-aware graph
attention mechanism designed to capture the relationships between different
brain ROIs, and a novel self-supervised masked autoencoding framework for
effective model pre-training. Together, these components let the model capture
rich temporal dynamics of brain activity while remaining resilient to the
inherent noise in fMRI data. Our experiments show that BrainMAE consistently
outperforms established baselines by significant margins on four distinct
downstream tasks. Finally, leveraging the model's inherent interpretability,
our analysis of the learned representations reveals findings that resonate
with ongoing neuroscience research.
DOI: 10.48550/arxiv.2406.17086
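The masked-autoencoding pre-training idea described in the summary can be illustrated with a minimal sketch: hide a fraction of the time points in an ROI-by-time matrix and score reconstruction only on the hidden positions. This is not the paper's implementation (which uses a learned transformer encoder/decoder and region-aware graph attention); the function names, the temporal masking granularity, and the 75% mask ratio are illustrative assumptions.

```python
import numpy as np

def mask_timeseries(x, mask_ratio=0.75, rng=None):
    """Randomly hide a fraction of time points in an ROI x time matrix.

    x: array of shape (n_rois, n_timepoints), e.g. fMRI ROI signals.
    Returns (x_masked, mask): the series with masked columns zeroed,
    and a boolean vector that is True at hidden time points.
    """
    rng = np.random.default_rng(rng)
    n_rois, n_t = x.shape
    n_masked = int(round(mask_ratio * n_t))
    # Illustrative choice: mask whole time points across all ROIs.
    idx = rng.choice(n_t, size=n_masked, replace=False)
    mask = np.zeros(n_t, dtype=bool)
    mask[idx] = True
    x_masked = x.copy()
    x_masked[:, mask] = 0.0
    return x_masked, mask

def masked_reconstruction_loss(pred, target, mask):
    """Mean squared error computed only over the masked time points,
    so the model is scored on what it had to infer, not what it saw."""
    diff = (pred - target)[:, mask]
    return float(np.mean(diff ** 2))
```

In a full pipeline, an encoder would map `x_masked` to latent tokens and a decoder would predict the hidden columns; minimizing `masked_reconstruction_loss` drives the encoder to model temporal structure rather than copy its input.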