OCL: Ordinal Contrastive Learning for Imputing Features with Progressive Labels
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 03.03.2025 |
Subjects | |
Summary: | Accurately discriminating the progressive stages of Alzheimer's Disease (AD) is crucial for early diagnosis and prevention. Understanding the complex pathology of AD often requires multiple imaging modalities; however, acquiring a complete set of images is challenging due to the high cost and burden on subjects. As a result, missing data become inevitable, limiting sample sizes and reducing the precision of downstream analyses. To tackle this challenge, we introduce a holistic imaging feature imputation method that leverages diverse imaging features while retaining all subjects. The proposed method comprises two networks: 1) an encoder that extracts modality-independent embeddings, and 2) a decoder that reconstructs the original measures conditioned on their imaging modalities. The encoder includes a novel *ordinal contrastive loss*, which aligns samples in the embedding space according to the progression of AD. We also maximize the modality-wise coherence of embeddings within each subject, in conjunction with domain adversarial training, to further enhance alignment between different imaging modalities. Together, these components enable holistic imaging feature imputation across modalities in the shared embedding space. In experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, our networks deliver favorable results against imputation baselines for both statistical analysis and classification. |
---|---|
DOI: | 10.48550/arxiv.2503.02899 |
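
The abstract names an *ordinal contrastive loss* as the encoder's key component but gives no formula. Below is a minimal PyTorch sketch of one plausible formulation, assuming a supervised-contrastive-style objective in which same-stage pairs are positives and negative pairs are weighted by their gap in the ordinal stage labels; the weighting scheme `1 + |y_i - y_j|`, the function name, and the CN=0/MCI=1/AD=2 encoding are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def ordinal_contrastive_loss(embeddings, stages, temperature=0.1):
    """Sketch of an ordinal contrastive loss over disease-stage labels.

    Same-stage pairs are treated as positives; different-stage pairs are
    negatives whose weight in the softmax denominator grows with the
    ordinal gap |y_i - y_j|, so CN-vs-AD pairs are repelled more strongly
    than CN-vs-MCI pairs. (Hypothetical weighting, not the paper's exact form.)

    embeddings: (N, D) encoder outputs for one batch.
    stages:     (N,) integer ordinal labels, e.g. CN=0, MCI=1, AD=2.
    """
    z = F.normalize(embeddings, dim=1)
    n = z.size(0)
    sim = (z @ z.t()) / temperature
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))  # drop self-similarities

    gap = (stages.unsqueeze(0) - stages.unsqueeze(1)).abs().float()
    pos = (gap == 0) & ~eye  # same-stage pairs are positives

    # Negatives at larger ordinal gaps count more in the denominator.
    weights = torch.where(pos, torch.ones_like(gap), 1.0 + gap)
    log_denom = torch.logsumexp(sim + weights.log(), dim=1, keepdim=True)

    log_prob = torch.where(pos, sim - log_denom, torch.zeros_like(sim))
    has_pos = pos.any(dim=1)  # skip anchors with no same-stage partner
    per_anchor = log_prob.sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return -per_anchor[has_pos].mean()


# Toy usage: 8 subjects with 64-d embeddings and CN/MCI/AD stage labels.
z = torch.randn(8, 64)
y = torch.tensor([0, 0, 1, 1, 2, 2, 0, 2])
print(ordinal_contrastive_loss(z, y))
```

The intuition behind this sketch is that a plain supervised contrastive loss treats all negatives equally, whereas an ordinal weighting makes the embedding geometry reflect disease progression: stages that are far apart clinically are pushed further apart in the shared embedding space, which is the property the abstract attributes to the proposed loss.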