Multi-instance discriminative contrastive learning for brain image representation
Published in | Neural Computing & Applications, Vol. 37, No. 11, pp. 7459-7472
---|---
Main Authors | , , ,
Format | Journal Article
Language | English
Published | London: Springer London; Springer Nature B.V., 01.04.2025
Subjects | |
Summary: This paper focuses on learning discriminative representations for brain images, a critical task for understanding brain development. Related studies usually extract manual and statistical features from functional magnetic resonance images (fMRI) to differentiate brain patterns. However, these features fail to capture implicit, high-order variance, and existing representation methods often suffer from weak manual features and small sample sizes. This paper introduces a weakly supervised representation learning model, dubbed multi-instance discriminative contrastive learning (MIDCL), to identify different MRI patterns. MIDCL generates two views of each instance of a subject by injecting noise patterns and then obtains latent representations for them by training an encoder network and a projection network. To address the multi-instance setting, MIDCL simultaneously minimizes an unsupervised contrastive loss (UCL) between the two representations at the instance level and a supervised contrastive loss (SCL) between the two concatenated feature vectors at the subject level. We finally conducted experiments on two publicly available brain image datasets. The results show that MIDCL benefits from both UCL and SCL, improving brain image classification performance compared with state-of-the-art models.
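The abstract's two-level objective can be made concrete with a short sketch. The PyTorch code below is a minimal, hypothetical rendering that assumes an NT-Xent form for the instance-level UCL and a SupCon-style loss for the subject-level SCL; the function names, the temperature `tau`, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the MIDCL objective described in the abstract (PyTorch).
# Shapes, augmentation, and hyperparameters are assumptions for illustration.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive loss (UCL) between two views of each instance.

    z1, z2: (N, d) projections of two noise-augmented views; row i of z1 and
    row i of z2 form the only positive pair for instance i.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / tau                                # scaled cosine similarity
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    n = z1.size(0)
    # Positives sit n rows apart: (i, i+n) and (i+n, i).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def sup_con(features, labels, tau=0.5):
    """Supervised contrastive loss (SCL) at the subject level.

    features: (B, d) one concatenated feature vector per subject;
    labels: (B,) subject classes. All same-class pairs count as positives.
    """
    f = F.normalize(features, dim=1)
    sim = f @ f.t() / tau
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pos.fill_diagonal_(0.0)                                      # drop self-pairs
    mask = torch.ones_like(pos) - torch.eye(len(labels), device=pos.device)
    log_prob = logits - torch.log((mask * logits.exp()).sum(1, keepdim=True))
    denom = pos.sum(1).clamp(min=1.0)                # subjects without positives
    return -((pos * log_prob).sum(1) / denom).mean()
```

A training step would then minimize `nt_xent(z1, z2) + lam * sup_con(subject_feats, labels)`, where `subject_feats` concatenates the instance-level features of each subject and the trade-off weight `lam` is an assumed hyperparameter.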
ISSN: 0941-0643; 1433-3058
DOI: 10.1007/s00521-022-07524-7