Contrastive pretraining improves deep learning classification of endocardial electrograms in a preclinical model
| Published in | Heart Rhythm O2, Vol. 6, no. 4, pp. 473–480 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | United States: Elsevier Inc., 01.04.2025 |
| ISSN | 2666-5018 |
| DOI | 10.1016/j.hroo.2025.01.008 |
Summary: Rotors and focal ectopies, or “drivers,” are hypothesized mechanisms of persistent atrial fibrillation (AF). Machine learning algorithms have been used to identify these drivers, but the limited size of current driver data sets constrains their performance.
We proposed that pretraining using unsupervised learning on a substantial data set of unlabeled electrograms could enhance classifier accuracy when applied to a smaller driver data set.
We used a SimCLR-based framework to pretrain a residual neural network on 113,000 unlabeled 64-electrode measurements from a canine model of AF. The network was then fine-tuned to identify drivers from intracardiac electrograms. Various augmentations, including cropping, Gaussian blurring, and rotation, were applied during pretraining to improve the robustness of the learned representations.
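The paper's code is not reproduced in this record, but the methods paragraph above maps onto a standard SimCLR recipe. Below is a minimal sketch, assuming PyTorch, a ResNet-18 backbone, and each 64-electrode measurement arranged as an 8×8 single-channel grid; the model class, loss function, augmentation parameters, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal SimCLR-style pretraining sketch (illustrative, not the authors' code).
# Assumes each unlabeled 64-electrode measurement is laid out as an 8x8 grid.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision.models import resnet18

class SimCLRModel(nn.Module):
    def __init__(self, proj_dim=64):
        super().__init__()
        backbone = resnet18(weights=None)
        # Adapt the stem to single-channel electrogram input and keep
        # spatial detail on the tiny 8x8 grid.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=3, stride=1,
                                   padding=1, bias=False)
        backbone.maxpool = nn.Identity()
        backbone.fc = nn.Identity()              # expose 512-d features
        self.encoder = backbone
        self.projector = nn.Sequential(          # SimCLR projection head
            nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, proj_dim))

    def forward(self, x):
        return self.projector(self.encoder(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (the SimCLR objective)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # drop self-similarity
    # The positive for sample i is its other augmented view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# The augmentations named in the abstract: cropping, Gaussian blur, rotation.
# (Random parameters are drawn once per call, so they are shared across the
# batch in this sketch.)
augment = T.Compose([
    T.RandomResizedCrop(8, scale=(0.6, 1.0), antialias=True),
    T.GaussianBlur(kernel_size=3),
    T.RandomRotation(degrees=90),
])

model = SimCLRModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(32, 1, 8, 8)      # stand-in for unlabeled electrograms
opt.zero_grad()
loss = nt_xent_loss(model(augment(batch)), model(augment(batch)))
loss.backward()
opt.step()
```

The NT-Xent loss pulls the two augmented views of each measurement together while pushing them apart from every other sample in the batch, which is what lets the encoder learn electrogram structure without any driver labels; fine-tuning would then replace the projection head with a classification head trained on the labeled driver set.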
Pretraining significantly improved driver detection accuracy compared with a non-pretrained network (80.8% vs 62.5%). The pretrained network also demonstrated greater resilience to reductions in training data set size, maintaining higher accuracy even with a 30% reduction in data. Gradient-weighted Class Activation Mapping (Grad-CAM) analysis revealed that the network’s attention aligned well with manually annotated driver regions, suggesting that the network learned meaningful features for driver detection.
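Grad-CAM is a standard, model-agnostic technique; a compact sketch of how such attention maps could be computed over the final convolutional block is shown below, again assuming a PyTorch classifier built on the encoder sketched above. The `target_layer` choice (e.g., the ResNet's `layer4`) and the usage line are assumptions, not details from the paper.

```python
# Minimal Grad-CAM sketch (illustrative, not the authors' pipeline).
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Grad-CAM heatmap for `class_idx`, resized to the input's spatial shape."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)                              # forward pass records activations
    model.zero_grad()
    logits[:, class_idx].sum().backward()          # backward pass records gradients
    h1.remove(); h2.remove()
    A, dA = feats[0], grads[0]
    w = dA.mean(dim=(2, 3), keepdim=True)          # channel weights: pooled gradients
    cam = F.relu((w * A).sum(dim=1, keepdim=True)) # weighted sum of activation maps
    cam = F.interpolate(cam, size=x.shape[-2:], mode='bilinear', align_corners=False)
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)   # normalize to [0, 1]

# Hypothetical usage with a fine-tuned classifier wrapping the encoder above:
# heatmap = grad_cam(classifier, egm_batch, classifier.encoder.layer4, class_idx=1)
```

Upsampling the coarse class-activation map back to the electrode grid is what allows the network's attention to be compared against manually annotated driver regions, as reported in the results.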
This study demonstrates that contrastive pretraining can enhance the accuracy of driver detection algorithms in AF. The findings support the broader application of transfer learning to other electrogram-based tasks, potentially improving outcomes in clinical electrophysiology.