Self-supervised Representation Learning on Electronic Health Records with Graph Kernel Infomax

Bibliographic Details
Published in: arXiv.org
Main Authors: Yao, Hao-Ren; Cao, Nairen; Russell, Katina; Chang, Der-Chen; Frieder, Ophir; Fineman, Jeremy
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 20.02.2024

Summary: Learning Electronic Health Records (EHRs) representation is a preeminent yet under-explored research topic. It benefits various clinical decision support applications, e.g., medication outcome prediction or patient similarity search. Current approaches focus on task-specific label supervision on vectorized sequential EHR, which is not applicable to large-scale unsupervised scenarios. Recently, contrastive learning has shown great success on self-supervised representation learning problems. However, complex temporality often degrades the performance. We propose Graph Kernel Infomax, a self-supervised graph kernel learning approach on the graphical representation of EHR, to overcome these problems. Unlike the state-of-the-art, we do not change the graph structure to construct augmented views. Instead, we use Kernel Subspace Augmentation to embed nodes into two geometrically different manifold views. The entire framework is trained by contrasting node and graph representations on those two manifold views through commonly used contrastive objectives. Empirically, on publicly available benchmark EHR datasets, our approach yields performance on clinical downstream tasks that exceeds the state-of-the-art. Theoretically, the variation in distance metrics naturally creates different views as data augmentation without changing the graph structure.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2209.00655
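
The summary above describes the core mechanism: node embeddings are projected into two kernel subspaces that act as geometrically different views of the same, unmodified graph, and node- and graph-level representations are then contrasted across those views. Below is a minimal, hedged sketch of that setup in Python/PyTorch. It is not the authors' implementation: the toy message-passing encoder, the specific RBF and cosine kernel feature maps over shared anchor points, and the InfoNCE plus local-global loss pairing are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

def encode(adj, x, weight):
    """Stand-in GNN encoder: one round of mean-aggregation message passing."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return torch.tanh((adj @ x / deg) @ weight)

def rbf_features(z, anchors, gamma=1.0):
    """Kernel-subspace view 1: RBF kernel values against shared anchor points."""
    return torch.exp(-gamma * torch.cdist(z, anchors).pow(2))

def cosine_features(z, anchors):
    """Kernel-subspace view 2: cosine similarities against the same anchors
    (a different distance metric, hence a geometrically different view)."""
    return F.normalize(z, dim=1) @ F.normalize(anchors, dim=1).T

def infonce(h_a, h_b, tau=0.2):
    """Node-level cross-view InfoNCE: node i in view A matches node i in view B."""
    h_a, h_b = F.normalize(h_a, dim=1), F.normalize(h_b, dim=1)
    logits = h_a @ h_b.T / tau
    return F.cross_entropy(logits, torch.arange(h_a.size(0)))

def local_global(h, summary, h_corrupt):
    """Node-graph contrast: real nodes should score high against the pooled
    graph summary, corrupted nodes should score low."""
    s = F.normalize(summary, dim=0)
    pos = F.normalize(h, dim=1) @ s
    neg = F.normalize(h_corrupt, dim=1) @ s
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Tiny synthetic "patient graph": 6 nodes, 8-dim features, random undirected edges.
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.T) > 0).float()
weight = torch.randn(8, 16, requires_grad=True)   # encoder parameters
anchors = torch.randn(4, 16)                      # anchors spanning both kernel subspaces

perm = torch.randperm(x.size(0))                  # corruption: shuffle features over nodes
z = encode(adj, x, weight)                        # the graph structure is never altered
z_corrupt = encode(adj, x[perm], weight)
h1, h2 = rbf_features(z, anchors), cosine_features(z, anchors)
c1, c2 = rbf_features(z_corrupt, anchors), cosine_features(z_corrupt, anchors)
s1, s2 = h1.mean(dim=0), h2.mean(dim=0)           # graph-level summaries per view

loss = (infonce(h1, h2) + infonce(h2, h1)
        + local_global(h1, s2, c1) + local_global(h2, s1, c2))
loss.backward()                                   # gradients reach the encoder weight
print(float(loss))

Note that the two views differ only in the kernel (i.e., the distance metric) applied to the same node embeddings; the graph structure itself is never altered, which mirrors the claim made in the summary.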