Model-Attentive Ensemble Learning for Sequence Modeling
| Published in | arXiv.org |
|---|---|
| Main Authors | , , |
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 23.02.2021 |
| ISSN | 2331-8422 |
| Summary | Medical time-series datasets have unique characteristics that make prediction tasks challenging. Most notably, patient trajectories often contain longitudinal variations in their input-output relationships, generally referred to as temporal conditional shift. Designing sequence models capable of adapting to such time-varying distributions remains a prevailing problem. To address this we present Model-Attentive Ensemble learning for Sequence modeling (MAES). MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions. We demonstrate that MAES significantly out-performs popular sequence models on datasets subject to temporal shift. |
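The summary describes an attention-gated mixture of experts: a gate scores each expert against the current input and combines the experts' predictions with the resulting attention weights. A minimal illustrative sketch of that idea (not the paper's implementation — MAES uses learned time-series experts and a trained gate, whereas here the experts are fixed linear maps and all parameters are random):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class AttentiveMixture:
    """Toy attention-gated mixture of experts.

    Each expert is a linear map producing a scalar prediction; the gate
    computes attention scores between the input and per-expert keys and
    turns them into convex combination weights via a softmax.
    """

    def __init__(self, n_experts, d_in):
        self.W = rng.normal(size=(n_experts, d_in))     # one linear expert per row
        self.keys = rng.normal(size=(n_experts, d_in))  # per-expert key vectors

    def predict(self, x):
        preds = self.W @ x                          # each expert's scalar prediction
        scores = self.keys @ x / np.sqrt(len(x))    # attention scores (query = input)
        w = softmax(scores)                         # gate: weights sum to 1
        return float(w @ preds), w                  # weighted ensemble prediction

model = AttentiveMixture(n_experts=3, d_in=4)
x = rng.normal(size=4)
y, w = model.predict(x)
print(y, w)
```

Because the gate is input-dependent, a different input re-weights the experts, which is the mechanism the abstract credits for adapting to temporal conditional shift.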