Model-Attentive Ensemble Learning for Sequence Modeling

Bibliographic Details
Published in: arXiv.org
Main Authors: Bourgin, Victor D.; Bica, Ioana; van der Schaar, Mihaela
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 23.02.2021
ISSN: 2331-8422


More Information
Summary: Medical time-series datasets have unique characteristics that make prediction tasks challenging. Most notably, patient trajectories often contain longitudinal variations in their input-output relationships, generally referred to as temporal conditional shift. Designing sequence models capable of adapting to such time-varying distributions remains a prevailing problem. To address this, we present Model-Attentive Ensemble learning for Sequence modeling (MAES). MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions. We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
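The summary describes a mixture of time-series experts whose predictions are combined through an attention-based gate. As a rough illustrative sketch only (not the authors' implementation: the expert functions, key vectors, and sequence-summary query below are invented for illustration), the adaptive weighting step can be expressed as:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_gated_prediction(history, experts, keys, query_fn):
    """Combine expert predictions with an attention-based gate.

    history  : past observations, shape (T, d)
    experts  : list of callables, each mapping history -> scalar prediction
    keys     : one (hypothetical) key vector per expert, shape (n_experts, k)
    query_fn : maps history -> query vector of shape (k,)
    """
    preds = np.array([f(history) for f in experts])  # per-expert predictions
    q = query_fn(history)                            # summarize sequence dynamics
    weights = softmax(keys @ q)                      # attention over experts
    return float(weights @ preds), weights           # adaptively weighted output

# Toy usage: two experts specialized on different dynamics.
rng = np.random.default_rng(0)
history = rng.normal(size=(10, 3))
experts = [
    lambda h: float(h[-1].mean()),  # "recent value" expert
    lambda h: float(h.mean()),      # "long-run average" expert
]
keys = rng.normal(size=(2, 4))
query_fn = lambda h: np.concatenate([h[-1], [h.std()]])  # crude sequence summary
y_hat, w = attention_gated_prediction(history, experts, keys, query_fn)
```

Because the gate is a softmax, the weights are non-negative and sum to one, so the combined prediction always lies within the range of the individual expert predictions; in the paper the gate and experts are trained jointly so the weights track the current sequence dynamics.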