Deep contextual multi-task feature fusion for enhanced concept, negation and speculation detection from clinical notes


Bibliographic Details
Published in: Informatics in Medicine Unlocked, Vol. 34, p. 101109
Main Authors: Narayanan, Sankaran; S.S., Madhuri; Ramesh, Maneesha V.; Rangan, P. Venkat; Rajan, Sreeranga P.
Format: Journal Article
Language: English
Published: Elsevier Ltd, 2022
Summary: Effective clinical decision support calls for precise detection of clinical entities such as diseases/disorders and associated assertions such as negation and speculation from clinical text. Contemporary approaches have relied on domain-specific Bidirectional Encoder Representations from Transformers (BERT) language models to achieve robust performance in clinical concept recognition. However, due to annotation scarcity, these approaches face a challenge in assertion detection. This study proposes a novel end-to-end neural model utilizing contextual features derived from a BERT ensemble, syntactic features derived from constituency parse trees, and multi-task learning for the enhanced detection of concept and assertion entities. Two clinical note benchmark datasets (n2c2 2010, n2c2 2012) were used to validate the proposed approach. Apart from achieving state-of-the-art performance in concept recognition (n2c2 2012), the proposed model significantly enhanced clinical note negation (+2.35 F1, McNemar's test) and speculation (+5.26 F1) detection compared with standalone transformer-based models. Assertion generalization improved by +2.23 F1, further reinforcing the effectiveness of the proposed strategy. Additionally, this study offers a generic methodology that integrates feature fusion, contextual language model ensembling, and multi-task learning to utilize transformer-based language models effectively.

Highlights:
• Development of an end-to-end multi-task neural model to enhance clinical concept and assertion identification.
• A generic methodology based on multi-transformer ensembling, constituency parse tree-based syntactic features, and multi-task transfer learning.
• Validation on two real-world clinical note benchmark datasets (n2c2 2010, n2c2 2012).
• Outperformance in both tasks compared with contemporary transformer-based approaches.
• Low-resource speculation recognition enhanced significantly (up to +5.26 F1, McNemar's test).
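The abstract's core idea — fusing contextual features from a transformer ensemble with parse-tree-derived syntactic features, then sharing a representation across concept and assertion tagging heads — can be illustrated with a minimal numpy sketch. All dimensions, the averaging fusion, and the random weight matrices below are illustrative assumptions standing in for trained components (e.g., the actual BERT variants, BiLSTM encoder, and tag sets used in the paper are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
SEQ_LEN, BERT_DIM, SYN_DIM, HID = 8, 768, 32, 128
N_CONCEPT_TAGS, N_ASSERT_TAGS = 7, 5  # e.g. BIO-style tag sets; illustrative

def fuse_and_predict(ctx_a, ctx_b, syn):
    """Fuse two transformers' contextual embeddings (ensemble) with
    syntactic features, then apply a shared layer and two task heads."""
    # Ensemble fusion: average the two transformers' token embeddings
    ctx = (ctx_a + ctx_b) / 2.0
    # Feature fusion: concatenate contextual and syntactic features
    fused = np.concatenate([ctx, syn], axis=-1)        # (SEQ_LEN, BERT_DIM+SYN_DIM)
    # Shared representation (a stand-in for a trained shared encoder)
    w_shared = rng.normal(size=(BERT_DIM + SYN_DIM, HID)) * 0.01
    shared = np.tanh(fused @ w_shared)                 # (SEQ_LEN, HID)
    # Multi-task heads: per-token concept logits and assertion logits
    w_concept = rng.normal(size=(HID, N_CONCEPT_TAGS)) * 0.01
    w_assert = rng.normal(size=(HID, N_ASSERT_TAGS)) * 0.01
    return shared @ w_concept, shared @ w_assert

ctx_a = rng.normal(size=(SEQ_LEN, BERT_DIM))  # output of transformer A (hypothetical)
ctx_b = rng.normal(size=(SEQ_LEN, BERT_DIM))  # output of transformer B (hypothetical)
syn = rng.normal(size=(SEQ_LEN, SYN_DIM))     # constituency-parse-derived features

concept_logits, assertion_logits = fuse_and_predict(ctx_a, ctx_b, syn)
print(concept_logits.shape, assertion_logits.shape)  # (8, 7) (8, 5)
```

The multi-task structure is visible in the two heads sharing one encoder: gradients from both the concept task and the lower-resource assertion task would update the shared weights, which is how multi-task learning can compensate for annotation scarcity in assertion detection.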
ISSN: 2352-9148
DOI: 10.1016/j.imu.2022.101109