Document-Level Event Argument Extraction with Sparse Representation Attention

Bibliographic Details
Published in: Mathematics (Basel), Vol. 12, no. 17, p. 2636
Main Authors: Zhang, Mengxi; Chen, Honghui
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2024
Summary: Document-level Event Argument Extraction (DEAE) aims to extract structural event knowledge composed of arguments and roles beyond the sentence level. Existing methods mainly focus on designing prompts and on using Abstract Meaning Representation (AMR) graph structure as additional features to enrich event argument representations. However, two challenges remain: (1) the long-range dependency between an event trigger and its arguments, and (2) distracting context in the document that can mislead argument classification. To address these issues, we propose a novel document-level event argument extraction model named AMR Parser and Sparse Representation (APSR). Specifically, APSR uses inter- and intra-sentential encoders to capture contextual information at different scopes. In particular, the intra-sentential encoder employs three types of sparse event argument attention mechanisms to capture long-range dependencies. APSR then constructs AMR semantic graphs, which capture the interactions among concepts well. Finally, APSR fuses the inter- and intra-sentential representations and predicts what role a candidate span plays. Experimental results on the RAMS and WikiEvents datasets demonstrate that APSR outperforms competitive baselines, improving F1 by 1.27% and 3.12%, respectively.
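The "sparse event argument attention" the summary describes can be illustrated with a minimal NumPy sketch: attention scores are computed as usual, but a boolean mask restricts each token to a small set of positions, here a local window plus the event-trigger token. The function names, the window-plus-trigger sparsity pattern, and all parameters below are illustrative assumptions for exposition, not the paper's actual mechanisms.

```python
import numpy as np

def sparse_attention(q, k, v, mask):
    """Scaled dot-product attention restricted to positions allowed by `mask`."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # disallowed positions get -inf so they receive zero attention weight
    scores = np.where(mask, scores, -np.inf)
    # numerically stable softmax over the allowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

def trigger_window_mask(n, trigger_idx, window=2):
    """Hypothetical sparsity pattern: each token attends to a local window
    of +/- `window` positions, plus the trigger token at `trigger_idx`."""
    idx = np.arange(n)
    local = np.abs(idx[:, None] - idx[None, :]) <= window
    local[:, trigger_idx] = True  # every token can always see the trigger
    return local

# Usage: 8 tokens of dimension 16, trigger at position 3
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 16))
k = rng.standard_normal((8, 16))
v = rng.standard_normal((8, 16))
out = sparse_attention(q, k, v, trigger_window_mask(8, trigger_idx=3))
```

Such a mask keeps the attention computation focused on trigger-relevant positions, which is one plausible way to limit the influence of distracting document context.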
ISSN: 2227-7390
DOI: 10.3390/math12172636