Joint entity and relation extraction with position-aware attention and relation embedding

Bibliographic Details
Published in: Applied Soft Computing, Vol. 119, p. 108604
Main Authors: Chen, Tiantian; Zhou, Lianke; Wang, Nianbin; Chen, Xirui
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2022
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2022.108604

Summary: The joint extraction of entities and relations is an important task in natural language processing that aims to obtain all relational triples in plain text. However, few existing methods excel at solving the overlapping triple problem. Moreover, most methods ignore the position and order of the words within an entity during entity extraction, which degrades triple extraction performance. To address these problems, a joint extraction model with position-aware attention and relation embedding, named PARE-Joint, is proposed. The model first recognizes the subjects and then uses a subject- and relation-guided attention network to learn an enhanced sentence representation and determine the corresponding objects. In this way, the interaction between entities and relations is captured, and the overlapping triple problem is better resolved. In addition, given the important role of word order within an entity for triple extraction, a position-aware attention mechanism is used to extract the subjects and the objects in the sentences, respectively. Experimental results demonstrate that the model solves the overlapping triple problem more effectively and outperforms other baselines on four public datasets.

Highlights:
• A joint extraction model with position-aware attention and relation embedding is proposed.
• The model solves the overlapping triple problem more effectively.
• The model increases the influence of the position and order of words within an entity.
• The model learns the interaction and correlation between relations and entities.
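To make the abstract's central idea concrete, the sketch below shows a generic position-aware attention step: attention scores are computed from token hidden states combined with position embeddings, and the resulting weights produce an enhanced sentence representation conditioned on a subject/relation query. This is a minimal illustration of the general mechanism only; the function name, shapes, and the additive way positions are mixed in are assumptions, not the paper's exact PARE-Joint formulation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def position_aware_attention(H, query, pos_emb):
    """Illustrative sketch (not the paper's exact formulation).

    H:       (n, d) token hidden states for a sentence
    query:   (d,)   query vector, e.g. derived from a subject
                    and a relation embedding
    pos_emb: (n, d) embeddings encoding each token's position

    Scores mix token content with position information, so the
    attention distribution is sensitive to word order.
    """
    scores = (H + pos_emb) @ query        # (n,) position-aware scores
    weights = softmax(scores)             # attention distribution over tokens
    return weights @ H                    # (d,) enhanced sentence representation

# Tiny synthetic example: 6 tokens, hidden size 4.
rng = np.random.default_rng(0)
n, d = 6, 4
H = rng.normal(size=(n, d))
query = rng.normal(size=d)
pos_emb = rng.normal(size=(n, d))
ctx = position_aware_attention(H, query, pos_emb)
print(ctx.shape)  # (4,)
```

In a cascade extractor of this kind, a representation like `ctx` would feed a tagger that marks object spans for the given subject and relation, which is how one subject can participate in several triples without the spans conflicting.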