A Bidirectional Extraction-Then-Evaluation Framework for Complex Relation Extraction

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 36, No. 12, pp. 7442-7454
Main Authors: Zhang, Weiyan; Wang, Jiacheng; Chen, Chuang; Lu, Wanpeng; Du, Wen; Wang, Haofen; Liu, Jingping; Ruan, Tong
Format: Journal Article
Language: English
Published: IEEE, 01.12.2024

Summary: Relation extraction is an important task in the field of natural language processing. Previous works mainly focus on adopting pipeline methods or joint methods to model relation extraction in general scenarios. However, these existing methods face challenges when adapting to complex relation extraction scenarios, such as handling overlapped triplets, multiple triplets, and cross-sentence triplets. In this paper, we revisit the advantages and disadvantages of the aforementioned methods in complex relation extraction. Based on this in-depth analysis, we propose a novel two-stage bidirectional extract-then-evaluate framework named BeeRe. In the extraction stage, we first obtain the subject set, relation set, and object set. Then, we design subject- and object-oriented triplet extractors to iteratively obtain candidate triplets, ensuring high recall. In the evaluation stage, we adopt a relation-oriented triplet filter that determines subject-object pairs based on the relations in the triplets obtained in the first stage, ensuring high precision. We conduct extensive experiments on three public datasets to show that BeeRe achieves state-of-the-art performance in both complex and general relation extraction scenarios. Even compared to closed-source and open-source large language models (LLMs), BeeRe still achieves significant performance gains.
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2024.3435765
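
The summary above describes a two-stage extract-then-evaluate architecture. The following Python sketch illustrates only that high-level control flow, under the assumption that the entity/relation extractors and the triplet filter are supplied as callables; the names beere_pipeline, subject_oriented_extractor, object_oriented_extractor, and relation_oriented_filter are hypothetical placeholders, not the authors' published implementation.

```python
# Minimal sketch of the two-stage extract-then-evaluate flow described in the
# summary. All names below are hypothetical placeholders for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Triplet:
    subject: str
    relation: str
    object: str


def beere_pipeline(document, extract_sets, subject_oriented_extractor,
                   object_oriented_extractor, relation_oriented_filter):
    """Stage 1 (extraction, high recall) then Stage 2 (evaluation, high precision)."""
    # Stage 1: obtain the subject set, relation set, and object set.
    subjects, relations, objects = extract_sets(document)

    # Subject-oriented pass: for each subject, propose (relation, object) pairs.
    candidates = set()
    for s in subjects:
        for r, o in subject_oriented_extractor(document, s, relations, objects):
            candidates.add(Triplet(s, r, o))

    # Object-oriented pass: for each object, propose (subject, relation) pairs.
    for o in objects:
        for s, r in object_oriented_extractor(document, o, relations, subjects):
            candidates.add(Triplet(s, r, o))

    # Stage 2: keep only candidates whose subject-object pair is accepted by the
    # relation-oriented filter for that candidate's relation.
    return {t for t in candidates
            if relation_oriented_filter(document, t.relation, t.subject, t.object)}
```

The bidirectional (subject- and object-oriented) passes favor recall by over-generating candidates, while the relation-oriented filter in the second stage restores precision; this mirrors the recall-then-precision split stated in the summary, though the actual model components are not shown here.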