Korean Semantic Role Labeling with Bidirectional Encoder Representations from Transformers and Simple Semantic Information

Bibliographic Details
Published in: Applied Sciences, Vol. 12; No. 12; p. 5995
Main Authors: Bae, Jangseong; Lee, Changki
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.06.2022

Summary: State-of-the-art semantic role labeling (SRL) performance has been achieved with neural network models that incorporate syntactic feature information such as dependency trees. In recent years, breakthroughs achieved with end-to-end neural network models have resulted in state-of-the-art SRL performance even without syntactic features. With the advent of the bidirectional encoder representations from transformers (BERT) language model, another breakthrough followed. Although the semantic information of each word in a sentence is important in determining the meaning of a word, previous studies of the end-to-end neural network method did not utilize semantic information. In this study, we propose a BERT-based SRL model that uses simple semantic information without syntactic feature information. To obtain this semantic information, we used PropBank, which describes the relational information between predicates and arguments. In addition, text-originated feature information obtained from the training text data was utilized. Our proposed model achieved state-of-the-art results on both the Korean PropBank and CoNLL-2009 English benchmarks.
ISSN: 2076-3417
DOI: 10.3390/app12125995
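
The abstract gives no implementation details; the sketch below is only a rough illustration of the kind of architecture it describes, namely a BERT encoder whose contextual token representations are combined with embeddings of simple semantic features (for example, PropBank frame information for the target predicate) before a per-token argument-label classifier. The class name, feature vocabulary, and hyperparameters are hypothetical assumptions, not the authors' code.

# Minimal sketch (not the authors' implementation): a BERT-style SRL tagger
# that concatenates contextual token vectors with embeddings of simple
# semantic feature IDs before classifying each token into an argument label.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertSrlTagger(nn.Module):
    def __init__(self, bert_name: str, num_semantic_feats: int,
                 sem_dim: int, num_labels: int):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)           # contextual encoder
        self.sem_emb = nn.Embedding(num_semantic_feats, sem_dim)   # simple semantic feature IDs (hypothetical vocabulary)
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(hidden + sem_dim, num_labels)  # per-token argument labels (e.g., BIO tags)

    def forward(self, input_ids, attention_mask, semantic_ids):
        # semantic_ids: one semantic feature ID per subword token
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        token_vecs = out.last_hidden_state                         # (batch, seq, hidden)
        sem_vecs = self.sem_emb(semantic_ids)                      # (batch, seq, sem_dim)
        features = torch.cat([token_vecs, sem_vecs], dim=-1)
        return self.classifier(features)                           # (batch, seq, num_labels) logits

Concatenating a small learned embedding of a categorical feature with the contextual vector is one common way to inject such information without modifying the encoder itself; how the paper actually encodes its semantic and text-originated features is not specified in this record.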