SentiBERT: Pre-training Language Model Combining Sentiment Information
Published in: Jisuanji kexue yu tansuo, Vol. 14, No. 9, pp. 1563-1570
Format: Journal Article
Language: Chinese
Published: Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 01.09.2020
Summary: Pre-training language models on large-scale unsupervised corpora is attracting the attention of researchers in the field of natural language processing. Existing models mainly extract the semantic and structural features of the text in the pre-training stage. Aiming at sentiment tasks and complex emotional features, a pre-training method that focuses on learning sentiment features is proposed on the basis of the recent pre-training language model BERT (bidirectional encoder representations from transformers). In the further pre-training stage, the pre-training task of BERT is improved with the help of a sentiment dictionary. At the same time, a context-based word sentiment prediction task is used to classify the sentiment of masked words, so as to obtain textual representations biased towards sentiment features. Finally, fine-tuning is performed on small labeled data sets. Experimental results show that, compared with the original BERT model, the accuracy on sentiment tasks is improved by 1 percentage point.
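The abstract describes an extra pre-training objective: tokens whose polarity is known from a sentiment dictionary are masked, and their sentiment is predicted from the contextual representation. The sketch below is not the paper's released code; it only illustrates such a masked-word sentiment prediction head on top of BERT. The model name, the 3-way label set, and the toy lexicon are illustrative assumptions, and in practice this loss would be combined with BERT's original masked-language-modeling loss during further pre-training.

```python
# Minimal sketch of a masked-word sentiment prediction objective (assumed setup,
# not the paper's implementation): lexicon words are masked and classified as
# negative / neutral / positive from their contextual BERT representations.
import torch
import torch.nn as nn
from transformers import BertTokenizerFast, BertModel

# Toy sentiment dictionary (assumption): 0 = negative, 1 = neutral, 2 = positive.
LEXICON = {"good": 2, "great": 2, "bad": 0, "terrible": 0}

class SentimentMLMHead(nn.Module):
    """BERT encoder plus a 3-way sentiment classifier over masked positions."""
    def __init__(self, model_name: str = "bert-base-uncased", num_polarities: int = 3):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_polarities)

    def forward(self, input_ids, attention_mask, mask_positions):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        # Gather only the contextual vectors at the masked token positions.
        batch_idx = torch.arange(hidden.size(0)).unsqueeze(1)
        masked_vecs = hidden[batch_idx, mask_positions]          # (batch, num_masks, hidden)
        return self.classifier(masked_vecs)                      # (batch, num_masks, 3)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = SentimentMLMHead()

text = "the movie was good but the ending felt bad"
enc = tokenizer(text, return_tensors="pt")
input_ids = enc["input_ids"].clone()

# Mask every token found in the sentiment dictionary and record its polarity label.
tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
mask_positions, labels = [], []
for i, tok in enumerate(tokens):
    if tok in LEXICON:
        mask_positions.append(i)
        labels.append(LEXICON[tok])
        input_ids[0, i] = tokenizer.mask_token_id

mask_positions = torch.tensor([mask_positions])
labels = torch.tensor([labels])

logits = model(input_ids, enc["attention_mask"], mask_positions)
loss = nn.CrossEntropyLoss()(logits.view(-1, 3), labels.view(-1))
loss.backward()  # in further pre-training, sum this with the standard MLM loss
```

After further pre-training with such an objective, the encoder would be fine-tuned on a small labeled sentiment data set in the usual BERT fashion (a classifier over the [CLS] representation), which is the fine-tuning step the abstract refers to.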
ISSN: 1673-9418
DOI: 10.3778/j.issn.1673-9418.1910037