Post-encoding and contrastive learning method for response selection task
Published in | 2023 5th International Conference on Natural Language Processing (ICNLP), pp. 1-5 |
---|---|
Main Authors | , , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.03.2023 |
Subjects | |
Summary: | Retrieval-based dialogue systems have achieved great performance improvements with the rise of pre-trained language models and the Transformer architecture. In context-response selection, a pre-trained language model can capture relationships between texts, but existing methods do not consider the order of sentences or the relationship between the context and the response. At the same time, because retrieval-based dialogue systems have few positive samples, it is difficult to train a high-performing model. In addition, existing methods usually incur a large computational cost after concatenating the context and the response. To solve these problems, we propose a post-encoding approach combined with a contrastive learning strategy. The order of sentences in the dialogue context and their relationship to the response are reflected in the encoding process, and a new loss function is designed for contrastive learning. The proposed approach is validated through experiments on public datasets, and the results show that our model achieves better performance than existing methods. |
---|---|
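The abstract mentions a new contrastive loss but does not give its form. As an illustration only, a minimal sketch of a standard InfoNCE-style contrastive objective for response selection is shown below, where one positive response is contrasted against in-batch negatives; the function name, the `temperature` parameter, and the similarity inputs are assumptions for illustration, not the paper's actual loss.

```python
import math

def contrastive_response_loss(sims, positive_idx, temperature=0.1):
    """InfoNCE-style contrastive loss for one context (illustrative sketch).

    sims: similarity scores between the context encoding and each candidate
    response encoding; positive_idx marks the single positive response, and
    the remaining candidates act as negatives.
    """
    scaled = [s / temperature for s in sims]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    # negative log-probability assigned to the positive response
    return -math.log(exps[positive_idx] / sum(exps))
```

With this objective, a context whose positive response already scores well above the negatives yields a small loss, while a mis-ranked positive yields a large one, which is the training signal contrastive methods rely on when positive samples are scarce.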
DOI: | 10.1109/ICNLP58431.2023.00050 |