Label and Context Augmentation for Response Selection at DSTC8

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 29, pp. 2541–2550
Main Authors: Jeong, Myeongho; Choi, Seungtaek; Yeo, Jinyoung; Hwang, Seung-won
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021

Summary: This paper studies the dialogue response selection task. As state-of-the-art approaches are neural models requiring large training sets, data augmentation has been considered as a means to overcome the sparsity of observational annotation, where only one observed response is annotated as gold. In this paper, we first consider label augmentation: selecting, among unobserved utterances, those that would "counterfactually" replace the labeled response for the given context, and augmenting labels only when that is the case. The key advantage of this approach is that it incurs no human annotation overhead and thus does not increase the training cost, which matters for low-resource scenarios. In addition, we consider context augmentation for cases where the given dialogue context is not sufficient for label augmentation. Inspired by open-domain question answering, we "decontextualize" by retrieving missing contexts, such as a related persona. We empirically show that our pipeline improves BERT-based models in two different response selection tasks without incurring annotation overheads.
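The label-augmentation idea in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring function stands in for a trained matching model (e.g., a BERT-based ranker), and the toy word-overlap scorer, threshold value, and all names here are hypothetical.

```python
# Hypothetical sketch: score unobserved candidate responses against the
# dialogue context, and treat high-scoring ones as additional ("counterfactual")
# gold labels. `score` is a stand-in for a trained matching model.

def augment_labels(context, gold_response, candidates, score, threshold):
    """Return the gold response plus candidates scoring above the threshold."""
    augmented = [gold_response]
    for cand in candidates:
        if score(context, cand) >= threshold:
            augmented.append(cand)
    return augmented

def overlap_score(context, cand):
    """Toy scorer: fraction of candidate words shared with the context."""
    ctx, c = set(context.lower().split()), set(cand.lower().split())
    return len(ctx & c) / max(len(c), 1)

labels = augment_labels(
    context="where can i get good coffee",
    gold_response="try the cafe on main street",
    candidates=["the cafe serves good coffee", "i like trains"],
    score=overlap_score,
    threshold=0.35,
)
```

Here only the plausible unobserved response is promoted to a label; in the paper's setting, a stronger model-based scorer would play the role of `overlap_score`.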
ISSN:2329-9290
2329-9304
DOI:10.1109/TASLP.2021.3076876