Improving Longer-range Dialogue State Tracking

Bibliographic Details
Published in: arXiv.org
Main Authors: Zhang, Ye; Cao, Yuan; Mahdieh, Mahdis; Zhao, Jeffrey; Wu, Yonghui
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 05.05.2021

More Information
Summary: Dialogue state tracking (DST) is a pivotal component in task-oriented dialogue systems. While it is relatively easy for a DST model to capture belief states in short conversations, DST becomes more challenging as the length of a dialogue increases because more distracting context is introduced. In this paper, we aim to improve the overall performance of DST with a special focus on handling longer dialogues. We tackle this problem from three perspectives: 1) a model designed to enable hierarchical slot status prediction; 2) a balanced training procedure for generic and task-specific language understanding; 3) data perturbation, which enhances the model's ability to handle longer conversations. We conduct experiments on the MultiWOZ benchmark and demonstrate the effectiveness of each component via a set of ablation tests, especially on longer conversations.
ISSN: 2331-8422