On the Robustness of Intent Classification and Slot Labeling in Goal-oriented Dialog Systems to Real-world Noise
Format | Journal Article |
---|---|
Language | English |
Published | 14.04.2021 |
Summary: | Intent Classification (IC) and Slot Labeling (SL) models, which form the
basis of dialogue systems, often encounter noisy data in real-world
environments. In this work, we investigate how robust IC/SL models are to noisy
data. We collect and publicly release a test-suite for seven common noise types
found in production human-to-bot conversations (abbreviations, casing,
misspellings, morphological variants, paraphrases, punctuation, and synonyms).
On this test-suite, we show that common noise types substantially degrade the
IC accuracy and SL F1 performance of state-of-the-art BERT-based IC/SL models.
By leveraging cross-noise robustness transfer -- training on one noise type to
improve robustness on another noise type -- we design aggregate
data-augmentation approaches that increase model performance across all
seven noise types by +10.8% for IC accuracy and +15 points for SL F1 on
average. To the best of our knowledge, this is the first work to present a
single IC/SL model that is robust to a wide range of noise phenomena. |
DOI: | 10.48550/arxiv.2104.07149 |
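
To make the data-augmentation idea in the abstract concrete, here is a minimal Python sketch (not taken from the paper) that noises training utterances with two of the seven noise types mentioned, casing and punctuation. The function names (`noise_casing`, `noise_punctuation`, `augment`) and the mixing probability `p` are illustrative assumptions, not the authors' implementation.

```python
import random
import string

# Hypothetical sketch: simple noisers that mimic two of the seven noise
# types studied (casing and punctuation) for augmenting IC/SL training data.

def noise_casing(utterance: str) -> str:
    """Randomly lower- or upper-case the whole utterance."""
    return utterance.lower() if random.random() < 0.5 else utterance.upper()

def noise_punctuation(utterance: str) -> str:
    """Strip punctuation, as often happens in chat-style user input."""
    return utterance.translate(str.maketrans("", "", string.punctuation))

def augment(utterances, noisers, p=0.3):
    """Return the original utterances plus noised copies of a fraction p of them."""
    augmented = list(utterances)
    for text in utterances:
        if random.random() < p:
            noiser = random.choice(noisers)
            augmented.append(noiser(text))
    return augmented

if __name__ == "__main__":
    train = ["Book a table for two at 7 pm.", "What's the weather in Seattle?"]
    print(augment(train, [noise_casing, noise_punctuation], p=1.0))
```

Training a BERT-based IC/SL model on such an augmented set, rather than on clean text alone, is one plausible way to realize the cross-noise robustness transfer the abstract describes.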