EnDex: Evaluation of Dialogue Engagingness at Scale

Bibliographic Details
Main Authors: Xu, Guangxuan; Liu, Ruibo; Harel-Canada, Fabrice; Chandra, Nischal Reddy; Peng, Nanyun
Format: Journal Article
Language: English
Published: 22.10.2022

Summary: Findings of EMNLP 2022. We propose EnDex, the first human-reaction-based model to evaluate dialogue engagingness. EnDex is trained on the 80k Reddit-based Engagement Dataset (RED), curated using a novel distant-supervision framework. Engagingness is a key measure that captures the high-level quality of AI dialogue systems and closely reflects actual user experience. However, data scarcity, together with the abstract and broad definition of engagingness, makes it challenging to develop an automatic metric. Our work departs from mainstream approaches that use synthetic negative examples to train binary classifiers and instead proposes a solution using distant supervision from human-reaction feedback. To support the soundness of our EnDex metric, we offer a theoretical foundation for engagement, an extensive ablation study, and empirical evidence of high correlation on five engagingness-related datasets. We will release the code, the off-the-shelf EnDex model, and a large-scale dataset upon paper publication to facilitate future research.
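The summary describes curating RED with distant supervision from human-reaction feedback rather than synthetic negatives. As a rough illustration of that general idea only, the sketch below weak-labels Reddit replies from reaction counts; the field names (upvotes, reply_count) and thresholds are assumptions made for illustration and do not reflect the paper's actual curation rules.

    # Minimal sketch: weak-labeling dialogue responses from reaction signals.
    # Thresholds and fields are illustrative assumptions, not the RED pipeline.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class RedditExchange:
        context: str        # parent post or comment
        response: str       # the reply being labeled
        upvotes: int        # community reaction to the reply
        reply_count: int    # number of follow-up replies it drew


    def weak_label(ex: RedditExchange,
                   up_threshold: int = 3,
                   reply_threshold: int = 1) -> Optional[int]:
        """Return 1 (engaging), 0 (not engaging), or None (discard as ambiguous)."""
        if ex.upvotes >= up_threshold and ex.reply_count >= reply_threshold:
            return 1
        if ex.upvotes <= 0 and ex.reply_count == 0:
            return 0
        return None  # middle-ground examples are dropped to keep labels clean

Labels produced this way could then be used to fine-tune a standard binary classifier over (context, response) pairs, which is the general shape of metric the summary contrasts with synthetic-negative training.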
DOI:10.48550/arxiv.2210.12362