Learning to Write with Coherence From Negative Examples

Bibliographic Details
Main Authors: Son, Seonil; Lim, Jaeseo; Jang, Youwon; Lee, Jaeyoung; Zhang, Byoung-Tak
Format: Journal Article
Language: English
Published: 22.09.2022
Summary: Coherence is one of the critical factors that determine the quality of writing. We propose a writing relevance (WR) training method for neural encoder-decoder natural language generation (NLG) models that improves the coherence of the generated continuation by leveraging negative examples. The WR loss regresses the vector representation of the context and the generated sentence toward the positive continuation by contrasting it with the negatives. We compare our approach with Unlikelihood (UL) training on a text continuation task over commonsense natural language inference (NLI) corpora to show which method better models coherence by avoiding unlikely continuations. The preference for our approach in human evaluation demonstrates the efficacy of our method in improving coherence.
DOI: 10.48550/arxiv.2209.10922
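
The summary describes the WR loss only at a high level: it pulls the representation of a context and its generated sentence toward the positive continuation while contrasting against negatives. Below is a minimal sketch of such a contrastive objective, assuming sentence-level embeddings and a softmax over cosine similarities; the function name, temperature parameter, and tensor shapes are illustrative assumptions, not the paper's exact formulation.

# A minimal sketch of a WR-style contrastive loss (assumed formulation;
# the paper's exact loss may differ). It pulls the context representation
# toward the positive continuation and away from negative continuations
# via cross-entropy over cosine similarities.
import torch
import torch.nn.functional as F

def wr_style_loss(context_vec, positive_vec, negative_vecs, temperature=0.1):
    # context_vec: (batch, dim) representation of the context + generated sentence
    # positive_vec: (batch, dim) representation of the ground-truth continuation
    # negative_vecs: (batch, num_neg, dim) representations of negative continuations
    ctx = F.normalize(context_vec, dim=-1)
    pos = F.normalize(positive_vec, dim=-1)
    neg = F.normalize(negative_vecs, dim=-1)

    pos_sim = (ctx * pos).sum(dim=-1, keepdim=True)   # (batch, 1)
    neg_sim = torch.einsum("bd,bnd->bn", ctx, neg)    # (batch, num_neg)

    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    # The positive sits at index 0, so minimizing cross-entropy regresses the
    # context representation toward the positive while contrasting the negatives.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

In practice such a term would be added to the standard generation loss of the encoder-decoder model; the weighting between the two terms is left unspecified here.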