Learning to Write with Coherence From Negative Examples
Format: Journal Article
Language: English
Published: 22.09.2022
Summary: Coherence is one of the critical factors that determine the quality of writing. We propose a writing relevance (WR) training method for neural encoder-decoder natural language generation (NLG) models that improves the coherence of generated continuations by leveraging negative examples. The WR loss regresses the vector representation of the context and the generated sentence toward the positive continuation by contrasting it with negatives. We compare our approach with Unlikelihood (UL) training on a text continuation task over commonsense natural language inference (NLI) corpora to show which method better models coherence by avoiding unlikely continuations. Human evaluators' preference for our approach demonstrates its efficacy in improving coherence.
DOI: 10.48550/arxiv.2209.10922
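The abstract describes the WR loss only at a high level: a representation of the context and generated sentence is pulled toward the positive continuation and pushed away from negatives. As a rough illustration, here is a minimal sketch of a generic InfoNCE-style contrastive objective over sentence embeddings; the function name, the temperature, and the cosine-similarity scoring are assumptions for illustration, not the paper's exact WR formulation.

```python
import torch
import torch.nn.functional as F

def wr_style_contrastive_loss(ctx_emb, pos_emb, neg_embs, temperature=0.1):
    """InfoNCE-style contrastive loss over sentence embeddings (illustrative
    sketch, not the paper's exact WR loss).

    ctx_emb:  (B, D)    embedding of the context plus the generated sentence
    pos_emb:  (B, D)    embedding of the ground-truth (positive) continuation
    neg_embs: (B, K, D) embeddings of K negative continuations per example
    """
    ctx = F.normalize(ctx_emb, dim=-1)
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(neg_embs, dim=-1)

    pos_sim = (ctx * pos).sum(-1, keepdim=True)      # (B, 1) cosine to positive
    neg_sim = torch.einsum("bd,bkd->bk", ctx, neg)   # (B, K) cosine to negatives

    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature  # (B, 1+K)
    # The positive continuation always sits at index 0 of the logits.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (batch of 4, 3 negatives, 768-dim):
if __name__ == "__main__":
    B, K, D = 4, 3, 768
    loss = wr_style_contrastive_loss(torch.randn(B, D), torch.randn(B, D),
                                     torch.randn(B, K, D))
    print(loss.item())
```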