DiactTOD: Learning Generalizable Latent Dialogue Acts for Controllable Task-Oriented Dialogue Systems
Main Authors | Wu, Qingyang; Gung, James; Shu, Raphael; Zhang, Yi |
---|---|
Format | Journal Article |
Language | English |
Published | 01.08.2023 |
Subjects | Computer Science - Computation and Language |
Online Access | https://arxiv.org/abs/2308.00878 |
Abstract | Dialogue act annotations are important to improve response generation quality in task-oriented dialogue systems. However, it can be challenging to use dialogue acts to control response generation in a generalizable way because different datasets and tasks may have incompatible annotations. While alternative methods that utilize latent action spaces or reinforcement learning do not require explicit annotations, they may lack interpretability or face difficulties defining task-specific rewards. In this work, we present a novel end-to-end latent dialogue act model (DiactTOD) that represents dialogue acts in a latent space. When pre-trained on a large corpus, DiactTOD is able to predict and control dialogue acts to generate controllable responses using these latent representations in a zero-shot fashion. Our approach demonstrates state-of-the-art performance across a wide range of experimental settings on the MultiWOZ dataset, including zero-shot, few-shot, and full-data fine-tuning with both end-to-end and policy-optimization configurations. |
Author | Zhang, Yi; Wu, Qingyang; Gung, James; Shu, Raphael |
Copyright | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
DOI | 10.48550/arxiv.2308.00878 |
OpenAccessLink | https://arxiv.org/abs/2308.00878 |
SecondaryResourceType | preprint |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Computation and Language |