DiffNorm: Self-Supervised Normalization for Non-autoregressive Speech-to-speech Translation

Bibliographic Details
Published in: arXiv.org
Main Authors: Tan, Weiting; Zhang, Jingyu; Shen, Lingfeng; Khashabi, Daniel; Koehn, Philipp
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 21.10.2024
Summary: Non-autoregressive Transformers (NATs) have recently been applied in direct speech-to-speech translation systems, which convert speech across different languages without intermediate text data. Although NATs generate high-quality outputs and offer faster inference than autoregressive models, they tend to produce incoherent and repetitive results due to the complex data distribution of speech (e.g., acoustic and linguistic variations). In this work, we introduce DiffNorm, a diffusion-based normalization strategy that simplifies data distributions for training NAT models. After training with a self-supervised noise-estimation objective, DiffNorm constructs normalized target data by denoising synthetically corrupted speech features. Additionally, we propose regularizing NATs with classifier-free guidance, improving model robustness and translation quality by randomly dropping out source information during training. Our strategies yield a notable improvement of about +7 ASR-BLEU for English-Spanish (En-Es) and +2 ASR-BLEU for English-French (En-Fr) translation on the CVSS benchmark, while attaining over 14x speedup for En-Es and 5x speedup for En-Fr compared to autoregressive baselines.
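
To make the two training ideas in the summary concrete, the following is a minimal PyTorch sketch of (a) a DDPM-style self-supervised noise-estimation objective over speech features, used to denoise synthetically corrupted features into normalized targets, and (b) classifier-free-guidance-style source dropout. All names and hyperparameters here (NoiseEstimator, feat_dim=80, t_norm=30, p=0.15, the linear beta schedule) are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseEstimator(nn.Module):
    """Toy denoiser eps_theta(x_t, t): predicts the Gaussian noise mixed
    into corrupted speech features x_t at diffusion step t."""
    def __init__(self, feat_dim=80, hidden=256, num_steps=100):
        super().__init__()
        self.t_embed = nn.Embedding(num_steps, hidden)
        self.net = nn.Sequential(
            nn.Linear(feat_dim + hidden, hidden), nn.GELU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, x_t, t):
        h = self.t_embed(t)[:, None, :].expand(-1, x_t.size(1), -1)
        return self.net(torch.cat([x_t, h], dim=-1))

def noise_estimation_loss(model, x0, alphas_cumprod):
    """Self-supervised objective: corrupt clean features x0 at a random
    diffusion step, then train the model to recover the injected noise."""
    b = x0.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

@torch.no_grad()
def normalize_features(model, x0, alphas_cumprod, t_norm=30):
    """Normalization step (sketch): synthetically corrupt clean features
    to step t_norm, denoise them, and use the result as a simplified
    training target for the NAT model."""
    b = x0.size(0)
    t = torch.full((b,), t_norm, dtype=torch.long, device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * torch.randn_like(x0)
    eps_hat = model(x_t, t)
    # One-step estimate of the clean features from the predicted noise.
    return (x_t - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt()

def drop_source(src_embed, null_embed, p=0.15):
    """Classifier-free-guidance-style regularization: with probability p
    per example, replace the source conditioning with a null embedding."""
    keep = torch.rand(src_embed.size(0), 1, 1, device=src_embed.device) > p
    return torch.where(keep, src_embed, null_embed.expand_as(src_embed))

# Usage: linear beta schedule, one training step, one normalization pass.
betas = torch.linspace(1e-4, 0.02, 100)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
model = NoiseEstimator()
feats = torch.randn(4, 50, 80)  # (batch, frames, mel bins)
loss = noise_estimation_loss(model, feats, alphas_cumprod)
targets = normalize_features(model, feats, alphas_cumprod)
src = drop_source(torch.randn(4, 20, 256), torch.zeros(1, 1, 256))

In a full system, the dropped-source pathway typically also enables guidance at decoding time by mixing conditional and unconditional predictions; here it is shown only as the training-side regularizer described in the summary.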
ISSN: 2331-8422