Duplex Diffusion Models Improve Speech-to-Speech Translation
Format | Journal Article |
---|---|
Language | English |
Published | 21.05.2023 |
Summary: | Speech-to-speech translation is a typical sequence-to-sequence learning task
that naturally has two directions. How can bidirectional supervision signals be
leveraged effectively to produce high-fidelity audio in both directions?
Existing approaches either train two separate models or a single
multitask-learned model, at the cost of efficiency and performance. In this
paper, we propose a duplex diffusion model that applies diffusion probabilistic
models to both sides of a reversible duplex Conformer, so that either end can
simultaneously input and output a distinct language's speech. Our model enables
reversible speech translation simply by flipping the input and output ends.
Experiments show that our model achieves the first success of reversible speech
translation, with significant improvements in ASR-BLEU scores over a list of
state-of-the-art baselines. |
DOI: | 10.48550/arxiv.2305.12628 |
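The abstract's central claim is that translation direction can be reversed by "flipping the input and output ends", which presupposes network blocks that are invertible by construction. The paper's reversible duplex Conformer is not reproduced here; the following is only a minimal sketch of the underlying invertibility idea, using a hypothetical additive coupling block (names `ReversibleDuplexBlock`, `forward`, `reverse` are illustrative, not from the paper):

```python
import numpy as np

class ReversibleDuplexBlock:
    """Toy additive-coupling block: forward() maps x -> y and reverse()
    recovers x exactly, so the same weights serve both directions.
    Illustrative sketch only; the paper's duplex Conformer is far richer."""

    def __init__(self, dim, seed=0):
        # Coupling operates on two halves of the feature vector.
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim // 2, dim // 2)) * 0.1

    def forward(self, x):
        # Split features; shift the second half by a function of the first.
        a, b = np.split(x, 2, axis=-1)
        b = b + np.tanh(a @ self.W)  # additive coupling: invertible by construction
        return np.concatenate([a, b], axis=-1)

    def reverse(self, y):
        # Exact inverse: subtract the same shift.
        a, b = np.split(y, 2, axis=-1)
        b = b - np.tanh(a @ self.W)
        return np.concatenate([a, b], axis=-1)

block = ReversibleDuplexBlock(dim=8)
x = np.random.default_rng(1).standard_normal((4, 8))  # stand-in for speech features
y = block.forward(x)          # "source -> target" direction
x_rec = block.reverse(y)      # "target -> source" direction, same parameters
assert np.allclose(x, x_rec)  # reversal recovers the input exactly
```

The design point this illustrates: because each block is exactly invertible, one set of parameters supports both translation directions, instead of training two separate models.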