Many-to-Many Voice Conversion using Cycle-Consistent Variational Autoencoder with Multiple Decoders

Bibliographic Details
Published in: arXiv.org
Main Authors: Lee, Keonnyeong; Yoo, In-Chul; Yook, Dongsuk
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 02.02.2020
Summary: One of the obstacles in many-to-many voice conversion is the requirement of parallel training data, which consist of pairs of utterances with the same linguistic content spoken by different speakers. Since collecting such parallel data is highly expensive, many works have attempted to use non-parallel training data for many-to-many voice conversion. One such approach uses the variational autoencoder (VAE). Although the VAE can handle many-to-many voice conversion without parallel training data, VAE-based voice conversion methods suffer from low sound quality in the converted speech. A major reason is that the VAE learns only the self-reconstruction path; the conversion path is not trained at all. In this paper, we propose a cycle consistency loss for the VAE to explicitly learn the conversion path. In addition, we propose using multiple decoders to further improve the sound quality of conventional VAE-based voice conversion methods. The effectiveness of the proposed method is validated using both objective and subjective evaluations.
ISSN:2331-8422