Relational Data Selection for Data Augmentation of Speaker-dependent Multi-band MelGAN Vocoder

Bibliographic Details
Main Authors: Wu, Yi-Chiao; Hu, Cheng-Hung; Lee, Hung-Shin; Peng, Yu-Huai; Huang, Wen-Chin; Tsao, Yu; Wang, Hsin-Min; Toda, Tomoki
Format: Journal Article
Language: English
Published: 10.06.2021
More Information
Summary: Nowadays, neural vocoders can generate very high-fidelity speech when abundant training data are available. Although a speaker-dependent (SD) vocoder usually outperforms a speaker-independent (SI) vocoder, it is impractical to collect a large amount of data from a specific target speaker for most real-world applications. To tackle the problem of limited target data, this paper proposes a data augmentation method based on speaker representations and the similarity measurement used in speaker verification. The proposed method selects utterances whose speaker identity is similar to that of the target speaker from an external corpus, and then combines the selected utterances with the limited target data for SD vocoder adaptation. The evaluation results show that, compared with the vocoder adapted using only the limited target data, the vocoder adapted using the augmented data improves both the quality and the similarity of the synthesized speech.
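
Note: The record gives no implementation details beyond the summary above. As an illustration only, the following Python sketch shows one way an embedding-similarity selection step of this kind could look; it assumes speaker embeddings (e.g., d-vectors or x-vectors) have already been extracted, and the function names, centroid averaging, and top-k cutoff are assumptions, not the authors' actual method.

# Hypothetical sketch: select external-corpus utterances whose speaker
# embedding is closest to the target speaker, for augmentation.
# The embedding extractor, corpus layout, and top_k value are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_similar_utterances(target_embs, corpus, top_k=200):
    """Rank external-corpus utterances by similarity to the target speaker.

    target_embs: list of embeddings from the limited target data.
    corpus:      list of (utterance_id, embedding) pairs from an external corpus.
    Returns the top_k utterance ids closest to the mean target embedding.
    """
    target_centroid = np.mean(np.stack(target_embs), axis=0)
    scored = [(utt_id, cosine_similarity(emb, target_centroid))
              for utt_id, emb in corpus]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [utt_id for utt_id, _ in scored[:top_k]]

# Toy usage with random vectors standing in for real speaker embeddings.
rng = np.random.default_rng(0)
target = [rng.normal(size=256) for _ in range(10)]
external = [(f"utt_{i:04d}", rng.normal(size=256)) for i in range(1000)]
selected = select_similar_utterances(target, external, top_k=100)
# 'selected' would then be pooled with the target data for SD vocoder adaptation.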
DOI: 10.48550/arxiv.2106.05629