EGAN: A Neural Excitation Generation Model Based on Generative Adversarial Networks with Harmonics and Noise Input


Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1-5
Main Authors: Lin, Yen-Ting; Chiang, Chen-Yu
Format: Conference Proceeding
Language: English
Published: IEEE, 04.06.2023

Summary: This paper presents a speech synthesis method based on source-filter modeling. The source model is a neural excitation generator, trained with a GAN, that produces an excitation signal from harmonics and noise inputs conditioned on a spectral envelope derived from the WORLD vocoder. The filter model is an overlap-add FFT filter that generates synthesized speech from the excitation signal and the spectral envelope. The harmonic inputs are 32-channel sine waves at F0 and its harmonic overtones, providing bases for periodic impulse trains; the noise input is 3-channel Gaussian noise used to infer aperiodicity. The neural excitation generator filters the 35-channel harmonics-and-noise signal with cascaded neural filter blocks, amplifying each input channel by its instantaneous amplitude and adding noise to each channel conditioned on the spectral envelope. Because the excitation signal is generated from harmonics directly conditioned on F0, the proposed method is intrinsically flexible in F0 manipulation. Experimental results on the CMU ARCTIC and CSTR VCTK speech corpora demonstrate that the proposed method outperforms conventional speech synthesizers not only in F0 manipulation but also in maintaining good synthesis quality for unseen speakers.
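The summary fully specifies the shape of the model's inputs and its filter stage. The following is a minimal NumPy sketch of those two signal-level components: the 35-channel harmonics-plus-noise input (32 sine channels at k·F0 plus 3 Gaussian noise channels) and an overlap-add FFT filter that applies a per-frame spectral envelope to an excitation signal. Function names, frame parameters, and windowing choices are illustrative assumptions, not from the paper; the learned neural filter blocks are omitted entirely.

```python
import numpy as np

def harmonics_noise_input(f0, sr=16000, n_harmonics=32, n_noise=3, rng=None):
    """Build the 35-channel input described in the summary.

    f0: per-sample F0 contour in Hz, shape (T,).
    Returns an array of shape (n_harmonics + n_noise, T):
    sine waves at k*F0 for k = 1..32, then Gaussian noise channels.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Integrate F0 into an instantaneous phase so harmonics stay
    # coherent when F0 varies over time.
    phase = 2.0 * np.pi * np.cumsum(f0) / sr
    k = np.arange(1, n_harmonics + 1)[:, None]        # harmonic numbers, (32, 1)
    harmonics = np.sin(k * phase[None, :])            # (32, T)
    harmonics *= (k * f0[None, :] < sr / 2)           # zero out aliased harmonics
    noise = rng.standard_normal((n_noise, len(f0)))   # (3, T)
    return np.concatenate([harmonics, noise], axis=0)

def overlap_add_fft_filter(excitation, envelope, hop):
    """Apply a per-frame magnitude spectral envelope to an excitation
    signal by windowed FFT multiplication and overlap-add.

    envelope: (n_frames, n_fft // 2 + 1) magnitude spectra.
    """
    n_frames, n_bins = envelope.shape
    n_fft = 2 * (n_bins - 1)
    window = np.hanning(n_fft)
    out = np.zeros(hop * (n_frames - 1) + n_fft)
    for i in range(n_frames):
        start = i * hop
        seg = excitation[start:start + n_fft]
        if len(seg) < n_fft:                          # pad the final frame
            seg = np.pad(seg, (0, n_fft - len(seg)))
        spec = np.fft.rfft(seg * window) * envelope[i]
        out[start:start + n_fft] += np.fft.irfft(spec) * window
    return out
```

A flat-envelope call such as `overlap_add_fft_filter(exc, np.ones((10, 129)), hop=128)` passes the excitation through roughly unchanged up to the synthesis window, which makes the F0 flexibility plausible: changing the `f0` contour regenerates the harmonic bases directly, without retraining the filter stage.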
ISSN:2379-190X
DOI:10.1109/ICASSP49357.2023.10096801