Towards Improving NAM-to-Speech Synthesis Intelligibility using Self-Supervised Speech Models
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 26.07.2024 |
Summary: | We propose a novel approach to significantly improve intelligibility in the Non-Audible Murmur (NAM)-to-speech conversion task, leveraging self-supervision and sequence-to-sequence (Seq2Seq) learning techniques. Unlike conventional methods that explicitly record ground-truth speech, our methodology relies on self-supervision and speech-to-speech synthesis to simulate ground-truth speech. Despite utilizing simulated speech, our method surpasses the current state-of-the-art (SOTA) with a 29.08% improvement in the Mel-Cepstral Distortion (MCD) metric. Additionally, we report error rates and demonstrate our model's proficiency in synthesizing speech in novel voices of interest. Moreover, we present a methodology for augmenting the existing CSTR NAM TIMIT Plus corpus, setting a benchmark with a Word Error Rate (WER) of 42.57% to gauge the intelligibility of the synthesized speech. Speech samples can be found at https://nam2speech.github.io/NAM2Speech/ |
---|---|
DOI: | 10.48550/arxiv.2407.18541 |
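The abstract reports results in two standard metrics, Mel-Cepstral Distortion (MCD) and Word Error Rate (WER). The sketch below shows one common formulation of each; it is a generic illustration, not the paper's exact evaluation pipeline (the authors' variant may differ in coefficient range, alignment, or scaling):

```python
import numpy as np

def mel_cepstral_distortion(ref, syn):
    """Frame-averaged MCD in dB between two time-aligned mel-cepstral
    sequences of shape (frames, dims); the 0th (energy) coefficient
    is excluded by convention."""
    diff = ref[:, 1:] - syn[:, 1:]
    const = 10.0 / np.log(10.0) * np.sqrt(2.0)  # standard MCD scaling factor
    return float(np.mean(const * np.sqrt(np.sum(diff ** 2, axis=1))))

def word_error_rate(ref_words, hyp_words):
    """WER = word-level Levenshtein distance / reference length."""
    n, m = len(ref_words), len(hyp_words)
    d = np.zeros((n + 1, m + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(n + 1), np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,        # deletion
                          d[i, j - 1] + 1,        # insertion
                          d[i - 1, j - 1] + sub)  # substitution / match
    return d[n, m] / n

# identical inputs score perfectly on both metrics
frames = np.random.RandomState(0).randn(100, 25)
print(mel_cepstral_distortion(frames, frames))  # → 0.0
print(word_error_rate("the cat sat".split(),
                      "the cat sat".split()))   # → 0.0
```

Both functions assume the reference and synthesized sequences are already time-aligned; in practice, speech-synthesis evaluations typically align the cepstral sequences with dynamic time warping before computing MCD.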