Textless NLP -- Zero Resource Challenge with Low Resource Compute
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 24.09.2024 |
Summary: This work addresses the persistent challenges of substantial training time and GPU resource requirements even when training lightweight encoder-vocoder models for Textless NLP. We reduce training steps significantly while improving performance by a) leveraging learning rate schedulers for efficient and faster convergence, b) optimizing the hop length, and c) tuning the interpolation scale factors for better audio quality. Additionally, we explore the latent space representation of Indian languages such as Tamil and Bengali for the acoustic unit discovery and voice conversion tasks. Our approach leverages a quantized encoder architecture in conjunction with a vocoder that uses the proposed combination of an optimized hop length, tuned interpolation scale factors, and a cyclic learning rate scheduler. We obtain consistently good results across English, Tamil, and Bengali datasets. The proposed method excels at capturing complex linguistic patterns, yielding clear reconstructed audio during voice conversion with significantly reduced training time.
DOI: 10.48550/arxiv.2409.19015
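
As a rough illustration of the three levers named in the abstract (a cyclic learning rate schedule, the hop length, and the vocoder's interpolation scale factors), the sketch below wires PyTorch's built-in `CyclicLR` into a dummy training loop. The hop length, scale factors, learning-rate bounds, and cycle length are assumed placeholder values, not the paper's settings, and the linear layer merely stands in for the actual encoder-vocoder stack.

```python
import math
import torch
from torch import nn

# Illustrative values only -- the abstract does not report the actual
# hop length or interpolation scale factors used in the paper.
HOP_LENGTH = 256              # audio samples per encoder frame (assumed)
SCALE_FACTORS = (8, 8, 2, 2)  # per-stage vocoder upsampling rates (assumed)

# A vocoder must upsample each latent frame back into one hop of audio,
# so the interpolation scale factors have to multiply to the hop length.
assert math.prod(SCALE_FACTORS) == HOP_LENGTH

model = nn.Linear(128, 128)   # stand-in for the encoder-vocoder stack
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# Cyclic learning rate schedule: the LR oscillates between base_lr and
# max_lr, which the abstract credits with faster convergence in fewer
# training steps. Bounds and cycle length here are placeholders.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2,
    step_size_up=2000, mode="triangular2",
)

for step in range(10):        # skeleton of a training loop
    optimizer.zero_grad()
    loss = model(torch.randn(4, 128)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()          # advance the cyclic schedule every step
```

In many neural vocoders the per-stage upsampling rates are chosen so that their product equals the hop length, which is why the assertion above serves as a sanity check when tuning either quantity.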