On the Disentanglement and Robustness of Self-Supervised Speech Representations


Bibliographic Details
Published in: 2024 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-4
Main Authors: Song, Yanjue; Kim, Doyeon; Madhu, Nilesh; Kang, Hong-Goo
Format: Conference Proceeding
Language: English
Published: IEEE, 28.01.2024

Summary: This paper conducts an analysis of latent embeddings generated by a range of pre-trained, self-supervised learning (SSL) models. Departing from conventional practices that predominantly focus on examining these embeddings within the realm of speech recognition tasks, our study investigates the characteristics associated with speakers and their behavior under the influence of input distortions. We establish a controlled setting with varying background noise levels and different room impulse response conditions to assess the robustness of these embeddings. We measure speaker-related information by utilizing repetitive sentences spoken by multiple speakers. The results demonstrate that the robustness of pre-trained SSL models is influenced by the type and severity of distortion, whereas the inclusion of speaker information is determined by the specific pre-training approach employed. This distinct perspective offers valuable insights into the versatility and limitations of SSL models.
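The controlled setting described in the summary (additive background noise at varying levels, plus room impulse response conditions) can be sketched as follows. This is a minimal illustration of the two standard distortion operations, not the authors' actual pipeline; the function names, the toy sine "utterance", and the two-tap echo impulse response are all assumptions for the example.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db, then mix."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Gain that brings the noise power to the desired level relative to the speech.
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

def apply_rir(speech, rir):
    """Simulate a room by convolving the dry signal with an impulse response."""
    return np.convolve(speech, rir)[:len(speech)]

# Toy signals: a 220 Hz sine 'utterance' at 16 kHz, white noise,
# and a hypothetical two-tap echo as the room impulse response.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
rir = np.zeros(800)
rir[0], rir[400] = 1.0, 0.5

noisy = mix_at_snr(speech, noise, snr_db=5)   # background-noise condition
reverberant = apply_rir(speech, rir)          # reverberation condition
```

Sweeping `snr_db` and swapping in different impulse responses yields the grid of distortion conditions under which the SSL embeddings can then be compared.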
ISSN: 2767-7699
DOI: 10.1109/ICEIC61013.2024.10457271