On the Disentanglement and Robustness of Self-Supervised Speech Representations
Published in | 2024 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1 - 4 |
---|---|
Main Authors | , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 28.01.2024 |
Summary: | This paper analyzes the latent embeddings generated by a range of pre-trained, self-supervised learning (SSL) models. Departing from conventional practice, which predominantly examines these embeddings on speech recognition tasks, our study investigates the speaker-related characteristics they encode and their behavior under input distortions. We establish a controlled setting with varying background noise levels and different room impulse response conditions to assess the robustness of these embeddings, and we measure speaker-related information using sentences repeated by multiple speakers. The results demonstrate that the robustness of pre-trained SSL models depends on the type and severity of distortion, whereas the amount of speaker information they retain is determined by the specific pre-training approach employed. This distinct perspective offers valuable insights into the versatility and limitations of SSL models. |
---|---|
ISSN: | 2767-7699 |
DOI: | 10.1109/ICEIC61013.2024.10457271 |