General-Purpose Speech Representation Learning through a Self-Supervised Multi-Granularity Framework
Format | Journal Article
Language | English
Published | 03.02.2021
Summary: This paper presents a self-supervised learning framework, named MGF, for general-purpose speech representation learning. The design of MGF takes the speech hierarchy into consideration. Specifically, we propose to use generative learning approaches to capture fine-grained information at small time scales and discriminative learning approaches to distill coarse-grained or semantic information at large time scales. For phoneme-scale learning, we borrow the idea of the masked language model but tailor it to the continuous speech signal by replacing the classification loss with a contrastive loss. We corroborate our design by evaluating the MGF representation on various downstream tasks, including phoneme classification, speaker classification, speech recognition, and emotion classification. Experiments verify that training at different time scales needs different training targets and loss functions, which in general complement each other and lead to better performance.
DOI | 10.48550/arxiv.2102.01930
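The phoneme-scale objective described in the summary, masked prediction trained with a contrastive rather than a classification loss, can be illustrated with an InfoNCE-style sketch. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function name, tensor shapes, negative-sampling scheme (all other frames in the utterance), and temperature value are all assumptions introduced here.

```python
import numpy as np

def masked_contrastive_loss(pred, target, masked_idx, temperature=0.1):
    """InfoNCE-style loss for masked prediction over continuous frames.

    pred:       (T, D) model predictions, one vector per time frame
    target:     (T, D) reference frame representations
    masked_idx: positions that were masked; for each, the true frame is
                the positive and all other frames act as negatives
    """
    # L2-normalize so dot products become cosine similarities
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)

    losses = []
    for i in masked_idx:
        logits = pred[i] @ target.T / temperature  # similarity to every frame
        logits = logits - logits.max()             # numerical stability
        # Cross-entropy with the true frame i treated as the correct "class"
        log_prob = logits[i] - np.log(np.exp(logits).sum())
        losses.append(-log_prob)
    return float(np.mean(losses))
```

The contrastive form sidesteps the need for a discrete vocabulary: instead of classifying a masked position into one of a fixed set of tokens, the model only has to rank the true continuous frame above distractor frames. Predictions that match their targets yield a lower loss than random predictions.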