Self-Supervised Learning With Segmental Masking for Speech Representation

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Signal Processing, vol. 16, no. 6, pp. 1367-1379
Main Authors: Yue, Xianghu; Lin, Jingru; Gutierrez, Fabian Ritter; Li, Haizhou
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2022
Summary: Self-supervised learning has achieved remarkable success in learning speech representations from unlabeled data. The masking strategy plays an important role in the self-supervised learning algorithm. Most masking techniques operate at the frame level. In linguistics, the phone is the smallest unit of sound. Hence, we believe that a masking technique operating at the phoneme level will effectively encode the phonotactic and prosodic constraints of a spoken language, eventually benefiting downstream speech recognition tasks. In this work, we explore a novel segmental masking strategy. Specifically, we mask phonetically motivated speech segments according to the phonetic segmentation of an utterance. By doing so, we implicitly incorporate properties of a spoken language, such as phonotactic constraints and the duration of phonetic segments, into the pre-training. Through extensive experiments, we confirm that the segmental masking strategy consistently outperforms its frame-based counterpart. We also investigate the effect of the segmental masking unit size, i.e., phoneme, phoneme span, and lexical word. This work presents an important finding about masking strategies in speech representation learning.
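The core idea of the abstract translates into a simple masking routine: instead of choosing individual frames or fixed-length frame spans at random, whole phoneme-aligned segments are selected and masked together, so masked regions inherit the duration statistics of the phonetic units. The sketch below illustrates this, assuming segment boundaries are available from a forced aligner; the function name, segment format, and masking probability are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def segmental_mask(num_frames, segments, mask_prob=0.15, rng=None):
        """Build a boolean mask over frames by masking whole phonetic segments.

        num_frames: total number of acoustic frames in the utterance.
        segments:   list of (start, end) frame indices per phonetic segment
                    (e.g. from a forced aligner); end is exclusive.
        mask_prob:  probability of masking each segment.
        """
        rng = rng or np.random.default_rng()
        mask = np.zeros(num_frames, dtype=bool)
        for start, end in segments:
            # Frame-level masking would flip individual frames here;
            # segmental masking covers the whole phonetic unit at once.
            if rng.random() < mask_prob:
                mask[start:end] = True
        return mask

    # Hypothetical phonetic segmentation of a 50-frame utterance.
    segments = [(0, 7), (7, 15), (15, 24), (24, 31), (31, 42), (42, 50)]
    mask = segmental_mask(50, segments, mask_prob=0.5,
                          rng=np.random.default_rng(0))
    print(mask.astype(int))

Because segments differ in length, the masked spans vary with phone duration, which is how the pre-training implicitly picks up durational and phonotactic regularities; grouping consecutive segments before masking would give the phoneme-span or word-level variants the abstract mentions.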
ISSN: 1932-4553, 1941-0484
DOI: 10.1109/JSTSP.2022.3191845