Data Augmenting Contrastive Learning of Speech Representations in the Time Domain


Bibliographic Details
Published in: 2021 IEEE Spoken Language Technology Workshop (SLT), pp. 215 - 222
Main Authors: Kharitonov, Eugene; Riviere, Morgane; Synnaeve, Gabriel; Wolf, Lior; Mazare, Pierre-Emmanuel; Douze, Matthijs; Dupoux, Emmanuel
Format: Conference Proceeding
Language: English
Published: IEEE, 19.01.2021

Summary: Contrastive Predictive Coding (CPC), based on predicting future segments of speech from past ones, is emerging as a powerful algorithm for representation learning of the speech signal. However, it still under-performs compared to other methods on unsupervised evaluation benchmarks. Here, we introduce WavAugment, a time-domain data augmentation library which we adapt and optimize for the specificities of CPC (raw waveform input, contrastive loss, past versus future structure). We find that applying augmentation only to the segments from which the CPC prediction is performed yields better results than applying it also to the future segments from which the samples (both positive and negative) of the contrastive loss are drawn. After selecting the best combination of pitch modification, additive noise and reverberation on unsupervised metrics on LibriSpeech (with a gain of 18-22% relative on the ABX score), we apply this combination without any change to three new datasets in the Zero Resource Speech Benchmark 2017 and beat the state of the art using out-of-domain training data. Finally, we show that the data-augmented pretrained features improve a downstream phone recognition task in the Libri-light semi-supervised setting (10 min, 1 h or 10 h of labelled data), reducing the PER by 15% relative.
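The summary describes time-domain augmentation (pitch modification, additive noise, reverberation) applied only to the past segment used for CPC prediction, leaving the future segment untouched. The sketch below illustrates that idea; it is not the authors' WavAugment API but an assumed equivalent built on torchaudio's sox effects, with illustrative parameter values.

```python
# Minimal sketch (assumption: torchaudio sox effects stand in for WavAugment)
# of augmenting only the "past" window of a CPC example with
# pitch shift + reverberation + additive noise.
import torch
import torchaudio

SAMPLE_RATE = 16000  # assumed sampling rate

def augment_past(waveform: torch.Tensor) -> torch.Tensor:
    """Pitch-shift and reverberate via sox, then add white noise at ~15 dB SNR."""
    effects = [
        ["pitch", "200"],            # shift pitch up by 200 cents (illustrative value)
        ["rate", str(SAMPLE_RATE)],  # keep the original sampling rate
        ["reverb", "50"],            # mild reverberation (illustrative value)
        ["channels", "1"],           # fold back to mono after reverb
    ]
    augmented, _ = torchaudio.sox_effects.apply_effects_tensor(
        waveform, SAMPLE_RATE, effects
    )
    # Additive white noise at a fixed signal-to-noise ratio.
    snr_db = 15.0
    signal_power = augmented.pow(2).mean()
    noise = torch.randn_like(augmented)
    scale = torch.sqrt(signal_power / (noise.pow(2).mean() * 10 ** (snr_db / 10)))
    return augmented + scale * noise

# Per the paper's finding, only the context ("past") window is augmented;
# the future window, from which positives and negatives are drawn, is left intact.
past = torch.randn(1, SAMPLE_RATE)      # hypothetical 1-second past segment
future = torch.randn(1, SAMPLE_RATE)    # hypothetical 1-second future segment
past_augmented = augment_past(past)
```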
DOI:10.1109/SLT48900.2021.9383605