STOCHASTIC FUTURE CONTEXT FOR SPEECH PROCESSING

Bibliographic Details
Main Authors: Han, Kyu Jeong; Wu, Felix; Sridhar, Prashant; Kim, Kwangyoun
Format: Patent
Language: English
Published: 06.10.2022
Summary: The amount of future context used in a speech processing application allows for tradeoffs between performance and the delay in providing results to users. Existing speech processing applications may be trained with a specified future context size and perform poorly when used in production with a different future context size. A speech processing application trained using a stochastic future context allows a trained neural network to be used in production with different amounts of future context. During an update step in training, a future-context size may be sampled from a probability distribution and used to mask a neural network, and an output of the masked neural network may be computed. The output may then be used to compute a loss value and update the parameters of the neural network. The trained neural network may then be used in production with different amounts of future context, providing greater flexibility for production speech processing applications.
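The sampling-and-masking step described in the summary can be sketched as follows. The discrete distribution over future-context sizes, the mask construction, and the function names are illustrative assumptions for this sketch, not the patent's actual implementation:

```python
import random

def future_context_mask(num_frames, k):
    """mask[i][j] is True iff frame i may attend to frame j: all past
    frames plus up to k future frames (k = the future-context size)."""
    return [[j <= i + k for j in range(num_frames)]
            for i in range(num_frames)]

def sample_future_context(rng, sizes, weights):
    """Sample a future-context size from a discrete distribution;
    the specific sizes and weights here are assumptions."""
    return rng.choices(sizes, weights=weights, k=1)[0]

# One illustrative update step: sample k and build the mask. In training,
# the masked network's output would then drive the loss computation and
# the parameter update; in production, k can simply be fixed per request.
rng = random.Random(0)
k = sample_future_context(rng, sizes=[0, 2, 4], weights=[0.5, 0.3, 0.2])
mask = future_context_mask(5, k)
```

Because a fresh future-context size is drawn at each update, the trained network sees many lookahead sizes during training, which is what lets a single set of parameters be deployed with different amounts of future context.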
Bibliography: Application Number: US202117530139