Discriminative training of language models for speech recognition


Bibliographic Details
Published in: 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. I-325–I-328
Main Authors: Kuo, Hong-Kwang Jeff; Fosler-Lussier, Eric; Jiang, Hui; Lee, Chin-Hui
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2002
ISBN: 9780780374027, 0780374029
ISSN: 1520-6149
DOI: 10.1109/ICASSP.2002.5743720

Summary: In this paper we describe how discriminative training can be applied to language models for speech recognition. Language models are important to guide the speech recognition search, particularly in compensating for mistakes in acoustic decoding. A frequently used measure of the quality of language models is the perplexity; however, what is more important for accurate decoding is not necessarily having the maximum likelihood hypothesis, but rather the best separation of the correct string from the competing, acoustically confusable hypotheses. Discriminative training can help to improve language models for the purpose of speech recognition by improving the separation of the correct hypothesis from the competing hypotheses. We describe the algorithm and demonstrate modest improvements in word and sentence error rates on the DARPA Communicator task without any increase in language model complexity.
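To make the idea in the summary concrete, the following is a minimal sketch (not the paper's actual algorithm) of minimum-classification-error-style discriminative training of a language model: LM parameters are nudged so that the correct transcription outscores its best acoustically confusable competitor. The unigram parameterization, the toy vocabulary, the acoustic scores, and all hyperparameters are invented for illustration.

```python
import math

def sentence_lm_score(words, logprob, backoff=-6.0):
    # LM score of a sentence: sum of per-word log-probabilities,
    # with a flat backoff value for unseen words.
    return sum(logprob.get(w, backoff) for w in words)

def mce_update(logprob, correct, competitors, acoustic, lr=0.5, gamma=1.0):
    """One gradient-style MCE update: increase the separation between the
    correct hypothesis and the highest-scoring competing hypothesis."""
    def total(h):
        # Combined decoding score: acoustic log-likelihood + LM score.
        return acoustic[h] + sentence_lm_score(h.split(), logprob)

    best_comp = max(competitors, key=total)
    # Misclassification measure: positive when the competitor wins.
    d = total(best_comp) - total(correct)
    # Smooth sigmoid loss; its derivative weights the update.
    loss = 1.0 / (1.0 + math.exp(-gamma * d))
    grad = gamma * loss * (1.0 - loss)
    # Push up words of the correct string, push down the competitor's.
    # (Words shared by both hypotheses cancel out, as they should.)
    for w in correct.split():
        logprob[w] = logprob.get(w, -6.0) + lr * grad
    for w in best_comp.split():
        logprob[w] = logprob.get(w, -6.0) - lr * grad
    return loss

# Toy illustration: the acoustics slightly favor the wrong hypothesis,
# and discriminative updates to the LM flip the decision.
logprob = {"flights": -1.0, "to": -0.5, "boston": -2.0, "austin": -2.0}
acoustic = {"flights to boston": -10.0, "flights to austin": -9.5}
correct, competitors = "flights to boston", ["flights to austin"]
for _ in range(20):
    mce_update(logprob, correct, competitors, acoustic)
```

After training, "flights to boston" outscores "flights to austin" even though its acoustic score is worse, which is exactly the separation property the abstract emphasizes over raw perplexity.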