A hybrid statistical model to generate pronunciation variants of words

Bibliographic Details
Published in: Proceedings of 2005 IEEE International Conference on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE'05), Oct. 30 - Nov. 1, 2005, Wuhan, China, pp. 106-110
Main Authors: Vazirnezhad, B., Almasganj, F., Bijankhan, M.
Format: Conference Proceeding
Language: English
Published: IEEE, 2005

Summary: Generating pronunciation variants of words is an important topic in speech research and is used extensively in automatic speech segmentation and recognition systems. Decision trees are widely used to model pronunciation variants of words and sub-word units. For word units and a very large vocabulary, training the necessary decision trees requires a huge amount of speech data containing every needed word a sufficient number of times; besides demanding very large corpora, this approach also requires additional corpus material whenever new words are added. To solve these problems we use generalized decision trees, in which each tree is trained for a group of words with a similar phonemic structure rather than for a single word. These trees predict the regions of a word in which substitution, deletion, and insertion of phonemes are likely to occur. In the next step, statistical contextual rules extracted from a large speech corpus are applied to these regions to generate word variants. This new hybrid d-tree/c-rule approach takes word phonological structure, stress, and phone-context information into account simultaneously, and an ordinary-size speech corpus is sufficient to train its models. Using the word variants obtained by this method in the lexicon of "SHENAVA", a Persian ACSR, gave a relative WER reduction of up to 6%.
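
The abstract describes a two-stage pipeline: a generalized decision tree flags the regions of a word's canonical phoneme sequence where variation is likely, and corpus-derived contextual rules are then applied inside those regions to produce pronunciation variants for the recognizer lexicon. The Python sketch below is only an illustration of that idea, not the authors' implementation; the vowel-based region-flagging heuristic, the rule format, the probability threshold, and all names are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class ContextRule:
    target: str   # phoneme the rule rewrites
    left: str     # required left-context phoneme ("" = any)
    right: str    # required right-context phoneme ("" = any)
    output: str   # replacement phoneme ("" = deletion)
    prob: float   # probability assumed to be estimated from a speech corpus

def flag_variable_regions(phonemes):
    # Stand-in for the generalized decision tree: in the paper this would be a
    # tree trained over a group of phonemically similar words; here we simply
    # flag vowel positions as candidates for substitution or deletion.
    vowels = {"a", "e", "i", "o", "u"}
    return [i for i, p in enumerate(phonemes) if p in vowels]

def apply_context_rules(phonemes, regions, rules, min_prob=0.2):
    # Apply contextual rules only inside the flagged regions; every matching
    # rule above the probability threshold yields one pronunciation variant.
    variants = {" ".join(phonemes)}          # always keep the canonical form
    for i in regions:
        left = phonemes[i - 1] if i > 0 else ""
        right = phonemes[i + 1] if i + 1 < len(phonemes) else ""
        for r in rules:
            if r.target != phonemes[i] or r.prob < min_prob:
                continue
            if (r.left and r.left != left) or (r.right and r.right != right):
                continue
            variant = phonemes[:i] + ([r.output] if r.output else []) + phonemes[i + 1:]
            variants.add(" ".join(variant))
    return sorted(variants)

if __name__ == "__main__":
    # Toy rules and a toy phoneme sequence, purely for illustration.
    rules = [
        ContextRule(target="e", left="", right="r", output="i", prob=0.35),  # e -> i before r
        ContextRule(target="a", left="h", right="", output="", prob=0.25),   # a deleted after h
    ]
    canonical = ["h", "a", "s", "e", "r"]
    regions = flag_variable_regions(canonical)
    for variant in apply_context_rules(canonical, regions, rules):
        print(variant)

In the paper's setting, the flagged regions would come from decision trees trained over groups of phonemically similar words, and the rules' contexts and probabilities would be extracted from a large Persian speech corpus rather than hand-written as above.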
ISBN: 9780780393615, 0780393619
DOI: 10.1109/NLPKE.2005.1598716