Near-Optimal Active Learning for Multilingual Grapheme-to-Phoneme Conversion
Published in: Applied Sciences, Vol. 13, No. 16, p. 9408
Main Authors:
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2023
Summary: The data-driven construction of pronunciation dictionaries relies on high-quality, extensive training data. However, manually annotating a corpus for this purpose is both costly and time-consuming, especially for low-resource languages that lack sufficient data and resources. A multilingual pronunciation dictionary includes phonemes or phonetic units that are common across languages, meaning they are pronounced similarly in different languages and can be reused when building pronunciation dictionaries for low-resource languages. Such a dictionary therefore allows knowledge to be shared among languages, improving the quality and accuracy of pronunciation dictionaries for low-resource languages. In this paper, we propose using articulatory features shared among multiple languages to construct a universal phoneme set, which is then used to label words in multiple languages. To achieve this, we first developed a grapheme-to-phoneme (G2P) model based on an encoder-decoder deep neural network. We then adopted a near-optimal active learning method during dictionary construction to select informative samples from a large unlabeled corpus and have them labeled by experts. Our experiments demonstrate that this method selected about one-fifth of the unlabeled data yet achieved a higher conversion accuracy than training on the full dataset. By selectively labeling samples on which the model is highly uncertain, while avoiding samples the current model already predicts accurately, our method greatly improves the efficiency of pronunciation dictionary construction.
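The abstract does not spell out how the uncertain samples are chosen. The following is a minimal Python sketch of the general idea it describes: score each unlabeled word with the current G2P model, rank by uncertainty, and send only the least confident fraction to expert annotators. The names `score_fn` and `budget`, the length normalization, and the stand-in scorer are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of uncertainty-based active learning selection for G2P.
# Assumption: score_fn returns the log-probability of the model's best
# phoneme sequence for a word (e.g., the decoding score of an
# encoder-decoder model). Nothing here is taken from the paper's code.
import math
from typing import Callable, Iterable

def select_for_labeling(
    words: Iterable[str],
    score_fn: Callable[[str], float],  # log-prob of best phoneme sequence (assumed)
    budget: float = 0.2,               # fraction of the pool to label (~1/5, as in the abstract)
) -> list[str]:
    """Return the most uncertain words, up to budget * pool size."""
    pool = list(words)
    # Length-normalized confidence: longer words accumulate lower log-probs,
    # so divide by word length to compare sequences fairly.
    confidence = {w: score_fn(w) / max(len(w), 1) for w in pool}
    # Lowest confidence first = highest model uncertainty.
    ranked = sorted(pool, key=lambda w: confidence[w])
    k = max(1, math.ceil(budget * len(pool)))
    return ranked[:k]

if __name__ == "__main__":
    # Toy usage with fabricated scores; a real system would call the
    # trained G2P model here.
    fake_scores = {"cat": -0.1, "gnocchi": -4.2, "through": -2.8, "dog": -0.2}
    picked = select_for_labeling(fake_scores, lambda w: fake_scores[w], budget=0.5)
    print(picked)  # the two hardest words: ['gnocchi', 'through']
```

In an active learning loop, this selection step would alternate with retraining: label the selected words, add them to the training set, retrain the G2P model, and re-score the remaining pool.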
ISSN: 2076-3417
DOI: 10.3390/app13169408