Automatic Speech Recognition using the Melspectrogram-based method for English Phonemes

Bibliographic Details
Published in: 2022 International Conference on Computer, Power and Communications (ICCPC), pp. 270-273
Main Authors: Soundarya, M; Karthikeyan, P R; Ganapathy, Kirupa; Thangarasu, Gunasekar
Format: Conference Proceeding
Language: English
Published: IEEE, 14.12.2022
Summary: An automatic speech recognition (ASR) system may be configured to predict the pronunciation of textual identifiers (such as song names) based on assumptions about the language or languages in which the identifier was originally written. Custom acoustic-phonetic features are typically used to detect mispronunciation. This study examines the use of deep convolutional neural networks (CNNs), which are now widely employed in speech recognition systems, to identify mispronounced English phonemes in musical samples. A decoder-based architecture is proposed in which the spectrogram feature that best matches the acoustic characteristics is selected by comparing different inputs to the model. After the input features are selected, the study examines the design principles of the learning parameters and their application to speech recognition, and evaluates several learning models. Compared with existing works, the proposed method achieves better results, with 85% accuracy and a Word Error Rate of 8.1.
DOI: 10.1109/ICCPC55978.2022.10072076
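
The pipeline described in the summary (log-mel spectrogram features fed to a convolutional network that emits per-frame phoneme scores) can be illustrated with a minimal sketch. This is not the authors' implementation: the libraries (librosa, PyTorch), the sampling rate, the number of mel bands, the phoneme inventory size, and all layer dimensions are assumptions chosen for illustration only.

```python
# Minimal sketch (assumed, not the paper's code): log-mel spectrogram
# features passed through a small 2-D CNN that outputs per-frame phoneme
# logits, roughly matching the architecture described in the abstract.
import numpy as np
import librosa
import torch
import torch.nn as nn

SR = 16000          # assumed sampling rate
N_MELS = 80         # assumed number of mel bands
N_PHONEMES = 40     # assumed size of the English phoneme inventory

def log_mel(y: np.ndarray, sr: int = SR) -> torch.Tensor:
    """Compute a log-scaled mel spectrogram, shape (1, n_mels, frames)."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=400,
                                         hop_length=160, n_mels=N_MELS)
    return torch.from_numpy(librosa.power_to_db(mel)).float().unsqueeze(0)

class PhonemeCNN(nn.Module):
    """Small 2-D CNN over the spectrogram with a per-frame classifier head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),            # pool over frequency only
        )
        self.fc = nn.Linear(32 * (N_MELS // 2), N_PHONEMES)

    def forward(self, x):                    # x: (batch, 1, n_mels, frames)
        h = self.conv(x)                     # (batch, 32, n_mels/2, frames)
        h = h.permute(0, 3, 1, 2).flatten(2) # (batch, frames, features)
        return self.fc(h)                    # per-frame phoneme logits

if __name__ == "__main__":
    y = np.random.randn(SR).astype(np.float32)   # 1 s of dummy audio
    feats = log_mel(y).unsqueeze(0)              # (1, 1, n_mels, frames)
    logits = PhonemeCNN()(feats)
    print(logits.shape)                          # (1, frames, N_PHONEMES)
```

In a mispronunciation-detection setting, the per-frame logits would then be compared or aligned against the expected phoneme sequence of the text; that decoding and scoring step is omitted from this sketch.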