Simulation of English Speech Recognition Based on Improved Extreme Random Forest Classification

Bibliographic Details
Published in: Computational intelligence and neuroscience, Vol. 2022, pp. 1-10
Main Authors: Hao, Chunhui; Li, Yuan
Format: Journal Article
Language: English
Published: New York: Hindawi; John Wiley & Sons, Inc., 01.07.2022

Summary: Existing speech recognition systems target mainly mainstream audio types, devote little research to other language types, operate under relatively tight constraints, and do not achieve high recognition rates. How to use an efficient classifier to build a speech recognition system with a high recognition rate is therefore one of the current research focuses. Based on machine learning, this study improves an extreme random forest classification algorithm and uses it to build an English speech recognition model. The model applies a lightweight network and its improved variant to recognize speech signals, performing adaptive wavelet threshold shrinkage and denoising directly on the generated time-frequency images. In addition, the softmax layer of the lightweight AlexNet model is replaced with the EI strong classifier, which further improves recognition accuracy at low signal-to-noise ratios. Finally, experiments are designed to verify the model, and the results show that the constructed model performs well.
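The abstract outlines two concrete steps: adaptive wavelet threshold shrinkage of the time-frequency images, and an extremely randomized ("extreme") forest used in place of the softmax head of a lightweight AlexNet. The sketch below illustrates both ideas under stated assumptions only: PyWavelets soft thresholding with a universal threshold stands in for the paper's adaptive shrinkage, scikit-learn's ExtraTreesClassifier stands in for the EI strong classifier, and random vectors stand in for the CNN's penultimate-layer features. None of these names, shapes, or parameters come from the paper itself.

```python
# Hypothetical sketch of the pipeline described in the abstract, not the authors' code.
import numpy as np
import pywt
from sklearn.ensemble import ExtraTreesClassifier

def wavelet_denoise(tf_image, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients of a 2-D time-frequency image."""
    coeffs = pywt.wavedec2(tf_image, wavelet, level=level)
    # Universal threshold estimated from the finest-scale diagonal details (assumed scheme).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(tf_image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Dummy feature vectors and labels standing in for the penultimate-layer
# outputs of a lightweight AlexNet (sizes are illustrative assumptions).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 256))   # 200 utterances, 256-D features
labels = rng.integers(0, 10, size=200)   # 10 speech classes (assumed)

# Extremely randomized trees replacing the softmax classification head.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```

In this kind of setup, the trees-based head is trained on frozen CNN features, so it can be swapped in without retraining the convolutional layers; whether that matches the paper's EI strong classifier training procedure is not stated in the record.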
Academic Editor: Jun Ye
ISSN: 1687-5265
1687-5273
DOI: 10.1155/2022/1948159