Artificial Neural Network based Emotion Classification and Recognition from Speech

Bibliographic Details
Published in: International Journal of Advanced Computer Science & Applications, Vol. 11, No. 12
Main Authors: Iqbal, Mudasser; Ali, Syed; Abid, Muhammad; Majeed, Furqan; Ali, Ans
Format: Journal Article
Language: English
Published: West Yorkshire: Science and Information (SAI) Organization Limited, 01.12.2020

Summary: Emotion recognition from speech signals remains a challenging task, so proposing an efficient and accurate technique for speech-based emotion recognition is an important goal. This study focuses on recognizing four basic human emotions (sad, angry, happy, and normal) that can be detected through vocal expressions, enabling more efficient and productive machine behavior, using an artificial neural network. An effective model based on a Bayesian regularized artificial neural network (BRANN) is proposed for speech-based emotion recognition. The experiments are conducted on the well-known Berlin database, comprising 1470 speech samples that carry the basic emotions: 500 samples of angry, 300 samples of happy, 350 samples of a neutral state, and 320 samples of sad. Four speech features, frequency, pitch, amplitude, and formant, are used to recognize the four basic emotions. The performance of the proposed methodology is compared with that of state-of-the-art methodologies for emotion recognition from speech. The proposed methodology achieves 95% emotion recognition accuracy, the highest among the compared state-of-the-art techniques in this domain.
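The record does not include the authors' code. As a rough illustration only (not the paper's implementation), the sketch below extracts the four named cues with librosa and trains a small feed-forward network with scikit-learn; the MLP's L2 penalty (alpha) is merely a stand-in for Bayesian regularization, which scikit-learn does not provide, and the file names, labels, and parameter values are placeholders.

```python
# Illustrative sketch, not the authors' BRANN pipeline.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def extract_features(path):
    """Return [pitch, amplitude, frequency, first-formant] for one utterance."""
    y, sr = librosa.load(path, sr=None)
    # Pitch: median fundamental frequency from the YIN estimator.
    pitch = float(np.median(librosa.yin(y, fmin=60, fmax=400, sr=sr)))
    # Amplitude: mean short-time RMS energy.
    amplitude = float(np.mean(librosa.feature.rms(y=y)))
    # Frequency: mean spectral centroid as a coarse spectral cue.
    frequency = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    # Formant: rough first-formant estimate from the angles of LPC roots
    # (whole-utterance LPC, a simplification of frame-wise formant tracking).
    roots = np.roots(librosa.lpc(y, order=12))
    roots = roots[np.imag(roots) > 0]
    freqs = np.sort(np.angle(roots) * sr / (2.0 * np.pi))
    formant1 = float(freqs[0]) if len(freqs) else 0.0
    return [pitch, amplitude, frequency, formant1]

# Placeholder corpus: (wav_path, emotion_label) pairs, e.g. drawn from EMO-DB.
corpus = [("angry_001.wav", "angry"), ("happy_001.wav", "happy"),
          ("neutral_001.wav", "neutral"), ("sad_001.wav", "sad")]

X = np.array([extract_features(path) for path, _ in corpus])
y = np.array([label for _, label in corpus])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Small feed-forward network; alpha adds L2 weight decay in place of
# Bayesian regularization of the network weights.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), alpha=1e-2,
                  max_iter=2000, random_state=0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```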
ISSN: 2158-107X; 2156-5570
DOI: 10.14569/IJACSA.2020.0111253