Speech emotion recognition with deep convolutional neural networks

Bibliographic Details
Published in Biomedical Signal Processing and Control, Vol. 59, p. 101894
Main Authors Issa, Dias; Demirci, M. Fatih; Yazici, Adnan
Format Journal Article
Language English
Published Elsevier Ltd 01.05.2020

Summary:
•Sound files are represented effectively by combining various features.
•The framework sets a new state of the art on two speech emotion recognition datasets.
•For the third dataset (EMO-DB), the framework obtains the second-highest accuracy.
•The advantages of the framework are its simplicity, applicability, and generality.

Speech emotion recognition (or classification) is one of the most challenging topics in data science. In this work, we introduce a new architecture that extracts mel-frequency cepstral coefficients, chromagram, mel-scale spectrogram, Tonnetz representation, and spectral contrast features from sound files and uses them as inputs to a one-dimensional Convolutional Neural Network for identifying emotions, using samples from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets. We use an incremental method to modify our initial model and improve classification accuracy. Unlike some previous approaches, all of the proposed models work directly with raw sound data, without conversion to visual representations. Based on experimental results, our best-performing model outperforms existing frameworks on RAVDESS and IEMOCAP, setting a new state of the art. On the EMO-DB dataset, it outperforms all previous works except one, but compares favorably with that one in terms of generality, simplicity, and applicability. Specifically, the proposed framework obtains 71.61% on RAVDESS with 8 classes, 86.1% on EMO-DB with 535 samples in 7 classes, 95.71% on EMO-DB with 520 samples in 7 classes, and 64.3% on IEMOCAP with 4 classes in speaker-independent audio classification tasks. (A minimal code sketch of the described feature-extraction and 1-D CNN pipeline is given after the record below.)
ISSN: 1746-8094
1746-8108
DOI: 10.1016/j.bspc.2020.101894
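
The following is a minimal sketch of the kind of pipeline the abstract describes, assuming the five feature types are computed with librosa and classified by a Keras one-dimensional CNN. The feature dimensions (e.g., 40 MFCCs), layer sizes, and training settings are illustrative assumptions, not the authors' exact configuration.

# Sketch (not the authors' exact model): extract MFCC, chromagram, mel-scale
# spectrogram, spectral contrast, and Tonnetz features with librosa, average
# each over time, concatenate into one 1-D vector, and classify with a 1-D CNN.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def extract_features(path, sr=22050):
    """Load one sound file and return a fixed-length 1-D feature vector."""
    y, sr = librosa.load(path, sr=sr)
    stft = np.abs(librosa.stft(y))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)           # 40 values (assumed count)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)             # 12 values
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)             # 128 values (default bands)
    contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sr), axis=1)  # 7 values
    tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr), axis=1)  # 6 values
    return np.concatenate([mfcc, chroma, mel, contrast, tonnetz])                 # 193-dimensional vector

def build_model(n_features=193, n_classes=8):
    """Illustrative 1-D CNN over the concatenated feature vector (8 classes as in RAVDESS)."""
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.Dropout(0.2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage, with wav_paths and integer labels supplied by the caller:
# X = np.stack([extract_features(p) for p in wav_paths])[..., np.newaxis]
# model = build_model(n_classes=8)
# model.fit(X, labels, epochs=50, batch_size=16)

Averaging each feature over time yields a single fixed-length vector per file, which is one simple way to let a 1-D convolution operate directly on audio-derived features rather than on image-like spectrogram representations; the paper's actual architecture and hyperparameters may differ.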