Emotion detection in speech using deep networks
Published in | 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3724 - 3728 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.05.2014 |
Summary | We propose a novel staged hybrid model for emotion detection in speech. Hybrid models exploit the strength of discriminative classifiers along with the representational power of generative models. Discriminative classifiers have been shown to achieve higher performance than the corresponding generative likelihood-based classifiers. On the other hand, generative models learn rich, informative representations. Our proposed hybrid model consists of a generative model, which is used for unsupervised representation learning of short-term temporal phenomena, and a discriminative model, which is used for event detection and classification of long-range temporal dynamics. We evaluate our approach on multiple audio-visual datasets (AVEC, VAM, and SPD) and demonstrate its superiority compared to the state of the art. |
ISSN | 1520-6149, 2379-190X |
DOI | 10.1109/ICASSP.2014.6854297 |
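
The staged design described in the summary (a generative model trained without labels on short-term frame features, followed by a discriminative model over long-range temporal dynamics) can be illustrated with a minimal sketch. The specific choices below, an autoencoder for the generative stage, a GRU classifier for the discriminative stage, the layer sizes, and the synthetic data, are assumptions for illustration only, not the authors' exact architecture.

```python
# Minimal sketch of a staged hybrid pipeline (illustrative assumptions only):
# stage 1 learns frame-level representations without labels; stage 2 classifies
# long-range temporal dynamics over sequences of the learned representations.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Generative stage (assumed here to be a plain autoencoder)."""
    def __init__(self, n_features=39, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class SequenceClassifier(nn.Module):
    """Discriminative stage over long-range dynamics (assumed GRU classifier)."""
    def __init__(self, n_hidden=64, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(n_hidden, 32, batch_first=True)
        self.out = nn.Linear(32, n_classes)

    def forward(self, z_seq):
        _, h = self.rnn(z_seq)          # h: (1, batch, 32), last hidden state
        return self.out(h.squeeze(0))   # emotion-class logits

# Stage 1: unsupervised pre-training on unlabeled frame features.
frames = torch.randn(1024, 39)          # stand-in for short-term acoustic frames
ae = FrameAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(10):
    recon, _ = ae(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised training on encoded frame sequences.
clf = SequenceClassifier()
utterances = torch.randn(8, 200, 39)    # 8 utterances, 200 frames each
labels = torch.randint(0, 4, (8,))      # synthetic emotion labels
with torch.no_grad():
    z_seq = ae.encoder(utterances)      # frozen features from the generative stage
logits = clf(z_seq)
loss = nn.functional.cross_entropy(logits, labels)
```

The key design point the sketch reflects is the staging: the representation learner never sees labels, and the classifier operates only on the learned short-term representations, modeling the longer-range temporal structure separately.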