Application of deep belief networks in EEG-based dynamic music-emotion recognition
| Published in | 2016 International Joint Conference on Neural Networks (IJCNN), pp. 881-888 |
| --- | --- |
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.07.2016 |
Summary: Estimating emotional states in music listening from the electroencephalogram (EEG) has attracted researchers' attention over the past decade. Although deep belief networks (DBNs) have seen success in various domains, including early work on EEG-based emotion recognition, it remains unclear whether DBNs can improve emotion classification in the music domain, especially with a dynamic strategy that accounts for the time-varying characteristics of emotion. This paper presents an early study applying DBNs to improve emotion recognition in music listening, where subjects annotated their emotions continuously in time. Our subject-dependent results, obtained with a stratified 10-fold cross-validation strategy, suggest that DBNs can improve valence classification with fractal dimension (FD), power spectral density (PSD), and discrete wavelet transform (DWT) features, and arousal classification with FD and DWT features. Furthermore, the size of the sliding window affected classification accuracy for features in the time (FD) and time-frequency (DWT) domains, with smaller windows (1-4 seconds) achieving higher performance than larger windows (5-8 seconds).
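As a rough illustration of the evaluation protocol described in the summary (sliding-window EEG features classified under stratified 10-fold cross-validation), the following is a minimal Python sketch. It uses Welch-based PSD band powers on synthetic data and an MLP classifier as a stand-in for a DBN, since scikit-learn does not provide a full deep belief network; the sampling rate, window length, channel count, frequency bands, and variable names are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of the evaluation protocol: PSD features from sliding
# EEG windows, classified under stratified 10-fold cross-validation.
# An MLP stands in for the DBN; all constants below are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 128          # assumed sampling rate (Hz)
WINDOW_SEC = 2    # sliding-window length; the paper compares 1-8 s windows
N_CHANNELS = 8    # assumed number of EEG channels

def psd_band_powers(window, fs=FS):
    """Average PSD in standard EEG bands for each channel of one window."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
    feats = []
    for ch in window:                      # window shape: (channels, samples)
        freqs, psd = welch(ch, fs=fs, nperseg=min(len(ch), fs))
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[mask].mean())
    return np.asarray(feats)

# Synthetic stand-in data: 400 windows with binary valence labels.
rng = np.random.default_rng(0)
eeg_windows = rng.standard_normal((400, N_CHANNELS, FS * WINDOW_SEC))
valence = rng.integers(0, 2, size=400)     # 0 = low, 1 = high valence

X = np.vstack([psd_band_powers(w) for w in eeg_windows])

# MLP as a rough stand-in for a DBN (two hidden layers, no RBM pretraining).
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, valence, cv=cv)
print(f"Mean 10-fold accuracy: {scores.mean():.3f}")
```

A fuller reproduction of the study's pipeline would also extract FD and DWT features per window, train an actual stacked-RBM DBN with greedy layer-wise pretraining, and repeat the evaluation per subject and per window length (1-8 seconds); this sketch only shows the window-feature and cross-validation scaffolding.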
ISSN: 2161-4407
DOI: 10.1109/IJCNN.2016.7727292