Fusion of EEG and Musical Features in Continuous Music-emotion Recognition
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 30.11.2016 |
Summary: | Emotion estimation in music listening faces the challenge of capturing the emotion variation of listeners. Recent years have witnessed attempts to exploit multimodality, fusing information from musical content and physiological signals captured from listeners, to improve the performance of emotion recognition. In this paper, we present a study of decision-level fusion of electroencephalogram (EEG) signals, a tool for capturing brainwaves at high temporal resolution, and musical features in recognizing the time-varying binary classes of arousal and valence. Our empirical results showed that the fusion could outperform emotion recognition using the EEG modality alone, which suffered from inter-subject variability, suggesting the promise of multimodal fusion for improving the accuracy of music-emotion recognition. |
---|---|
DOI: | 10.48550/arxiv.1611.10120 |
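The abstract describes combining the two modalities at decision level, i.e. each classifier (EEG, musical features) produces its own per-window prediction and the outputs are merged afterward. The record does not state the fusion rule used in the paper; the sketch below shows one common choice, a weighted average of class probabilities thresholded into the binary arousal/valence label. The function name, weights, and probability values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_decisions(p_eeg, p_music, w_eeg=0.5):
    """Decision-level fusion (illustrative): weighted average of
    per-window class probabilities from an EEG classifier and a
    musical-feature classifier, thresholded at 0.5 to yield the
    binary (e.g. high/low arousal) label for each time window."""
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_music = np.asarray(p_music, dtype=float)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_music
    return (fused >= 0.5).astype(int)

# Hypothetical per-window probabilities of the "high arousal" class
p_eeg = [0.8, 0.4, 0.6, 0.2]
p_music = [0.9, 0.7, 0.3, 0.1]
print(fuse_decisions(p_eeg, p_music))  # -> [1 1 0 0]
```

Weighting the modalities (rather than hard-voting on labels) lets a fusion scheme down-weight the EEG stream for subjects where it is unreliable, which is one way such a fusion can mitigate the inter-subject variability the abstract mentions.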