Emotion Recognition With Knowledge Graph Based on Electrodermal Activity
| Published in | Frontiers in Neuroscience, Vol. 16, p. 911767 |
| --- | --- |
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Lausanne: Frontiers Research Foundation / Frontiers Media S.A., 09.06.2022 |
Summary: | The electrodermal activity (EDA) sensor is an emerging non-invasive device in affect-detection research, used to measure the electrical activity of the skin. Knowledge graphs are an effective way to learn representations from data. However, few studies have analyzed the effect of knowledge-related graph features combined with physiological signals when subjects are in dissimilar mental states. In this paper, we propose a model that uses deep learning techniques to classify the emotional responses of individuals from physiological datasets, with the aim of improving emotion recognition based on EDA signals. The proposed framework embeds observed gender and age information as feature vectors, and we also extract time- and frequency-domain EDA features in line with cognitive studies. We then introduce a weighted feature fusion method that combines knowledge-embedding feature vectors and statistical feature (SF) vectors for emotional-state classification, and we use deep neural networks to optimize our approach. The results indicate that the correct combination of Gender-Age Relation Graph (GARG) and SF vectors improves the performance of the valence-arousal emotion recognition system by 4 and 5% on the PAFEW dataset and by 3 and 2% on the DEAP dataset. |
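The abstract describes fusing a knowledge-embedding vector (from the Gender-Age Relation Graph) with a statistical-feature vector extracted from the EDA signal before classification. The paper's exact fusion rule is not given here, so the sketch below is only a minimal illustration of weighted feature fusion under assumed names: `weighted_fusion`, the weight `alpha`, and the example feature values are all hypothetical, not the authors' implementation.

```python
# Illustrative sketch of weighted feature fusion (assumed form, not the
# paper's exact method): each side is scaled by a weight, then the two
# vectors are concatenated into one input for a downstream classifier.
import numpy as np

def weighted_fusion(knowledge_vec, sf_vec, alpha=0.5):
    """Scale the knowledge-embedding and statistical-feature vectors,
    then concatenate them into a single fused feature vector."""
    k = np.asarray(knowledge_vec, dtype=float)
    s = np.asarray(sf_vec, dtype=float)
    return np.concatenate([alpha * k, (1.0 - alpha) * s])

# Hypothetical example: a 4-dim GARG-style embedding fused with
# 3 EDA statistics (e.g., mean skin conductance level, SCR count, std).
fused = weighted_fusion([0.2, -0.1, 0.5, 0.3], [1.8, 6.0, 0.4], alpha=0.6)
print(fused.shape)  # (7,)
```

In practice the fused vector would be fed to the deep neural network mentioned in the abstract; the weight could equally be learned jointly with the network rather than fixed.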
Bibliography: | Edited by: Chee-Kong Chui, National University of Singapore, Singapore. Reviewed by: Mohammad Khosravi, Persian Gulf University, Iran; Yizhang Jiang, Jiangnan University, China. This article was submitted to Perception Science, a section of the journal Frontiers in Neuroscience. |
ISSN: | 1662-4548 (print), 1662-453X (electronic) |
DOI: | 10.3389/fnins.2022.911767 |