Multi-modal fusion learning through biosignal, audio, and visual content for detection of mental stress

Bibliographic Details
Published in: Neural Computing & Applications, Vol. 35, No. 34, pp. 24435–24454
Main Authors: Dogan, Gulin; Akbulut, Fatma Patlar
Format: Journal Article
Language: English
Published: London: Springer London, 01.12.2023 (Springer Nature B.V.)
Summary: Mental stress is a significant risk factor for several maladies and can negatively impact a person's quality of life, including their work and personal relationships. Traditional methods of detecting mental stress through interviews and questionnaires may not capture individuals' instantaneous emotional responses. In this study, the experience sampling method was used to analyze participants' immediate affective responses, providing a more comprehensive and dynamic understanding of their experiences. The WorkStress3D dataset was compiled from 20 participants across three distinct modalities. Over an average of one week, 175 h of data were collected from a single subject, comprising physiological signals such as BVP, EDA, and body temperature, as well as facial expressions and audio. We present a novel fusion model that uses a double early-fusion approach to combine data from multiple modalities. The model's F1 score of 0.94 with a loss of 0.18 is very encouraging, showing that it can accurately identify and classify varying degrees of stress. We also investigate transfer learning techniques to improve the efficacy of our stress detection system; despite our efforts, we were unable to surpass the fusion model. Transfer learning reached an accuracy of 0.93 with a loss of 0.17, illustrating the difficulty of adapting pre-trained models to the task of stress analysis. These results emphasize the significance of multi-modal fusion in stress detection and the importance of selecting a model architecture suited to the task. The proposed fusion model demonstrates its potential for accurate and robust stress classification. This research contributes to the field of stress analysis and to the development of effective models for stress detection.
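The early-fusion idea summarized above amounts to feature-level concatenation: each modality is reduced to a feature vector per time window, and the vectors are joined into one input for a single classifier. The sketch below illustrates that step only; it is not the authors' implementation, and the modality names, window count, and feature dimensions are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-window feature matrices for the three modalities
# (shapes are illustrative assumptions, not taken from the paper).
n_windows = 8
biosignal = rng.normal(size=(n_windows, 16))  # e.g. BVP/EDA/temperature statistics
audio     = rng.normal(size=(n_windows, 32))  # e.g. spectral features
visual    = rng.normal(size=(n_windows, 64))  # e.g. facial-expression embeddings

def early_fuse(*modalities):
    """Feature-level (early) fusion: z-normalize each modality per feature,
    then concatenate along the feature axis so one classifier sees a single
    joint vector per time window."""
    normed = [(m - m.mean(axis=0)) / (m.std(axis=0) + 1e-8) for m in modalities]
    return np.concatenate(normed, axis=1)

fused = early_fuse(biosignal, audio, visual)
print(fused.shape)  # (8, 112): 16 + 32 + 64 features per window
```

In practice, the fused matrix would be fed to a downstream classifier; a "double" early fusion could apply this concatenation at two stages, e.g. first within related signal groups and then across all modalities.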
ISSN: 0941-0643
eISSN: 1433-3058
DOI: 10.1007/s00521-023-09036-4