Multimodal Prediction of Alexithymia from Physiological and Audio Signals
Published in | 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1-8 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 10.09.2023 |
Subjects | |
Summary | Alexithymia is a trait that reflects a person's difficulty in recognising and expressing their emotions, and it has been associated with various forms of mental illness. Identifying alexithymia can have therapeutic, preventive, and diagnostic benefits. However, there has been limited research on predictive models for alexithymia, and the literature on multimodal approaches is almost non-existent. In this light, we present a novel predictive framework that utilises multimodal physiological and audio signals, such as heart rate, skin conductance level, facial electromyograms, and speech recordings, to detect and classify alexithymia. To this end, two novel datasets were collected through an emotion-processing imagery experiment and subsequently used for alexithymia classification based on the TAS-20 (Toronto Alexithymia Scale). Furthermore, we developed a set of temporal features that capture spectral information while remaining localised in the time domain (e.g., via wavelets). Using the extracted features, simple machine learning classifiers can be applied within the proposed framework, achieving an F1-score of up to 96%, even when using data from only one of the 12 stages of the experiment. Interestingly, we also find that combining auditory and physiological features in a multimodal manner further improves classification outcomes. The datasets are made available on request via the provided GitHub link. |
DOI | 10.1109/ACIIW59127.2023.10388211 |
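
The abstract describes a pipeline of wavelet-based temporal/spectral features extracted from multimodal signals, fed to simple classifiers trained against TAS-20-derived labels and evaluated with the F1-score. The sketch below illustrates one possible realisation of that kind of pipeline; it is not the authors' implementation, and the function names, synthetic data, and the TAS-20 cutoff of 61 (a commonly cited threshold) are assumptions made purely for illustration.

```python
# Illustrative sketch only: wavelet log-energy features per modality,
# concatenated into one vector per participant, then a simple SVM
# classifier scored with cross-validated F1. All data here is synthetic.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def extract_wavelet_features(signal, wavelet="db4", level=4):
    """Per-level log-energies of a discrete wavelet decomposition:
    spectral information that remains localised in time."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])


def featurise_participant(signals):
    """Concatenate wavelet features from each modality
    (e.g. heart rate, skin conductance, facial EMG, speech)."""
    return np.concatenate([extract_wavelet_features(s) for s in signals.values()])


# Synthetic stand-in data: 40 participants, 4 modalities, 1,024 samples each.
rng = np.random.default_rng(0)
participants = [
    {m: rng.standard_normal(1024) for m in ("heart_rate", "scl", "emg", "audio")}
    for _ in range(40)
]
tas20_scores = rng.integers(30, 90, size=40)

# Binary alexithymia label from an assumed TAS-20 cutoff of 61.
X = np.vstack([featurise_participant(p) for p in participants])
y = (tas20_scores >= 61).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("mean F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```

With real per-stage recordings in place of the synthetic arrays, the same structure would allow the classifier to be trained and evaluated on data from a single experimental stage, as the abstract reports.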