Acoustic Characterization of Huntington's Disease Emotional Expression: An Explainable AI Approach

Bibliographic Details
Published in: 2024 12th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 247-255
Main Authors: Chenain, Lucie; Bachoud-Lévi, Anne-Catherine; Clavel, Chloé
Format: Conference Proceeding
Language: English
Published: IEEE, 15.09.2024

Summary: Huntington's Disease (HD) is a neurodegenerative disorder characterized by motor, cognitive, and psychiatric symptoms. The first studies on the emotional behavior of HD patients, based on speech emotion recognition (SER) models, have only recently emerged. Following this line of research, we study in this article two as yet unexplored questions: 1. the ability of HD individuals to express emotions through emotional vocal narration; 2. the nuances of acoustic expression across various emotions in HD patients. In the first experiment, emotional stories narrated by HD, premanifest HD (preHD), and control individuals were rated to analyze the patients' ability to follow the given instructions. HD subjects showed difficulties in telling happy and angry stories compared to control subjects, while preHD subjects showed difficulties only in telling angry stories. The second experiment analyzes the acoustic expression of emotions in the patients: emotion classifiers were developed, and a Shapley Additive exPlanations (SHAP) analysis was performed to gain insight into the groups' differing acoustic expressions of emotion. The classifiers performed satisfactorily, except for anger in HD and joy in preHD, indicating that the acoustic expression of these emotions is harder to perceive even when the subjects were able to follow the given instructions. The SHAP analysis showed that the features important for predicting emotions differ from one group to another. This study contributes to a deeper understanding of emotional speech narration in HD and preHD individuals, highlighting not only deviations from controls but also uncovering specific acoustic patterns indicative of emotional expression in the HD group.
DOI: 10.1109/ACIIW63320.2024.00052
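
Illustration: the summary describes training emotion classifiers on acoustic features and then ranking, via SHAP, which features drive each emotion's prediction per group. The paper does not publish code; the following is a minimal, hypothetical Python sketch of that kind of per-class SHAP ranking. The feature names, random data, emotion labels, and choice of a random-forest classifier are all assumptions for illustration, not the authors' pipeline.

# Minimal sketch (assumptions labeled): per-class SHAP feature ranking for
# an acoustic emotion classifier. Data, features, and model are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-utterance acoustic features (e.g., pitch, loudness, voice
# quality); in the paper these would come from the recorded narrations.
feature_names = ["f0_mean", "f0_std", "loudness_mean", "jitter", "shimmer", "hnr"]
X = rng.normal(size=(200, len(feature_names)))
y = rng.integers(0, 3, size=200)  # hypothetical labels: 0=neutral, 1=joy, 2=anger

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature Shapley values for each prediction.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X)
if isinstance(sv, list):        # older shap: one (n_samples, n_features) array per class
    sv = np.stack(sv, axis=-1)  # normalize to (n_samples, n_features, n_classes)

# Mean |SHAP| over samples ranks feature importance separately per emotion.
emotions = ["neutral", "joy", "anger"]
for c, name in enumerate(emotions):
    importance = np.abs(sv[:, :, c]).mean(axis=0)
    top = np.argsort(importance)[::-1][:3]
    print(name, "->", [feature_names[i] for i in top])

Running such a ranking separately on models trained for HD, preHD, and control speakers is, in spirit, how group-specific differences in important acoustic features, like those reported in the summary, can be compared.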