Breaking the Silence: Whisper-Driven Emotion Recognition in AI Mental Support Models

Bibliographic Details
Published in: 2024 IEEE Conference on Artificial Intelligence (CAI), pp. 290 - 291
Main Authors: Qu, Xinghua; Sun, Zhu; Feng, Shanshan; Chen, Caishun; Tian, Tian
Format: Conference Proceeding
Language: English
Published: IEEE, 25.06.2024

Summary: Most emotional support conversations (ESCs) currently rely on text-based interfaces, which may not be user-friendly, especially for individuals with visual impairments or those who struggle with reading and writing. We therefore present a personalized voice-based ESC system powered by large language models (LLMs). It analyzes emotional status from vocal user inputs, providing insights that text-based methods cannot capture and enabling the LLM-driven chatbot to offer more tailored and effective emotional support to its users. Our code is available at https://github.com/xinghua-qu/speech_emotion_recognition
DOI: 10.1109/CAI59869.2024.00063
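
The title and repository name suggest a Whisper-based speech emotion recognition front end feeding the LLM chatbot. The sketch below is a minimal illustration of that idea, assuming Whisper encoder features are mean-pooled and passed to a small classification head; the checkpoint name, emotion label set, and classifier are assumptions for illustration, not the paper's implementation.

    # Minimal sketch (assumption): Whisper encoder features + a linear emotion head.
    # Not the paper's implementation; model name, labels, and head are illustrative.
    import torch
    import torch.nn as nn
    from transformers import WhisperFeatureExtractor, WhisperModel

    EMOTIONS = ["neutral", "happy", "sad", "angry"]  # illustrative label set

    class WhisperEmotionClassifier(nn.Module):
        def __init__(self, whisper_name="openai/whisper-base", num_labels=len(EMOTIONS)):
            super().__init__()
            # Reuse only the Whisper encoder as the acoustic feature extractor.
            self.encoder = WhisperModel.from_pretrained(whisper_name).encoder
            self.head = nn.Linear(self.encoder.config.d_model, num_labels)

        def forward(self, input_features):
            # input_features: (batch, n_mels, frames) log-mel spectrogram
            hidden = self.encoder(input_features).last_hidden_state  # (batch, time, d_model)
            pooled = hidden.mean(dim=1)                              # average over time
            return self.head(pooled)                                 # emotion logits

    if __name__ == "__main__":
        extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
        model = WhisperEmotionClassifier()
        waveform = torch.randn(16000).numpy()  # 1 s of 16 kHz audio as a stand-in for user speech
        inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
        with torch.no_grad():
            logits = model(inputs.input_features)
        print("predicted emotion:", EMOTIONS[int(logits.argmax(dim=-1))])

In a full system of the kind the abstract describes, the predicted emotion (or the pooled embedding itself) would be passed as additional context to the LLM so that its responses can be conditioned on the user's vocal emotional state.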