Breaking the Silence: Whisper-Driven Emotion Recognition in AI Mental Support Models
Most emotional support conversations (ESCs) currently rely on text-based interfaces, which may not be user-friendly, especially for individuals with visual impairments or those who struggle with reading and writing. Thus, we present a personalized voice-based ESC system powered by large language models (LLMs). It can analyze emotional status from vocal user inputs, which provides deep insights that text-based methods cannot, enabling the LLM-driven chatbot to offer more tailored and effective emotional support to its users. Our code is available at https://github.com/xinghua-qu/speech_emotion_recognition
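The abstract describes a pipeline that couples speech emotion recognition with an LLM chatbot: the user's voice is transcribed, the vocal emotion is classified, and both signals condition the supportive response. A minimal sketch of that flow is below; every function name, the stub return values, and the emotion label are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of a voice-based emotional-support pipeline:
# audio -> transcript + vocal emotion label -> emotion-conditioned LLM prompt.
# All names and values here are assumptions for illustration only.

def transcribe(audio: bytes) -> str:
    """Stand-in for an ASR model such as Whisper; returns a fixed example."""
    return "I have been feeling overwhelmed at work lately."

def classify_emotion(audio: bytes) -> str:
    """Stand-in for a speech emotion classifier over vocal features."""
    return "stressed"

def build_prompt(transcript: str, emotion: str) -> str:
    """Condition the chatbot on both the words and the detected vocal emotion."""
    return (
        f"The user sounds {emotion}. "
        "Respond with empathetic, supportive language.\n"
        f"User: {transcript}"
    )

def respond(audio: bytes) -> str:
    """End-to-end step: transcribe, detect emotion, build the conditioned prompt."""
    transcript = transcribe(audio)
    emotion = classify_emotion(audio)
    # A real system would send this prompt to an LLM; the sketch returns it
    # to show how the vocal emotion signal reaches the chatbot.
    return build_prompt(transcript, emotion)

print(respond(b"\x00"))
```

The key design point the paper highlights is that the emotion label comes from the audio itself, so the chatbot receives paralinguistic cues (tone, stress) that a text-only interface would discard.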
Published in | 2024 IEEE Conference on Artificial Intelligence (CAI), pp. 290-291 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 25.06.2024 |
DOI | 10.1109/CAI59869.2024.00063 |