Intertwining Two Artificial Minds: Chaining GPT and RoBERTa for Emotion Detection
| Published in | 2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), pp. 1-6 |
|---|---|
| Main Authors | , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 04.12.2023 |
| DOI | 10.1109/CSDE59766.2023.10487718 |
Summary: Emotion detection is a growing field focused on developing machine learning models for classifying emotions. Identifying human emotions is paramount to understanding the deeper meaning of data and is essential for many applications that interact with people. The advent of Large Language Models such as GPT and RoBERTa has enabled practitioners to understand textual data in more detail than before. In emotion detection, GPT models excel at nuanced emotion detection and generation but lack the accuracy of fine-tuned models, which benefit from the domain-specific knowledge incorporated through training. This paper presents a novel framework combining the two types of models: it leverages the generative strength of GPT models for synthetic data augmentation on minority classes to improve the accuracy of a fine-tuned RoBERTa model. RoBERTa, GPT-3.5-turbo, and GPT-4, evaluated on the GoEmotions dataset, achieved Macro-F1 scores of 0.49, 0.17, and 0.22, respectively. Our best-performing LangChain GPT-3.5 + RoBERTa model with synthetic data augmentation on minority classes achieved a Macro-F1 score of 0.51.
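The framework described in the summary lends itself to a short sketch. The following is a minimal illustration of the augmentation idea, not the authors' code: GPT-3.5-turbo is prompted for synthetic examples of under-represented GoEmotions labels, and RoBERTa is then fine-tuned on the augmented training set. The loader `load_goemotions_train`, the `label_names` mapping, the minority-label list, the prompt, and all hyperparameters are illustrative assumptions.

```python
# Sketch of GPT-based minority-class augmentation + RoBERTa fine-tuning.
# Not the paper's implementation; names marked "hypothetical" are assumptions.
from openai import OpenAI
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
import torch

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_synthetic(label: str, n: int = 20) -> list[str]:
    """Ask GPT-3.5-turbo for n short texts expressing a minority emotion."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (f"Write {n} short Reddit-style comments, one per line, "
                        f"that clearly express the emotion '{label}'."),
        }],
    )
    return [line.strip()
            for line in resp.choices[0].message.content.splitlines()
            if line.strip()]


# Augment only the under-represented labels (assumed minority classes).
minority_labels = ["grief", "relief", "pride"]
texts, labels, label_names = load_goemotions_train()  # hypothetical loader
for i, name in enumerate(label_names):                # label_names: id -> name
    if name in minority_labels:
        extra = generate_synthetic(name)
        texts += extra
        labels += [i] * len(extra)

# Fine-tune RoBERTa on the augmented set (standard HF classification setup).
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(label_names))
enc = tok(texts, truncation=True, padding=True, return_tensors="pt")


class AugmentedDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)

    def __getitem__(self, i):
        return {**{k: v[i] for k, v in enc.items()},
                "labels": torch.tensor(labels[i])}


Trainer(model=model,
        args=TrainingArguments("out", num_train_epochs=3),
        train_dataset=AugmentedDataset()).train()
```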
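To make the reported metric concrete, the toy example below (not the paper's evaluation code) shows why Macro-F1 rewards minority-class improvements: it averages per-class F1 scores with equal weight, so a rare label counts as much as a dominant one.

```python
# Illustration of Macro- vs. weighted-F1 sensitivity to minority classes.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]  # class 0 dominates; 1 and 2 are rare
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 2]  # the rare class 1 is missed entirely

print(f1_score(y_true, y_pred, average="macro"))     # ~0.65: the miss hurts
print(f1_score(y_true, y_pred, average="weighted"))  # ~0.85: masked by class 0
```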