Exploring Semantic Understanding and Generative Modeling in Speech-Text Multimodal Data Fusion
Published in: Applied Mathematics and Nonlinear Sciences, Vol. 9, No. 1
Format: Journal Article
Language: English
Published: Sciendo, 01.01.2024
Summary: Accurate semantic understanding is crucial in the field of human-computer interaction, and it can also greatly improve user comfort. In this paper, we take semantic emotion recognition as the research object, collect speech datasets from multiple domains, and extract their semantic features from natural language information. The natural language is digitized using word embedding technology, and machine learning methods are then used to understand the text's semantics. An attention mechanism is incorporated into the construction of a multimodal Attention-BiLSTM model. The model presented in this paper converges in around 20 epochs of training, and its training time and effectiveness are better than those of the other two comparison models. The model in this paper also has the highest recognition accuracy: compared to the S-CBLA model, the recognition accuracy for five semantic emotions, namely happy, angry, sad, sarcastic, and fear, improves by 24.89%, 15.75%, 1.99%, 2.5%, and 8.5%, respectively. In addition, the probability of the S-CBLA model correctly recognizing the semantic emotion "Pleasure" is 0.5, while the probability of its being recognized as "Angry" is 0.25, so pleasure is easily misclassified as anger. The model in this paper, on the other hand, is capable of distinguishing most semantic emotion types. The above experiments confirm the superiority of this paper's model, which improves the accuracy of semantic emotion recognition and is practical for human-computer interaction.
ISSN: 2444-8656
DOI: 10.2478/amns-2024-3156
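The summary above describes a pipeline of word embeddings fed to a BiLSTM whose hidden states are pooled by an attention mechanism before emotion classification. The paper itself does not publish code, so the following is only a minimal illustrative sketch of such an Attention-BiLSTM classifier; the framework (PyTorch), the class and parameter names, the layer sizes, the additive-attention formulation, and the use of five output classes are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only: all names, dimensions, and the attention variant
# below are assumptions; the paper does not release its implementation.
import torch
import torch.nn as nn


class AttentionBiLSTM(nn.Module):
    """Hypothetical text-emotion classifier: word embeddings -> BiLSTM -> attention -> logits."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One scalar attention score per time step over the BiLSTM hidden states.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                     # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        hidden, _ = self.bilstm(embedded)             # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attn_score(hidden), dim=1)  # (batch, seq_len, 1)
        context = (weights * hidden).sum(dim=1)       # attention-weighted sum of states
        return self.classifier(context)               # logits over emotion classes


# Minimal usage example with random token ids (hypothetical vocabulary of 10,000 words).
model = AttentionBiLSTM(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (4, 32)))     # 4 sentences, 32 tokens each
print(logits.shape)                                   # torch.Size([4, 5])
```

The sketch covers only the text branch; fusing it with acoustic features into the multimodal model described in the abstract would require an additional encoder and a fusion step not shown here.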