Music Emotion Recognition Based on Feature Fusion Broad Learning Method


Bibliographic Details
Published in: Journal of Donghua University (English Edition), Vol. 40, No. 3, pp. 343-350
Main Authors: YU Jinming, ZHANG Chenguang, HAI Han
Format: Journal Article
Language: English
Published: College of Information Science and Technology, Donghua University, Shanghai 201620, China, 01.06.2023
Subjects: TP391

Summary: With the rapid development of artificial intelligence and natural language processing (NLP), research on music retrieval has gained importance. Music conveys emotional signals, and classifying music by emotion helps in conveniently organizing and retrieving it; such classification is also a prerequisite for using music in psychological intervention and physiological adjustment. A new chord-to-vector method was proposed, which converts the chord information of a piece of music into a chord vector and combines weighted Mel-frequency cepstral coefficient (MFCC) and residual phase (RP) features with a cochleogram through feature fusion. Music emotion recognition and classification training was carried out using a fusion of a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM). In addition, the proposed model was compared with other model structures on a self-collected dataset. The results show that the proposed method achieves higher recognition accuracy than the other models.
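
The abstract describes a CNN and BiLSTM pipeline trained on fused audio features. The paper's exact architecture and parameters are not given in this record, so the following is only a minimal sketch under stated assumptions: MFCC features extracted with librosa stand in for the fused MFCC, RP, and cochleogram features; the layer sizes, the number of emotion classes, and the use of PyTorch are illustrative choices, not the authors' implementation.

# Minimal sketch (assumptions noted above, not the authors' code):
# MFCC sequences feed a 1-D CNN, then a BiLSTM, then an emotion classifier.
import librosa
import torch
import torch.nn as nn


def extract_mfcc(path, n_mfcc=40):
    """Load an audio file and return an MFCC sequence of shape (frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return torch.tensor(mfcc.T, dtype=torch.float32)


class CnnBiLstmClassifier(nn.Module):
    """1-D convolution over the feature axis followed by a BiLSTM over time."""

    def __init__(self, n_features=40, n_classes=4, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):           # x: (batch, frames, n_features)
        x = x.transpose(1, 2)       # (batch, n_features, frames) for Conv1d
        x = self.conv(x)            # (batch, 64, frames // 2)
        x = x.transpose(1, 2)       # (batch, frames // 2, 64)
        out, _ = self.bilstm(x)     # (batch, frames // 2, 2 * hidden)
        return self.fc(out[:, -1])  # classify from the last time step


# Example usage with a random batch standing in for MFCC sequences.
model = CnnBiLstmClassifier()
dummy = torch.randn(8, 200, 40)     # 8 clips, 200 frames, 40 MFCCs
logits = model(dummy)               # (8, 4) emotion class scores

The sketch uses a single convolution block feeding the recurrent layer, which is the simplest form of the CNN and BiLSTM combination the abstract names; the actual fusion of weighted MFCC, RP, and cochleogram inputs would replace the plain MFCC front end here.
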
ISSN: 1672-5220
DOI: 10.19884/j.1672-5220.202201005