Augmentation Embedded Deep Convolutional Neural Network for Predominant Instrument Recognition

Bibliographic Details
Published in: Applied Sciences, Vol. 13, No. 18, p. 10189
Main Authors: Zhang, Jian; Bai, Na
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2023

More Information
Summary: Instrument recognition is a critical task in the field of music information retrieval, and deep neural networks have become the dominant models for this task due to their effectiveness. Recently, incorporating data augmentation methods into deep neural networks has been a popular approach to improve instrument recognition performance. However, existing data augmentation processes are typically based on simple spectrogram representations of instruments and operate independently of the predominant instrument recognition process. This can leave certain required instrument types under-represented, leading to inconsistencies between the augmented data and the specific requirements of the recognition model. To build a more expressive instrument representation and address this inconsistency, this paper constructs a combined two-channel representation that better captures the distinctive rhythm patterns of different instrument types and proposes a new predominant instrument recognition strategy called Augmentation Embedded Deep Convolutional Neural Network (AEDCN). AEDCN adds two fully connected layers to the backbone neural network and integrates data augmentation directly into the recognition process by introducing a proposed Adversarial Embedded Conditional Variational AutoEncoder (ACEVAE) between the added fully connected layers of the backbone network. This embedded module generates augmented data conditioned on designated labels, thereby ensuring its compatibility with the predominant instrument recognition model. The effectiveness of the combined representation and AEDCN is validated through comparative experiments with other commonly used deep neural networks and data augmentation-based predominant instrument recognition methods on a polyphonic music recognition dataset. The results demonstrate the superior performance of AEDCN in predominant instrument recognition tasks.
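
The abstract describes the architecture only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the general idea: a CNN backbone over a two-channel input, two added fully connected layers, and a label-conditioned variational autoencoder embedded between them to generate augmented feature vectors during training. All layer sizes, the backbone design, the class count, and the second input channel are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch of the AEDCN idea; names and sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 11      # assumed number of predominant instrument classes
FEAT_DIM = 256        # assumed width of the first added fully connected layer
LATENT_DIM = 32       # assumed latent size of the conditional VAE

class ConditionalVAE(nn.Module):
    """Generates augmented feature vectors conditioned on instrument labels."""
    def __init__(self, feat_dim=FEAT_DIM, latent_dim=LATENT_DIM, num_classes=NUM_CLASSES):
        super().__init__()
        self.enc = nn.Linear(feat_dim + num_classes, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, feat, onehot_label):
        h = F.relu(self.enc(torch.cat([feat, onehot_label], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.dec(torch.cat([z, onehot_label], dim=1))
        return recon, mu, logvar

class AEDCNSketch(nn.Module):
    """CNN backbone + two added FC layers, with the conditional VAE embedded between them."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Two-channel input: e.g. a spectrogram channel plus a rhythm-oriented channel
        # (the exact second channel used in the paper is not specified in the abstract).
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc1 = nn.Linear(64, FEAT_DIM)            # first added FC layer
        self.cvae = ConditionalVAE()                  # embedded augmentation module
        self.fc2 = nn.Linear(FEAT_DIM, num_classes)   # second added FC layer (classifier)

    def forward(self, x, labels=None, augment=False):
        feat = F.relu(self.fc1(self.backbone(x).flatten(1)))
        logits = self.fc2(feat)
        if augment and labels is not None:
            onehot = F.one_hot(labels, NUM_CLASSES).float()
            aug_feat, mu, logvar = self.cvae(feat, onehot)
            aug_logits = self.fc2(aug_feat)           # classify the generated features as well
            return logits, aug_logits, mu, logvar
        return logits

In such a setup, training would presumably combine a classification loss on both the real and the generated feature vectors with the usual VAE reconstruction and KL terms, while inference uses only the plain backbone-plus-classifier path; the exact losses and adversarial component of ACEVAE are described in the full paper.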
ISSN: 2076-3417
DOI: 10.3390/app131810189