MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
Main Authors | Mingliang Zeng, Xu Tan, Rui Wang, Zeqian Ju, Tao Qin, Tie-Yan Liu |
---|---|
Format | Journal Article |
Language | English |
Published | 10.06.2021 |
Summary: Symbolic music understanding, which refers to understanding music from symbolic data (e.g., the MIDI format, as opposed to audio), covers many music applications such as genre classification, emotion classification, and music piece matching. While good music representations are beneficial for these applications, the lack of training data hinders representation learning. Inspired by the success of pre-training models in natural language processing, in this paper we develop MusicBERT, a large-scale pre-trained model for music understanding. To this end, we construct a large-scale symbolic music corpus containing more than 1 million songs. Since symbolic music carries more structural information (e.g., bar, position) and more diverse information (e.g., tempo, instrument, and pitch) than text, simply adopting pre-training techniques from NLP brings only marginal gains. Therefore, we design several mechanisms, including OctupleMIDI encoding and a bar-level masking strategy, to enhance pre-training with symbolic music data. Experiments demonstrate the advantages of MusicBERT on four music understanding tasks: melody completion, accompaniment suggestion, genre classification, and style classification. Ablation studies further verify the effectiveness of the OctupleMIDI encoding and bar-level masking strategy in MusicBERT.
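The abstract only names OctupleMIDI; per the paper, the encoding packs each note into a single token of eight elements (time signature, tempo, bar, position, instrument, pitch, duration, velocity). Below is a minimal Python sketch of that idea, not the authors' implementation: the class and helper names are hypothetical, and a real implementation would quantize each field into its own vocabulary rather than store raw values.

```python
from typing import NamedTuple

# One OctupleMIDI token: the eight note-level elements described in the
# paper. The class itself is an illustrative stand-in, not MusicBERT code.
class OctupleToken(NamedTuple):
    time_signature: int  # index into a time-signature vocabulary (e.g. 4/4)
    tempo: int           # quantized tempo bucket
    bar: int             # bar index within the piece
    position: int        # onset position within the bar
    instrument: int      # MIDI program number
    pitch: int           # MIDI pitch, 0-127
    duration: int        # quantized note duration
    velocity: int        # quantized MIDI velocity


def encode_note(note: dict, bar: int, position: int) -> OctupleToken:
    """Encode one note as a single octuple token.

    `note` is a hypothetical dict of raw MIDI attributes; quantization of
    each field into its own vocabulary is omitted for brevity.
    """
    return OctupleToken(
        time_signature=note["time_signature"],
        tempo=note["tempo"],
        bar=bar,
        position=position,
        instrument=note["instrument"],
        pitch=note["pitch"],
        duration=note["duration"],
        velocity=note["velocity"],
    )
```

Packing all attributes of a note into one token keeps sequences far shorter than event-by-event encodings, which matters when pre-training a BERT-style encoder over a million-song corpus.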
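The bar-level masking strategy departs from BERT's token-by-token random masking: within a chosen bar, all occurrences of one element type (say, every pitch) are masked together, so the model cannot trivially copy the answer from a neighbouring note in the same bar. A sketch under the same assumptions, reusing the OctupleToken class above; the MASK sentinel and the 15% rate are illustrative, not the paper's exact hyperparameters.

```python
import random
from typing import List

MASK = -1  # hypothetical id of the [MASK] symbol in each element vocabulary

ELEMENTS = ["time_signature", "tempo", "bar", "position",
            "instrument", "pitch", "duration", "velocity"]


def bar_level_mask(tokens: List[OctupleToken],
                   mask_prob: float = 0.15) -> List[OctupleToken]:
    """Mask whole (bar, element-type) groups instead of individual elements.

    For each bar, each element type is selected with probability `mask_prob`;
    if selected, that element is replaced by MASK in every token of the bar,
    avoiding the leakage that naive masking suffers when adjacent notes in a
    bar share correlated attributes.
    """
    bars = {tok.bar for tok in tokens}
    # Decide, per bar, which element types get masked.
    masked = {(b, e) for b in bars for e in ELEMENTS
              if random.random() < mask_prob}
    out = []
    for tok in tokens:
        fields = tok._asdict()
        for e in ELEMENTS:
            if (tok.bar, e) in masked:
                fields[e] = MASK
        out.append(OctupleToken(**fields))
    return out
```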
DOI: 10.48550/arxiv.2106.05630