LP-MusicCaps: LLM-Based Pseudo Music Captioning

Bibliographic Details
Published in: arXiv.org
Main Authors: Doh, SeungHeon; Choi, Keunwoo; Lee, Jongpil; Nam, Juhan
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 31.07.2023

Summary: Automatic music captioning, which generates natural language descriptions for given music tracks, holds significant potential for enhancing the understanding and organization of large volumes of musical data. Despite its importance, researchers face challenges due to the costly and time-consuming collection process of existing music-language datasets, which are limited in size. To address this data scarcity issue, we propose using large language models (LLMs) to artificially generate description sentences from large-scale tag datasets. This yields approximately 2.2M captions paired with 0.5M audio clips. We call the result the Large Language Model based Pseudo music caption dataset, or LP-MusicCaps for short. We conduct a systematic evaluation of this large-scale music captioning dataset with various quantitative evaluation metrics used in the field of natural language processing, as well as human evaluation. In addition, we train a transformer-based music captioning model on the dataset and evaluate it under zero-shot and transfer-learning settings. The results demonstrate that our proposed approach outperforms the supervised baseline model.
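
To make the tag-to-caption idea concrete, the sketch below builds a prompt from a list of music tags and hands it to an instruction-following LLM. This is a minimal illustration only: the prompt wording, the `query_llm` helper, and the example tags are hypothetical and are not taken from the LP-MusicCaps paper or its released code.

```python
# Hypothetical sketch of LLM-based pseudo caption generation from tags.
# The prompt template and example tags are illustrative; the actual
# LP-MusicCaps prompts and generation pipeline may differ.

def build_caption_prompt(tags: list[str]) -> str:
    """Turn a list of music tags into a captioning instruction for an LLM."""
    tag_str = ", ".join(tags)
    return (
        "Write a one-sentence natural-language description of a music clip "
        f"that has the following tags: {tag_str}."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM API."""
    raise NotImplementedError("Plug in your preferred LLM client here.")

if __name__ == "__main__":
    tags = ["jazz", "saxophone", "slow tempo", "relaxing"]  # example tags
    print(build_caption_prompt(tags))
    # caption = query_llm(build_caption_prompt(tags))
    # e.g. "A slow, relaxing jazz piece led by a saxophone."
```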
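The quantitative evaluation metrics mentioned in the summary are standard text-generation metrics from NLP. As one illustrative example (treating the choice of metric as an assumption, since this record does not list the exact metrics used), the snippet below scores a generated caption against a reference caption with NLTK's sentence-level BLEU.

```python
# Illustrative caption evaluation with a standard NLP metric (sentence BLEU).
# The reference/candidate strings are made up for demonstration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "a slow relaxing jazz piece led by a saxophone".split()
candidate = "a relaxing jazz track featuring saxophone".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
score = sentence_bleu([reference], candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```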
ISSN: 2331-8422