MOSA: Music mOtion and Semantic Annotation dataset
Main Authors: | Huang et al. |
Format: | Data Set |
Language: | English |
Published: | Zenodo, 30.05.2024 |
Summary: | The MOSA dataset is a large-scale music dataset containing 742 professional piano and violin solo performances by 23 musicians (> 30 hours, > 570 K notes). The dataset features the following types of data:
High-quality 3-D motion capture data
Audio recordings
Manual semantic annotations
This is the dataset of the paper: Huang et al. (2024) "MOSA: Music Motion with Semantic Annotation Dataset for Multimedia Analysis and Generation." IEEE/ACM Transactions on Audio, Speech, and Language Processing. DOI: 10.1109/TASLP.2024.3407529. Preprint: https://arxiv.org/abs/2406.06375
The description of the dataset is available on GitHub: https://github.com/yufenhuang/MOSA-Music-mOtion-and-Semantic-Annotation-dataset/blob/main/MOSA-dataset/dataset.md
To request access to the full dataset, please sign in to Zenodo and submit the request form. (A minimal file-inspection sketch, with assumed file names, follows this record.) |
Bibliography: | RelationTypeNote: HasVersion -- 10.5281/zenodo.11393449; RelationTypeNote: IsPublishedIn -- 10.1109/TASLP.2024.3407529 |
ISSN: | 2329-9304 |
DOI: | 10.5281/zenodo.11393448 |
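For orientation only, the sketch below shows one way a downloaded piece's files might be inspected in Python. The directory layout and file names (`MOSA-dataset/example_piece`, `motion.csv`, `annotation.json`) are hypothetical placeholders, not the dataset's documented structure; consult `dataset.md` in the GitHub repository linked above for the actual file organization and formats.

```python
# Minimal sketch for inspecting downloaded MOSA files.
# NOTE: all paths and file names below are assumptions for illustration;
# see dataset.md in the MOSA GitHub repository for the real layout.
import csv
import json
from pathlib import Path

DATA_DIR = Path("MOSA-dataset/example_piece")  # hypothetical directory

# 3-D motion capture data, assumed here to be a CSV of per-frame joint coordinates.
with open(DATA_DIR / "motion.csv", newline="") as f:
    frames = list(csv.reader(f))
print(f"motion capture frames: {len(frames)}")

# Manual semantic annotations, assumed here to be a JSON list of time-stamped labels.
with open(DATA_DIR / "annotation.json") as f:
    annotations = json.load(f)
print(f"semantic annotation entries: {len(annotations)}")
```

The audio recordings can be read with any standard audio library once their actual file format is confirmed from the dataset documentation.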