POPDG: Popular 3D Dance Generation with PopDanceSet

Bibliographic Details
Published in: arXiv.org
Main Authors: Luo, Zhenye; Ren, Min; Hu, Xuecai; Huang, Yongzhen; Yao, Li
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 06.05.2024

Summary: Generating dances that are both lifelike and well-aligned with music continues to be a challenging task in the cross-modal domain. This paper introduces PopDanceSet, the first dataset tailored to the preferences of young audiences, enabling the generation of aesthetically oriented dances. It surpasses the AIST++ dataset in music genre diversity and in the intricacy and depth of its dance movements. Moreover, the proposed POPDG model, built within the iDDPM framework, enhances dance diversity and, through the Space Augmentation Algorithm, strengthens spatial physical connections between human body joints, ensuring that increased diversity does not compromise generation quality. A streamlined Alignment Module is also designed to improve the temporal alignment between dance and music. Extensive experiments show that POPDG achieves state-of-the-art (SOTA) results on two datasets, and the paper also expands on current evaluation metrics. The dataset and code are available at https://github.com/Luke-Luo1/POPDG.
ISSN:2331-8422