Motion to Dance Music Generation using Latent Diffusion Model

The role of music in games and animation, particularly in dance content, is essential for creating immersive and entertaining experiences. Although recent studies have made strides in generating dance music from videos, their practicality for integrating music into games and animation remains limited. In this context, we present a method capable of generating plausible dance music from 3D motion data and genre labels. Our approach leverages a combination of a U-Net-based latent diffusion model and a pre-trained VAE model. To evaluate the performance of the proposed model, we employ evaluation metrics that assess various audio properties, including beat alignment, audio quality, motion-music correlation, and genre score. The quantitative results show that our approach outperforms previous methods. Furthermore, we demonstrate that our model can generate audio that seamlessly fits in-the-wild motion data. This capability enables us to create plausible dance music that complements the dynamic movements of characters and enhances the overall audiovisual experience in interactive media. Examples from our proposed model are available at this link: https://dmdproject.github.io/.
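The pipeline the abstract describes — condition on 3D motion and a genre label, run reverse diffusion in an audio latent space, then decode the latent with a pre-trained VAE — can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: all dimensions, the pooled conditioning, the toy denoiser, and the stand-in VAE decoder are hypothetical placeholders for learned networks.

```python
import numpy as np

# Hypothetical dimensions -- illustrative only, not taken from the paper.
MOTION_DIM = 72        # e.g., per-frame joint rotation features
LATENT_DIM = 64        # size of the audio latent
N_GENRES = 10
DIFFUSION_STEPS = 50

rng = np.random.default_rng(0)

def encode_condition(motion, genre_id):
    """Pool per-frame 3D motion features and append a one-hot genre label."""
    pooled = motion.mean(axis=0)                   # (MOTION_DIM,)
    genre = np.eye(N_GENRES)[genre_id]             # (N_GENRES,)
    return np.concatenate([pooled, genre])         # conditioning vector

def toy_denoiser(z, t, cond):
    """Stand-in for the learned U-Net noise predictor (purely illustrative)."""
    return 0.1 * z + 0.01 * cond[:LATENT_DIM] * np.cos(t)

def generate_latent(motion, genre_id):
    """Reverse diffusion: start from Gaussian noise, iteratively denoise
    conditioned on motion + genre (a highly simplified update rule)."""
    cond = encode_condition(motion, genre_id)
    z = rng.standard_normal(LATENT_DIM)
    for t in np.linspace(1.0, 0.0, DIFFUSION_STEPS):
        z = z - toy_denoiser(z, t, cond)
    return z

def vae_decode(z, n_samples=16000):
    """Stand-in for the pre-trained VAE decoder (latent -> waveform):
    here just a weighted sum of sinusoids so the sketch runs end to end."""
    t = np.linspace(0.0, 1.0, n_samples)
    freqs = [100.0, 150.0, 200.0, 250.0]
    return sum(z[i] * np.sin(2 * np.pi * f * t)
               for i, f in enumerate(freqs)) / len(freqs)

motion = rng.standard_normal((120, MOTION_DIM))    # 120 frames of mock motion
z = generate_latent(motion, genre_id=3)
audio = vae_decode(z)                              # one second at 16 kHz
```

In the actual model, `toy_denoiser` corresponds to the trained U-Net operating in the VAE's latent space, and `vae_decode` to the pre-trained VAE decoder that maps the denoised latent back to audio.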
Publisher
Association for Computing Machinery, Inc
Issue Date
2023-12-13
Language
English
Citation

SIGGRAPH Asia 2023

URI
http://hdl.handle.net/10203/316352
Appears in Collection
GCT-Conference Papers (conference papers)
Files in This Item
There are no files associated with this item.
