MCM: Multi-condition Motion Synthesis Framework
Zeyu Ling, Bo Han, Yongkang Wong, Han Lin, Mohan Kankanhalli, Weidong Geng
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 1083-1091.
https://doi.org/10.24963/ijcai.2024/120
Conditional human motion synthesis (HMS) aims to generate human motion sequences that conform to specific conditions. Text and audio are the two predominant modalities used as HMS control conditions. While existing research has focused primarily on single conditions, multi-condition human motion synthesis remains underexplored. In this study, we propose a multi-condition HMS framework, termed MCM, built on a dual-branch structure composed of a main branch and a control branch. This framework extends the applicability of a diffusion model, originally conditioned solely on text, to auditory conditions, covering both music-to-dance and co-speech HMS while preserving the motion quality and semantic-association capabilities of the original model.
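The dual-branch idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a ControlNet-style design in which a main branch consumes the text condition, a zero-initialized control branch consumes the audio condition, and the control output is added as a residual. All names (`dual_branch_step`, the parameter keys, the feature dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # Simple affine layer shared by both branches.
    return x @ w + b

def dual_branch_step(motion, text_emb, audio_emb, params):
    """One hypothetical denoising step of a dual-branch model:
    the main branch handles the text condition; the control branch
    injects the audio condition as an additive residual."""
    main = dense(np.concatenate([motion, text_emb], axis=-1),
                 params["w_main"], params["b_main"])
    ctrl = dense(np.concatenate([motion, audio_emb], axis=-1),
                 params["w_ctrl"], params["b_ctrl"])
    return main + ctrl

d_motion, d_text, d_audio = 8, 4, 4
params = {
    "w_main": rng.normal(size=(d_motion + d_text, d_motion)),
    "b_main": np.zeros(d_motion),
    # Zero-initializing the control branch leaves the pretrained
    # main branch's behavior unchanged at the start of training.
    "w_ctrl": np.zeros((d_motion + d_audio, d_motion)),
    "b_ctrl": np.zeros(d_motion),
}
motion = rng.normal(size=(16, d_motion))   # 16 frames of motion features
text_emb = rng.normal(size=(16, d_text))
audio_emb = rng.normal(size=(16, d_audio))
out = dual_branch_step(motion, text_emb, audio_emb, params)
print(out.shape)  # (16, 8)
```

With the control branch at zero, the step reduces exactly to the main branch, which is why this initialization preserves the original model's quality before the auditory condition is learned.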
Furthermore, we propose a Transformer-based diffusion model, designated MWNet, as the main branch. By integrating multi-wise self-attention modules, this model captures the spatial intricacies and inter-joint correlations inherent in motion sequences.
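To make the "multi-wise" attention idea concrete, here is a minimal sketch, assuming it combines attention along two axes of a motion tensor: one pass attends across frames (temporal) and one across joints (spatial), with the two outputs summed. The function names and the summation choice are illustrative assumptions, not MWNet's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Plain scaled dot-product self-attention over the middle axis
    # of a (batch, tokens, channels) array.
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def multi_wise_attention(motion):
    """Hypothetical multi-wise attention over a (frames, joints, channels)
    tensor: one pass mixes information across time, another across joints."""
    # Frame-wise: attend over time, treating each joint as a batch element.
    temporal = self_attention(motion.transpose(1, 0, 2)).transpose(1, 0, 2)
    # Joint-wise: attend over joints within each frame.
    spatial = self_attention(motion)
    return temporal + spatial

motion = np.random.default_rng(0).normal(size=(16, 22, 8))  # 16 frames, 22 joints
print(multi_wise_attention(motion).shape)  # (16, 22, 8)
```

Factoring attention per axis keeps the cost at O(T² + J²) per token rather than O((TJ)²) for full joint-time attention, which is one plausible motivation for such a decomposition.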
Extensive experiments show that our method achieves competitive results in single-condition and multi-condition HMS tasks.
Keywords:
Computer Vision: CV: 3D computer vision
Computer Vision: CV: Applications