Human Motion Generation via Cross-Space Constrained Sampling

Zhongyue Huang, Jingwei Xu, Bingbing Ni

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 757-763. https://doi.org/10.24963/ijcai.2018/105

We aim to automatically generate a human motion sequence from a single input person image, conditioned on a specific action label. To this end, we propose a cross-space human motion video generation network featuring two paths: a forward path that first samples/generates a sequence of low-dimensional motion vectors from a Gaussian Process (GP) and pairs them with the input person image to form a moving human figure sequence; and a backward path that re-extracts the corresponding latent motion representations from the predicted human images. In the absence of direct supervision, the reconstructed latent motion representations are expected to be as close as possible to the GP-sampled ones, yielding a cyclic objective function for cross-space (i.e., motion and appearance) mutually constrained generation. We further propose an alternative sampling/generation algorithm subject to the constraints from both spaces. Extensive experimental results show that the proposed framework successfully generates novel human motion sequences with reasonable visual quality.
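To make the cyclic objective concrete, below is a minimal NumPy sketch of the two-path idea described in the abstract: latent motion vectors are drawn from a GP prior (here with an RBF kernel, an assumption), decoded into frames, re-encoded, and compared against the original samples via an L2 distance. The names `sample_gp_motion`, `cyclic_loss`, and `motion_encoder` are illustrative placeholders, not the authors' implementation; in the paper the forward and backward paths are deep image generation and encoding networks.

```python
import numpy as np

# Hypothetical sketch of cross-space constrained sampling.
# The generator and motion encoder below are stubs standing in for the
# paper's forward (motion -> appearance) and backward (appearance ->
# motion) networks; they are not the authors' actual architectures.

def rbf_kernel(ts, length_scale=2.0, variance=1.0):
    """RBF (squared-exponential) covariance over frame indices."""
    d = ts[:, None] - ts[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def sample_gp_motion(num_frames, latent_dim, rng):
    """Sample a temporally smooth latent motion trajectory from a GP prior.

    Each latent dimension is an independent GP over the frame index,
    so nearby frames receive correlated (smooth) motion vectors.
    """
    ts = np.arange(num_frames, dtype=float)
    K = rbf_kernel(ts) + 1e-6 * np.eye(num_frames)  # jitter for stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((num_frames, latent_dim))  # (T, D)

def cyclic_loss(z_sampled, frames, motion_encoder):
    """Cross-space cyclic objective: latents re-extracted from the
    generated frames should match the GP-sampled ones, since no
    frame-level ground truth supervises the generated video."""
    z_recon = motion_encoder(frames)
    return np.mean((z_sampled - z_recon) ** 2)

rng = np.random.default_rng(0)
z = sample_gp_motion(num_frames=16, latent_dim=32, rng=rng)

# Forward path (placeholder): each motion vector would be paired with
# the input person image to synthesize a frame; here an identity stub.
frames = z.copy()
print(cyclic_loss(z, frames, motion_encoder=lambda f: f))  # 0.0 for the stub
```

A design note implied by the abstract: sampling from a GP prior, rather than drawing each frame's latent independently, is what makes the sampled motion trajectory temporally coherent before any appearance information is involved.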
Keywords:
Machine Learning: Neural Networks
Machine Learning: Deep Learning
Computer Vision: Computer Vision