A Deep Bi-directional Attention Network for Human Motion Recovery
Qiongjie Cui, Huaijiang Sun, Yupeng Li, Yue Kong
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 701-707.
https://doi.org/10.24963/ijcai.2019/99
Human motion capture (mocap) data, which record the movement of markers attached to specific joints, have gradually become the most popular solution for animation production. However, raw motion data are often corrupted by joint occlusion, marker shedding, and limited equipment precision, which severely limits performance in real-world applications. Since human motion is essentially sequential data, the latest methods resort to variants of the long short-term memory network (LSTM) to solve related problems, but most of them tend to produce visually unreasonable results. This is mainly because these methods can hardly capture long-term dependencies and cannot explicitly exploit relevant context, especially in long sequences. To address these issues, we propose a deep bi-directional attention network (BAN) that can not only capture long-term dependencies but also adaptively extract relevant information at each time step. Moreover, the proposed model, which embeds an attention mechanism in the bi-directional LSTM (BLSTM) structure at both the encoding and decoding stages, can decide where to borrow information and use it to recover corrupted frames effectively. Extensive experiments on the CMU database demonstrate that the proposed model consistently outperforms other state-of-the-art methods in terms of recovery accuracy and visualization.
Keywords:
Computer Vision: Motion and Tracking
Machine Learning: Deep Learning
Humans and AI: Human-Computer Interaction
Computer Vision: 2D and 3D Computer Vision
Computer Vision: Computer Vision
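The core idea the abstract describes, attending over bidirectional encoder states so the decoder can decide where to borrow information when recovering a corrupted frame, can be illustrated with a minimal dot-product-attention sketch. This is an assumption-laden toy (plain Python, no LSTM machinery, invented function names and dimensions), not the paper's actual BAN implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(decoder_state, encoder_states):
    """Dot-product attention: score each (bidirectional) encoder state
    against the current decoder state, normalize, and return the
    weighted context vector plus the attention weights."""
    scores = [dot(decoder_state, h) for h in encoder_states]
    weights = softmax(scores)
    dim = len(decoder_state)
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: 3 encoder time steps with 2-dim hidden states.
# The decoder state is most similar to steps 0 and 2, so those
# frames contribute more to the recovered context.
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
context, weights = attend(dec, enc)
```

In the paper's setting the encoder states would come from a BLSTM over the (partially corrupted) motion sequence, so each state summarizes both past and future frames; the attention weights then make the "where to borrow information" decision explicit at every decoding step.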