Connectionist Temporal Modeling of Video and Language: a Joint Model for Translation and Sign Labeling
Dan Guo, Shengeng Tang, Meng Wang
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 751-757.
https://doi.org/10.24963/ijcai.2019/106
Online sign interpretation is challenged by hybrid semantics learning across sequential variations of visual representations, sign linguistics, and textual grammar. This paper proposes a Connectionist Temporal Modeling (CTM) network for sentence translation and sign labeling. To capture short-term temporal correlations, a Temporal Convolution Pyramid (TCP) module is applied to 2D CNN features, realizing (2D+1D) ≈ "pseudo 3D" CNN features. CTM aligns these pseudo-3D features with the original 3D CNN clip features and fuses them. Next, we implement a connectionist decoding scheme for long-term sequential learning. We embed dynamic programming into the decoding scheme, which directly learns the temporal mapping among features, sign labels, and the generated sentence. The dynamic-programming solution to sign labeling is treated as pseudo labels. Finally, we exploit these pseudo supervision cues in an end-to-end framework. A joint objective function is designed to measure feature correlation, impose entropy regularization on sign labeling, and maximize the probability of the decoded sentence. Experimental results on the RWTH-PHOENIX-Weather and USTC-CSL datasets demonstrate the effectiveness of the proposed approach.
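The (2D+1D) idea in the abstract can be illustrated with a minimal sketch: per-frame 2D-CNN features are filtered along the time axis at several receptive-field sizes, and the pyramid levels are fused into "pseudo 3D" features. This is only an assumption-laden toy (the paper's TCP uses learned 1D convolutions; here fixed mean filters and the function names `temporal_conv`/`tcp_pseudo_3d` are hypothetical stand-ins):

```python
import numpy as np

def temporal_conv(feats, kernel):
    """1D filtering along the time axis of a (T, D) feature sequence."""
    T, D = feats.shape
    k = len(kernel)
    pad = k // 2
    padded = np.pad(feats, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(feats)
    for t in range(T):
        window = padded[t:t + k]              # (k, D) temporal window
        out[t] = (kernel[:, None] * window).sum(axis=0)
    return out

def tcp_pseudo_3d(frame_feats, kernel_sizes=(3, 5, 7)):
    """Toy TCP sketch: filter at several temporal scales, then fuse.

    The real module learns its 1D kernels; simple mean filters are
    used here only to show the (2D features + 1D temporal conv) shape.
    """
    levels = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k               # fixed mean filter per scale
        levels.append(temporal_conv(frame_feats, kernel))
    return np.mean(levels, axis=0)            # fuse pyramid levels

# toy 2D-CNN features: 10 frames, 4-dim each
feats = np.random.default_rng(0).normal(size=(10, 4))
pseudo3d = tcp_pseudo_3d(feats)
print(pseudo3d.shape)  # (10, 4): same layout, temporally smoothed
```

The output keeps the per-frame layout of the 2D features, so it can be aligned and fused with 3D CNN clip features of matching temporal resolution, as the CTM alignment step requires.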
Keywords:
Computer Vision: Language and Vision
Computer Vision: Biometrics, Face and Gesture Recognition