Self-Supervised Video Action Localization with Adversarial Temporal Transforms
Guoqiang Gong, Liangfeng Zheng, Wenhao Jiang, Yadong Mu
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 693-699.
https://doi.org/10.24963/ijcai.2021/96
Weakly-supervised temporal action localization aims to locate the intervals of action instances using only video-level action labels for training. However, the localization results produced by video classification networks are often inaccurate due to the lack of temporal boundary annotations. Our motivating insight is that the temporal boundaries of actions should be predicted stably under various temporal transforms. This inspires a self-supervised equivariant transform consistency constraint. We design a set of temporal transform operations, ranging from naive temporal down-sampling to learnable attention-piloted time warping. In our model, a localization network aims to perform well under all transforms, while a policy network is trained to adversarially choose, at each iteration, the temporal transform under which the localization results become inconsistent with those of the localization network. Additionally, we devise a self-refine module that enhances the completeness of action intervals by harnessing temporal and semantic contexts. Experimental results on THUMOS14 and ActivityNet demonstrate that our model consistently outperforms state-of-the-art weakly-supervised temporal action localization methods.
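To make the consistency constraint concrete, the following is a minimal PyTorch sketch, not the authors' implementation: `TinyLocNet`, `temporal_downsample`, and `equivariance_loss` are illustrative names, the transform set is restricted to naive temporal down-sampling, and the learned policy network is replaced by a greedy search over down-sampling rates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLocNet(nn.Module):
    """Toy stand-in for the localization network: snippet features -> actionness scores."""
    def __init__(self, dim):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, feats):                  # feats: (B, T, D)
        return self.head(feats).squeeze(-1)    # scores: (B, T)

def temporal_downsample(feats, rate):
    # Naive temporal down-sampling: keep every `rate`-th snippet.
    return feats[:, ::rate, :]

def equivariance_loss(net, feats, rate):
    scores = net(feats)                                 # (B, T)
    scores_t = net(temporal_downsample(feats, rate))    # (B, T // rate)
    # Equivariance: transforming the input and then predicting should match
    # predicting first and applying the same transform to the scores.
    return F.mse_loss(scores[:, ::rate], scores_t)

# Usage: B=4 videos, T=64 snippets, D=128-dim features.
net = TinyLocNet(128)
feats = torch.randn(4, 64, 128)

# Adversarial transform selection (a greedy stand-in for the paper's learned
# policy network): pick the rate that currently yields the largest inconsistency,
# then train the localization network to be consistent under it.
with torch.no_grad():
    worst = max([2, 4, 8], key=lambda r: equivariance_loss(net, feats, r).item())
loss = equivariance_loss(net, feats, worst)
loss.backward()
```

The greedy `max` over a fixed rate set is a deliberate simplification; the paper instead learns a policy network over a richer transform family, including attention-piloted time warping.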
Keywords:
Computer Vision: Action Recognition
Computer Vision: Video: Events, Activities and Surveillance