MAM-RNN: Multi-level Attention Model Based RNN for Video Captioning

Xuelong Li, Bin Zhao, Xiaoqiang Lu

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2208-2214. https://doi.org/10.24963/ijcai.2017/307

Visual information is quite important for the task of video captioning. However, videos contain a lot of uncorrelated content, which may interfere with generating a correct caption. Motivated by this observation, we attempt to exploit the visual features that are most correlated with the caption. In this paper, a Multi-level Attention Model based Recurrent Neural Network (MAM-RNN) is proposed, where the MAM encodes the visual features and the RNN works as the decoder to generate the video caption. During generation, the proposed approach adaptively attends to the salient regions within each frame and to the frames most correlated with the caption. Experimental results on two benchmark datasets, i.e., MSVD and Charades, demonstrate the excellent performance of the proposed approach.
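
As a rough illustration of the multi-level attention idea described in the abstract, the sketch below stacks a region-level soft attention (over spatial regions within each frame) and a frame-level soft attention (over frames in the clip) in front of a GRU decoder. All module names, dimensions, and the specific soft-attention formulation are illustrative assumptions written in PyTorch, not the authors' implementation.

```python
# Illustrative two-level attention decoder for video captioning.
# NOTE: a minimal sketch in the spirit of MAM-RNN; the architecture
# details here are assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelAttentionDecoder(nn.Module):
    """Caption decoder with region-level then frame-level soft attention."""

    def __init__(self, feat_dim=512, embed_dim=256, hid_dim=512, vocab_size=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Attention scorers conditioned on the decoder hidden state.
        self.region_score = nn.Linear(feat_dim + hid_dim, 1)
        self.frame_score = nn.Linear(feat_dim + hid_dim, 1)
        self.rnn = nn.GRUCell(embed_dim + feat_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, word_ids, h, feats):
        """One decoding step.

        word_ids: (B,) previous word indices
        h:        (B, hid_dim) decoder hidden state
        feats:    (B, T, R, D) features for T frames with R regions each
        """
        B, T, R, D = feats.shape
        # Region-level attention: weight salient regions within each frame.
        h_r = h[:, None, None, :].expand(B, T, R, h.size(-1))
        r_scores = self.region_score(torch.cat([feats, h_r], dim=-1)).squeeze(-1)
        r_weights = F.softmax(r_scores, dim=2)                    # over regions
        frame_vecs = (r_weights.unsqueeze(-1) * feats).sum(2)     # (B, T, D)
        # Frame-level attention: weight frames correlated with the caption.
        h_f = h[:, None, :].expand(B, T, h.size(-1))
        f_scores = self.frame_score(torch.cat([frame_vecs, h_f], dim=-1)).squeeze(-1)
        f_weights = F.softmax(f_scores, dim=1)                    # over frames
        context = (f_weights.unsqueeze(-1) * frame_vecs).sum(1)   # (B, D)
        # RNN update on [word embedding; attended context], then predict.
        h = self.rnn(torch.cat([self.embed(word_ids), context], dim=-1), h)
        return self.out(h), h

# Usage with dummy features: 2 clips, 10 frames, a 7x7 grid of regions.
decoder = MultiLevelAttentionDecoder()
feats = torch.randn(2, 10, 49, 512)
h = torch.zeros(2, 512)
logits, h = decoder.step(torch.zeros(2, dtype=torch.long), h, feats)
```

Conditioning both attention levels on the decoder state is what lets the weighting adapt per generated word; at training time the step function would be unrolled over the ground-truth caption with a cross-entropy loss on the logits.
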
Keywords:
Machine Learning: Classification
Machine Learning: Data Mining
Machine Learning: Feature Selection/Construction
Machine Learning: Machine Learning