Position Focused Attention Network for Image-Text Matching
Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3792-3798.
https://doi.org/10.24963/ijcai.2019/526
Image-text matching tasks
have recently attracted considerable attention in the computer vision field. The
key point of this cross-domain problem is how to accurately measure the
similarity between the visual and the textual contents, which demands a fine
understanding of both modalities. In this paper, we propose a novel position
focused attention network (PFAN) to investigate the relation between the visual
and the textual views. In this work, we integrate the object position clue to
enhance the visual-text joint-embedding learning. We first split each image into blocks, from which we
infer the relative position of each region in the image. Then, an attention
mechanism is proposed to model the relations between image regions and
blocks and to generate a valuable position feature, which is further
utilized to enhance the region representation and to model a more reliable
relationship between the visual image and the textual sentence. Experiments
on the popular datasets Flickr30K and MS-COCO show the effectiveness of the
proposed method. Beyond the public datasets, we also conduct experiments on
a practical news dataset we collected (Tencent-News) to validate the
practical value of the proposed method. To the best of our knowledge, this
is the first attempt to evaluate image-text matching performance in such a
practical application. Our method achieves state-of-the-art performance on
all three datasets.
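
To make the block-based position attention described in the abstract concrete, below is a minimal PyTorch sketch of one plausible formulation. It assumes the image is split into an n_blocks x n_blocks grid and each region keeps its top_k most-overlapped blocks; the class name PositionAttention, the parameter names, and the way the learned score is fused with the geometric overlap prior are illustrative assumptions, not the paper's exact equations.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Sketch of a block-based position attention: the image is split
    into an n_blocks x n_blocks grid, and each region attends over the
    top_k grid blocks it overlaps to produce a position feature."""

    def __init__(self, n_blocks=16, embed_dim=1024, top_k=9):
        super().__init__()
        # One learnable embedding per grid block (row-major indexing).
        self.block_embed = nn.Embedding(n_blocks * n_blocks, embed_dim)
        # Scores each candidate block's learned relevance.
        self.attn_fc = nn.Linear(embed_dim, 1)
        self.top_k = top_k

    def forward(self, block_ids, overlap):
        # block_ids: (B, R, K) long tensor of the K blocks each of the
        #            R regions overlaps most, per image in the batch.
        # overlap:   (B, R, K) normalized overlap ratios (geometric prior).
        e = self.block_embed(block_ids)               # (B, R, K, D)
        s = self.attn_fc(e).squeeze(-1)               # (B, R, K)
        # Fuse the learned score with the overlap prior before softmax.
        w = F.softmax(s * overlap, dim=-1)            # (B, R, K)
        # Position feature: attention-weighted sum of block embeddings.
        return torch.einsum('brk,brkd->brd', w, e)    # (B, R, D)

# Usage with hypothetical shapes: 2 images, 36 regions, 9 candidate blocks.
pa = PositionAttention()
ids = torch.randint(0, 256, (2, 36, 9))
ov = torch.rand(2, 36, 9)
pos_feat = pa(ids, ov)   # (2, 36, 1024)

The resulting position feature would then be fused with each region's visual feature (e.g., by summation or concatenation) before the joint visual-textual embedding, in the spirit of the region-expression enhancement the abstract describes.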
Keywords:
Machine Learning: Deep Learning
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation
Computer Vision: Language and Vision