Abstract

Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Learning Compact Visual Representation with Canonical Views for Robust Mobile Landmark Search / 3959
Lei Zhu, Jialie Shen, Xiaobai Liu, Liang Xie, Liqiang Nie

Mobile Landmark Search (MLS) has recently received increasing attention. However, it remains unsolved due to two important issues: the high bandwidth consumption of query transmission and the large visual variations of query images. This paper proposes a Canonical View based Compact Visual Representation (2CVR) to address these problems via a novel three-stage learning scheme. First, a submodular function is designed to measure the visual representativeness and redundancy of a view set. With it, canonical views, which capture the key visual appearances of a landmark with limited redundancy, are efficiently discovered via an iterative mining strategy. Second, multimodal sparse coding transforms multiple visual features into an intermediate representation that robustly characterizes the visual content of varied landmark images using only the fixed canonical views. Finally, compact binary codes are learned on the intermediate representation within a tailored binary embedding model that preserves the visual relations of images measured with the canonical views and removes noise. With 2CVR, robust visual query processing, low-cost query transmission, and fast search are supported simultaneously. Experiments demonstrate the superior performance of 2CVR over several state-of-the-art methods.
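The canonical-view mining stage described above can be illustrated with a minimal sketch. The greedy loop below maximizes a generic "representativeness minus redundancy" surrogate over an image-similarity matrix, in the spirit of submodular selection; the exact objective, the similarity measure, and all names (`greedy_canonical_views`, `redundancy_weight`) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def greedy_canonical_views(sim, k, redundancy_weight=0.5):
    """Greedily pick k canonical views from an n x n image-similarity matrix.

    Assumption: the objective (coverage of the full view set minus pairwise
    redundancy within the selection) is a generic submodular-style surrogate,
    not the exact function proposed in the paper.
    """
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)  # best similarity of each image to the current selection
    for _ in range(k):
        best_gain, best_i = -np.inf, None
        for i in range(n):
            if i in selected:
                continue
            # Representativeness gain: how much view i improves coverage of all images.
            gain = np.maximum(covered, sim[i]).sum() - covered.sum()
            # Redundancy penalty: similarity of view i to already selected views.
            if selected:
                gain -= redundancy_weight * sim[i, selected].sum()
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        covered = np.maximum(covered, sim[best_i])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(50, 8))
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T  # cosine similarity as a stand-in for visual similarity
    print(greedy_canonical_views(sim, k=5))
```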
