Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Robust Joint Discriminative Feature Learning for Visual Tracking / 3403
Xiangyuan Lan, Shengping Zhang, Pong C. Yuen

Because of the complementarity of multiple visual cues (features) in appearance modeling, many tracking algorithms attempt to fuse multiple features to improve tracking performance from two aspects: increasing representation accuracy against appearance variations and enhancing the discriminability between the tracked target and its background. Since both aspects contribute simultaneously to the success of a visual tracker, how to fully unleash the capabilities of multiple features from these two aspects in appearance modeling is a key issue for feature fusion-based visual tracking. To address this problem, and unlike other feature fusion-based trackers that consider only one of the two aspects, this paper proposes a unified feature learning framework that simultaneously exploits both the representation capability and the discriminability of multiple features for visual tracking. In particular, the proposed feature learning framework is capable of: 1) learning robust features by separating out corrupted features for accurate feature representation, 2) seamlessly incorporating the discriminability of multiple visual cues into feature learning, and 3) fusing features by exploiting their shared and feature-specific discriminative information. Extensive experimental results on challenging videos show that the proposed tracker performs favourably against ten other state-of-the-art trackers.