Zero Shot Learning via Low-rank Embedded Semantic AutoEncoder
Yang Liu, Quanxue Gao, Jin Li, Jungong Han, Ling Shao
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2490-2496.
https://doi.org/10.24963/ijcai.2018/345
Zero-shot learning (ZSL) has been widely researched and has achieved notable success in machine learning. Most existing ZSL methods aim to accurately recognize objects of unseen classes by learning a shared mapping from the feature space to a semantic space. However, such methods have not investigated in depth whether the mapping can precisely reconstruct the original visual features. Motivated by the fact that data often have low intrinsic dimensionality (e.g., they lie in a low-dimensional subspace), we formulate in this paper a novel framework named Low-rank Embedded Semantic AutoEncoder (LESAE) to jointly seek a low-rank mapping that links visual features with their semantic representations. Following the encoder-decoder paradigm, the encoder learns a low-rank mapping from the visual features to the semantic space, while the decoder reconstructs the original data from the learned mapping. In addition, a non-greedy iterative algorithm is adopted to solve our model. Extensive experiments on six benchmark datasets demonstrate its superiority over several state-of-the-art algorithms.
Keywords:
Machine Learning: Classification
Computer Vision: Recognition: Detection, Categorization, Indexing, Matching, Retrieval, Semantic Interpretation
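To make the encoder-decoder idea in the abstract concrete, the sketch below solves a plain semantic autoencoder objective, min_W ||X - Wᵀ S||² + λ||W X - S||², whose closed-form solution is a Sylvester equation (S Sᵀ)W + W(λ X Xᵀ) = (1+λ) S Xᵀ. This is a simplified baseline, not the LESAE method itself: it omits the paper's low-rank constraint and non-greedy iterative solver, and the function name, λ value, and dimensions are illustrative assumptions.

```python
import numpy as np

def semantic_autoencoder(X, S, lam=0.2):
    """Simplified semantic autoencoder (no low-rank constraint).

    X: (d, n) visual features, one column per sample.
    S: (k, n) semantic representations (e.g., attribute vectors).
    Returns W: (k, d) encoder; W.T acts as the decoder.
    """
    A = S @ S.T              # (k, k)
    B = lam * (X @ X.T)      # (d, d)
    C = (1 + lam) * S @ X.T  # (k, d)
    # Solve the Sylvester equation A W + W B = C by Kronecker
    # vectorization: (I ⊗ A + Bᵀ ⊗ I) vec(W) = vec(C).
    # Fine for small d, k; a dedicated Sylvester solver scales better.
    k, d = A.shape[0], B.shape[0]
    M = np.kron(np.eye(d), A) + np.kron(B.T, np.eye(k))
    w = np.linalg.solve(M, C.reshape(-1, order="F"))
    return w.reshape(k, d, order="F")
```

At test time, unseen-class images would be encoded as W @ X and matched to class semantic prototypes by nearest neighbour; the reconstruction term λ||W X - S||² is what ties the mapping back to the original visual features.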