Better Embedding and More Shots for Few-shot Learning

Ziqiu Chi, Zhe Wang, Mengping Yang, Wei Guo, Xinlei Xu

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2874-2880. https://doi.org/10.24963/ijcai.2022/398

In few-shot learning, methods are constrained by the scarcity of labeled data, which yields suboptimal embeddings. Recent studies instead train the embedding network on separate large-scale labeled data; however, the trained network may produce distorted embeddings of the target data. We argue that two ingredients are required for a promising solution, which we call Better Embedding and More Shots (BEMS). Given embeddings extracted from such an embedding network, BE maximizes the general representation that is extracted while discarding over-fitted information; to this end, we introduce topological relations for global reconstruction, which avoids excessive memorization. MS maximizes the relevance between the reconstructed embedding and the target class space. Here, increasing the number of shots is a pivotal but usually intractable strategy; instead, we derive a bound on an information-theoretic loss function and thereby implicitly achieve infinitely many shots at negligible cost. Extensive experiments demonstrate state-of-the-art performance: compared with the baseline, our method yields improvements of up to 10% or more. We also show that BEMS is suitable for both standard pre-trained and meta-learned embedding networks.
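
To make the "implicit infinite shots" claim concrete, the sketch below is a hypothetical illustration, not the paper's actual derivation: it assumes each class-c embedding is drawn from an isotropic Gaussian N(mu_c, sigma^2 I), under which the k-shot prototype (the mean of k sampled embeddings) converges to mu_c as k grows. The infinite-shot classifier can then be evaluated in closed form by using the distribution mean directly, with no sampling cost. All names here (mus, sigma, sample, accuracy) are invented for the example.

    # Hypothetical sketch: finite-shot vs. "infinite-shot" nearest-prototype
    # classification under an assumed Gaussian embedding model. This is an
    # illustration of the idea, not the authors' bound or implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_classes, dim = 5, 64
    mus = rng.normal(size=(n_classes, dim))  # assumed class means
    sigma = 0.8                              # assumed shared noise scale

    def sample(c, n):
        """Draw n embeddings for class c from the assumed Gaussian model."""
        return mus[c] + sigma * rng.normal(size=(n, dim))

    def accuracy(prototypes, n_query=200):
        """Nearest-prototype accuracy on freshly sampled query embeddings."""
        correct = 0
        for c in range(n_classes):
            q = sample(c, n_query)  # queries from class c
            d = ((q[:, None, :] - prototypes[None]) ** 2).sum(-1)
            correct += (d.argmin(axis=1) == c).sum()
        return correct / (n_classes * n_query)

    for k in (1, 5, 25):  # finite-shot prototypes: average of k samples
        protos = np.stack([sample(c, k).mean(0) for c in range(n_classes)])
        print(f"{k:>3}-shot prototype accuracy: {accuracy(protos):.3f}")

    # "Infinite shots": the prototype is the distribution mean itself,
    # so no sampling is needed and the cost is negligible.
    print(f"inf-shot (closed-form mean) accuracy: {accuracy(mus):.3f}")

Running the sketch shows the finite-shot accuracy climbing toward the closed-form infinite-shot accuracy as k increases, which is the intuition behind replacing explicit shot sampling with a bound evaluated on the assumed embedding distribution.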
Keywords:
Machine Learning: Few-shot learning
Computer Vision: Transfer, low-shot, semi- and un-supervised learning
Computer Vision: Machine Learning for Vision
Machine Learning: Classification
Machine Learning: Theory of Deep Learning