Does Tail Label Help for Large-Scale Multi-Label Learning
Tong Wei, Yu-Feng Li
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2847-2853.
https://doi.org/10.24963/ijcai.2018/395
Large-scale multi-label learning annotates relevant labels for unseen instances from a huge set of candidate labels. It is well known that in large-scale multi-label learning, labels follow a long-tailed distribution in which a significant fraction of labels are tail labels. Nonetheless, how tail labels affect performance metrics in large-scale multi-label learning has not been explicitly quantified. In this paper, we show that whether labels are randomly missing or misclassified, tail labels have much less impact than common labels in terms of commonly used performance metrics (Top-$k$ precision and nDCG@$k$). Motivated by this observation, we develop a low-complexity large-scale multi-label learning algorithm that adaptively trims tail labels, with the goal of enabling fast prediction and compact models. Experiments clearly verify that both prediction time and model size are significantly reduced for state-of-the-art approaches without sacrificing much predictive performance.
Keywords:
Machine Learning: Multi-instance; Multi-label; Multi-view learning
Machine Learning Applications: Big data; Scalability
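The two ranking metrics named in the abstract, Top-$k$ precision and nDCG@$k$, have standard definitions that can be sketched briefly. The snippet below is a minimal illustration (function names and the toy data are our own, not from the paper): Top-$k$ precision is the fraction of the $k$ highest-scoring labels that are truly relevant, and nDCG@$k$ discounts relevant labels by their rank with a $1/\log_2(\text{rank}+1)$ factor, normalized by the ideal ranking.

```python
import numpy as np

def precision_at_k(scores, labels, k):
    """Top-k precision: fraction of the k highest-scoring labels
    that are truly relevant (labels is a 0/1 indicator vector)."""
    topk = np.argsort(scores)[::-1][:k]
    return labels[topk].sum() / k

def ndcg_at_k(scores, labels, k):
    """nDCG@k with the standard 1/log2(rank + 1) discount,
    normalized by the DCG of an ideal ranking."""
    topk = np.argsort(scores)[::-1][:k]
    # Discounted gains at ranks 1..k (log2 of 2..k+1).
    gains = labels[topk] / np.log2(np.arange(2, k + 2))
    # Ideal DCG: all relevant labels (at most k of them) at the top.
    n_rel = int(min(labels.sum(), k))
    ideal = (1.0 / np.log2(np.arange(2, n_rel + 2))).sum()
    return gains.sum() / ideal if ideal > 0 else 0.0

# Toy example: 6 candidate labels, 2 of which are relevant.
labels = np.array([1, 0, 0, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.1, 0.7, 0.3, 0.2])
p3 = precision_at_k(scores, labels, 3)  # top-3 labels are {0, 1, 3} -> 2/3
n3 = ndcg_at_k(scores, labels, 3)
```

Under these definitions, a tail label that is rarely relevant contributes little to either metric when it is dropped from the top-$k$ ranking, which is consistent with the paper's motivation for trimming tail labels.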