Multi-Label Co-Training
Yuying Xing, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Zili Zhang
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2882-2888.
https://doi.org/10.24963/ijcai.2018/400
Multi-label learning aims at assigning a set of appropriate labels to each sample. Although it has been successfully applied in various domains in recent years, most multi-label learning methods require a large amount of labeled training data, owing to the large number of possible label sets. Co-training, an important branch of semi-supervised learning, can leverage abundant unlabeled samples along with scarce labeled ones, and thus can potentially reduce this demand for labeled data. However, combining multi-label learning with co-training is challenging. Two distinct issues are associated with this challenge: (i) how to solve the widely witnessed class-imbalance problem in multi-label learning; and (ii) how to select samples with confidence and communicate their predicted labels among classifiers for model refinement. To address these issues, we introduce an approach called Multi-Label Co-Training (MLCT). MLCT leverages information about the co-occurrence of pairwise labels to address the class-imbalance challenge; it introduces a predictive reliability measure to select samples, and applies label-wise filtering to confidently communicate the labels of selected samples among the co-training classifiers. MLCT performs favorably against related competitive multi-label learning methods on benchmark datasets, and it is also robust to its input parameters.
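The abstract only names MLCT's components (label co-occurrence statistics for class imbalance, a predictive reliability measure, and label-wise filtering); their definitions are in the full paper. For orientation, below is a minimal, generic two-view multi-label co-training sketch in Python with scikit-learn. It is not the authors' MLCT: a simple per-label probability cutoff (THRESHOLD, a hypothetical parameter) stands in for the reliability measure and label-wise filtering, no class-imbalance handling is included, and all data and names are synthetic and illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Toy two-view multi-label data: both views carry a shared latent signal.
n, n_labeled, n_labels = 260, 60, 3
Z = rng.normal(size=(n, 6))                                        # shared signal
X1 = np.hstack([Z, rng.normal(size=(n, 4))])                       # view 1
X2 = Z @ rng.normal(size=(6, 8)) + 0.3 * rng.normal(size=(n, 8))   # view 2
Y = (Z @ rng.normal(size=(6, n_labels)) > 0.3).astype(int)         # ground truth

labeled = np.arange(n_labeled)        # indices with known labels
unlabeled = np.arange(n_labeled, n)   # pool to pseudo-label
Y_known = Y.copy()                    # rows outside `labeled` are treated as unknown

def fit_view(X, idx):
    """One-vs-rest logistic regression per label on one feature view."""
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X[idx], Y_known[idx])
    return clf

THRESHOLD = 0.85  # hypothetical confidence cutoff; a crude stand-in for
                  # MLCT's predictive reliability measure and label-wise filtering

for _ in range(5):  # co-training rounds
    clf1, clf2 = fit_view(X1, labeled), fit_view(X2, labeled)
    if unlabeled.size == 0:
        break
    P1 = clf1.predict_proba(X1[unlabeled])  # per-label probabilities, view 1
    P2 = clf2.predict_proba(X2[unlabeled])  # per-label probabilities, view 2
    # A sample counts as confident if even its least certain label is decisive.
    pick1 = np.where(np.maximum(P1, 1 - P1).min(axis=1) >= THRESHOLD)[0]
    pick2 = np.where(np.maximum(P2, 1 - P2).min(axis=1) >= THRESHOLD)[0]
    # Each view teaches the shared pool its confident predictions
    # (overlapping picks simply keep the second view's labels here).
    for picks, P in ((pick1, P1), (pick2, P2)):
        Y_known[unlabeled[picks]] = (P[picks] >= 0.5).astype(int)
    newly = np.union1d(pick1, pick2)
    if newly.size == 0:
        break
    labeled = np.concatenate([labeled, unlabeled[newly]])
    unlabeled = np.delete(unlabeled, newly)

print(f"labeled pool: {labeled.size} samples, {unlabeled.size} still unlabeled")
```

The loop mirrors classic co-training: two classifiers trained on disjoint feature views label a shared unlabeled pool, and confidently pseudo-labeled samples are moved into the training set for the next round. MLCT's contributions, per the abstract, lie precisely in replacing the naive threshold here with a principled reliability measure and per-label filtering, and in countering label imbalance via pairwise label co-occurrence.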
Keywords:
Machine Learning: Classification
Machine Learning: Semi-Supervised Learning
Machine Learning: Multi-instance; Multi-label; Multi-view Learning