WPML3CP: Wasserstein Partial Multi-Label Learning with Dual Label Correlation Perspectives
Ximing Li, Yuanchao Dai, Bing Wang, Changchun Li, Renchu Guan, Fangming Gu, Jihong Ouyang
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4479-4487.
https://doi.org/10.24963/ijcai.2024/495
Partial multi-label learning (PML) refers to a weakly-supervised classification problem in which each instance is associated with a set of candidate labels that covers its ground-truth labels but also includes irrelevant ones. The prevailing methodology of PML is to estimate the ground-truth confidences of candidate labels, i.e., the likelihood of a candidate label being a ground-truth one, and to induce the multi-label predictor from these confidences rather than from the candidate labels themselves. In this paper, we aim to estimate precise ground-truth confidences by leveraging precise label correlations, which also need to be estimated. To this end, we propose to capture label correlations from both the measuring and the modeling perspectives. Specifically, we measure the loss between ground-truth confidences and predictions by employing a Wasserstein distance that incorporates label correlations, and we form a label correlation-aware regularization that constrains the predictive parameters. The two techniques are coupled to promote precise estimation of label correlations. Building on these ideas, we propose a novel PML method, namely Wasserstein Partial Multi-Label Learning with dual Label Correlation Perspectives (WPML3CP). We conduct extensive experiments on several benchmark datasets. Empirical results demonstrate that WPML3CP outperforms the existing PML baselines.
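The sketch below is not the authors' implementation; it is a minimal illustration of the two ingredients named in the abstract, under assumed forms: (i) an entropic-regularized Wasserstein (Sinkhorn) loss between predicted label scores and ground-truth confidences, with a label-to-label cost matrix derived from label correlations, and (ii) a label correlation-aware regularizer on the predictor weights in a graph-Laplacian form. All function names, the cosine-based correlation estimate, and the toy data are illustrative assumptions.

```python
import numpy as np

def label_cost_matrix(Y):
    """Assumed cost between labels: 1 - cosine similarity of label co-occurrence columns."""
    Y = Y.astype(float)
    norms = np.linalg.norm(Y, axis=0, keepdims=True) + 1e-12
    sim = (Y / norms).T @ (Y / norms)            # q x q label correlation matrix
    np.fill_diagonal(sim, 1.0)
    return 1.0 - sim                             # correlated labels are cheap to transport between

def sinkhorn_loss(p, q, C, eps=0.1, n_iter=200):
    """Entropic-regularized Wasserstein distance between two label distributions p and q."""
    p, q = p / p.sum(), q / q.sum()
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(n_iter):                      # Sinkhorn fixed-point iterations
        v = q / (K.T @ u + 1e-12)
        u = p / (K @ v + 1e-12)
    P = np.diag(u) @ K @ np.diag(v)              # approximate optimal transport plan
    return np.sum(P * C)

def correlation_regularizer(W, corr):
    """Graph-Laplacian penalty that pulls together weight vectors of correlated labels."""
    L = np.diag(corr.sum(axis=1)) - corr
    return np.trace(W.T @ L @ W)                 # = 0.5 * sum_{j,k} corr_jk ||w_j - w_k||^2

# Toy usage: 5 labels, candidate-label confidences vs. normalized predictions.
rng = np.random.default_rng(0)
Y_cand = rng.integers(0, 2, size=(50, 5))        # candidate label matrix (instances x labels)
C = label_cost_matrix(Y_cand)
conf = np.array([0.40, 0.30, 0.20, 0.05, 0.05])  # estimated ground-truth confidences
pred = np.array([0.35, 0.25, 0.20, 0.10, 0.10])  # predictor outputs, normalized to a distribution
W = rng.normal(size=(5, 10))                     # one weight vector per label (q x d)
loss = sinkhorn_loss(conf, pred, C) + 0.01 * correlation_regularizer(W, 1.0 - C)
```

In this toy objective, the same correlation estimate feeds both terms, which mirrors the abstract's point that the measuring-side loss and the modeling-side regularizer are coupled so that each can sharpen the estimated label correlations used by the other.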
Keywords:
Machine Learning: ML: Weakly supervised learning
Machine Learning: ML: Classification
Machine Learning: ML: Self-supervised Learning