No Regularization Is Needed: Efficient and Effective Incomplete Label Distribution Learning

Xiang Li, Songcan Chen

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4470-4478. https://doi.org/10.24963/ijcai.2024/494

In reality, it is laborious to obtain complete label degrees, which motivates Incomplete Label Distribution Learning (InLDL), where some degrees are missing. Existing InLDL methods often assume that degrees are missing uniformly at random. However, this is often not the case in practice, which raises the first issue. Besides, they often adopt explicit regularization to compensate for the incompleteness, leading to burdensome parameter tuning and extra computation, which constitutes the second issue. To address the first issue, we adopt a more practical setting, i.e., small degrees are more prone to be missing, since large degrees are likely to catch more attention. To tackle the second issue, we argue that the label distribution itself already contains abundant knowledge, such as label correlation and ranking order, and thus may already provide a sufficient prior for learning. It is precisely because existing methods overlook this prior that they are forced to adopt explicit regularization. By directly utilizing the label degree prior, we design a properly weighted objective function, eliminating the need for explicit regularization. Moreover, we provide rigorous theoretical analysis, revealing in principle that the weighting plays an implicit regularization role. To sum up, our method has four advantages: it 1) is model-selection free; 2) has a closed-form solution for each sub-problem and is easy to implement (a few lines of code); 3) has linear computational complexity in the number of samples, and is thus scalable to large datasets; 4) is competitive with the state of the art in both random and non-random missing scenarios.
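The abstract notes that the degree-weighted objective admits a closed-form sub-problem solution implementable in a few lines of code. Below is a minimal sketch of what such a weighting scheme could look like for a linear model; the function name, the choice of weights (reusing the observed degrees), and the mask convention are all illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def inldl_weighted_fit(X, D, M, eps=1e-3):
    """Hypothetical sketch of a degree-weighted least-squares fit.

    X : (n, d) feature matrix.
    D : (n, c) label degree matrix (missing entries arbitrary).
    M : (n, c) mask, 1 where a degree is observed, 0 where missing.

    The weights simply reuse the observed degrees (floored by eps),
    echoing the idea that weighting by the degrees themselves can act
    as an implicit regularizer -- no explicit penalty term is added.
    """
    n, d = X.shape
    c = D.shape[1]
    Q = M * np.maximum(D, eps)           # illustrative weight matrix
    W = np.zeros((d, c))
    for j in range(c):                   # one closed-form sub-problem per label
        s = np.sqrt(Q[:, j])[:, None]    # row scaling realizes the weights
        W[:, j] = np.linalg.lstsq(s * X, s.ravel() * D[:, j], rcond=None)[0]
    return W

# Usage sketch: predict degrees for new samples as X_new @ W
```

Note that each per-label sub-problem is an ordinary weighted least-squares solve, linear in the number of samples, consistent with advantage 3 above.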
Keywords:
Machine Learning: ML: Multi-label learning
Machine Learning: ML: Weakly supervised learning