Fine-tuning Pre-trained Models for Robustness under Noisy Labels
Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3643-3651.
https://doi.org/10.24963/ijcai.2024/403
The presence of noisy labels in a training dataset can significantly degrade the performance of machine learning models. In response, researchers have focused on identifying clean samples and reducing the influence of noisy labels. Recent works in this field have achieved notable generalization performance, albeit at the expense of extensive computing resources, so reducing computational cost remains a crucial challenge. Concurrently, other research areas have developed fine-tuning techniques that achieve high generalization performance efficiently. Despite their demonstrated ability to generalize well at low cost, these techniques have seen limited exploration from a label-noise perspective. In this work, we aim to find an effective approach to fine-tuning pre-trained models on noisily labeled datasets. To this end, we empirically investigate how pre-trained models behave under label noise and propose an algorithm named TURN. We present the results of extensive experiments, demonstrating both efficient and improved denoising performance on various benchmarks, surpassing previous methods.
Keywords:
Machine Learning: ML: Robustness
AI Ethics, Trust, Fairness: ETF: Safety and robustness
Machine Learning: ML: Trustworthy machine learning