Machine Unlearning via Null Space Calibration

Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 358-366. https://doi.org/10.24963/ijcai.2024/40

Machine unlearning aims to enable models to forget specific data instances upon receiving deletion requests. Current research centers on efficiently erasing the influence of the target data from the model while neglecting the subsequent impact on the remaining data. Consequently, existing unlearning algorithms degrade the model's performance after unlearning, a problem known as over-unlearning. This paper addresses this critical yet under-explored issue by introducing machine Unlearning via Null Space Calibration (UNSC), which accurately unlearns target samples without over-unlearning. Quite the opposite: by calibrating the decision space during unlearning, UNSC can significantly improve the model's performance on the remaining samples. In particular, our approach hinges on confining the unlearning process to a specified null space tailored to the remaining samples, augmented by strategically pseudo-labeling the unlearning samples. Comparisons against several established baselines affirm the superiority of our approach.
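
The abstract only sketches the mechanism, so the toy PyTorch snippet below illustrates one plausible reading of it: a null space is estimated from the SVD of remaining-sample activations, forget samples are pseudo-labeled with a class other than their true one, and gradient updates on the forget set are projected into that null space so the remaining data's representations are preserved. The function names (null_space_projector, pseudo_label, unlearn_step), the SVD-based estimate, and the single-layer toy model are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of null-space-calibrated unlearning (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def null_space_projector(activations: torch.Tensor, energy: float = 0.99) -> torch.Tensor:
    """Projector removing the principal directions of the remaining-data activations.

    `activations` has shape (num_samples, feature_dim); directions carrying
    `energy` of the variance of the remaining data are excluded from updates.
    """
    _, s, vt = torch.linalg.svd(activations, full_matrices=False)
    cum = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    k = int(torch.searchsorted(cum, torch.tensor(energy)).item()) + 1
    v_k = vt[:k].T                        # principal subspace of the remaining data
    dim = activations.shape[1]
    return torch.eye(dim) - v_k @ v_k.T   # projector onto its approximate null space


def pseudo_label(model: nn.Module, x_forget: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
    """Assign each forget sample its most confident class other than the true one."""
    with torch.no_grad():
        logits = model(x_forget)
        logits.scatter_(1, y_true.unsqueeze(1), float("-inf"))
        return logits.argmax(dim=1)


def unlearn_step(layer: nn.Linear, proj: torch.Tensor,
                 x_forget: torch.Tensor, y_pseudo: torch.Tensor, lr: float = 1e-2) -> None:
    """One projected-gradient update on the forget set: the raw gradient is
    multiplied by the null-space projector, leaving remaining-data outputs intact."""
    loss = F.cross_entropy(layer(x_forget), y_pseudo)
    grad_w, = torch.autograd.grad(loss, layer.weight)
    with torch.no_grad():
        layer.weight -= lr * grad_w @ proj  # confine the update to the null space


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(20, 5)            # toy single-layer classifier
    x_remain = torch.randn(200, 20)     # activations of the remaining data
    x_forget = torch.randn(16, 20)      # samples to be unlearned
    y_forget = torch.randint(0, 5, (16,))
    proj = null_space_projector(x_remain)
    y_pseudo = pseudo_label(model, x_forget, y_forget)
    for _ in range(10):
        unlearn_step(model, proj, x_forget, y_pseudo)
```

Because the projected update is (approximately) orthogonal to the span of the remaining-data activations, fine-tuning toward the pseudo-labels erases the forget samples' original predictions while the remaining samples' logits stay essentially unchanged, which is how over-unlearning is meant to be avoided in this reading.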
Keywords:
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
AI Ethics, Trust, Fairness: ETF: Accountability
AI Ethics, Trust, Fairness: ETF: Ethical, legal and societal issues
AI Ethics, Trust, Fairness: ETF: Other