Hacking Task Confounder in Meta-Learning
Jingyao Wang, Yi Ren, Zeen Song, Jianqi Zhang, Changwen Zheng, Wenwen Qiang
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5064-5072.
https://doi.org/10.24963/ijcai.2024/560
Meta-learning enables rapid generalization to new tasks by learning knowledge from various tasks. It is intuitively assumed that, as training progresses, a model acquires richer knowledge, leading to better generalization performance. However, our experiments reveal an unexpected result: there is negative knowledge transfer between tasks, which harms generalization performance. To explain this phenomenon, we construct Structural Causal Models (SCMs) for causal analysis. Our investigation uncovers the presence of spurious correlations between task-specific causal factors and labels in meta-learning. Furthermore, the confounding factors differ across batches. We refer to these confounding factors as "Task Confounders". Based on these findings, we propose a plug-and-play Meta-learning Causal Representation Learner (MetaCRL) to eliminate task confounders. It encodes decoupled generating factors from multiple tasks and utilizes an invariant-based bi-level optimization mechanism to ensure their causality for meta-learning. Extensive experiments on various benchmark datasets demonstrate that our work achieves state-of-the-art (SOTA) performance. The code is available at https://github.com/WangJingyao07/MetaCRL.
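For readers unfamiliar with the bi-level structure the abstract refers to, the sketch below shows a generic inner/outer (bi-level) meta-learning loop in the style of first-order MAML on toy 1-D linear-regression tasks. It is not the authors' MetaCRL; the task distribution, learning rates, and loss are illustrative assumptions chosen only to make the inner-adaptation / outer-update pattern concrete.

```python
import numpy as np

# Minimal bi-level meta-learning sketch (first-order MAML style).
# Assumptions (not from the paper): tasks are y = w_true * x with a
# task-specific slope; the model is a single scalar weight w.
rng = np.random.default_rng(0)

def sample_task():
    """Sample a task; returns a data generator for that task."""
    w_true = rng.uniform(-2.0, 2.0)
    def data(n):
        x = rng.normal(size=n)
        return x, w_true * x
    return data

def loss_grad(w, x, y):
    """Squared error and its gradient for the model y_hat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w_meta, inner_lr, outer_lr = 0.0, 0.1, 0.05
for step in range(2000):
    data = sample_task()
    x_s, y_s = data(8)   # support set (inner level)
    x_q, y_q = data(8)   # query set (outer level)
    # Inner level: one gradient step adapts w_meta to the task.
    _, g_s = loss_grad(w_meta, x_s, y_s)
    w_task = w_meta - inner_lr * g_s
    # Outer level: update the meta-parameters on the query loss,
    # using the first-order approximation to the meta-gradient.
    _, g_q = loss_grad(w_task, x_q, y_q)
    w_meta -= outer_lr * g_q
```

MetaCRL plugs into this kind of loop as an additional module: the paper's invariant-based bi-level mechanism constrains the learned generating factors at both levels, whereas the sketch above optimizes only the task loss.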
Keywords:
Machine Learning: ML: Meta-learning
Computer Vision: CV: Transfer, low-shot, semi- and un- supervised learning
Machine Learning: ML: Causality
Machine Learning: ML: Few-shot learning