Understanding How Feature Structure Transfers in Transfer Learning

Tongliang Liu, Qiang Yang, Dacheng Tao

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2365-2371. https://doi.org/10.24963/ijcai.2017/329

Transfer learning transfers knowledge across domains to improve learning performance. Because feature structures generally represent knowledge common to different domains, they can be transferred successfully even when the labeling functions across domains differ arbitrarily. However, theoretical justification for this success has remained elusive. In this paper, motivated by self-taught learning, we regard a set of bases as a feature structure of a domain if the bases can (approximately) reconstruct any observation in that domain. We propose a general analysis scheme to theoretically justify that if the source and target domains share similar feature structures, the source-domain feature structure is transferable to the target domain, regardless of how the labeling functions change across domains. The transferred structure is interpreted as a regularization matrix that benefits the learning process of the target-domain task. We prove that such transfer enables the corresponding learning algorithms to be uniformly stable. Specifically, we illustrate the existence of feature structure transfer in two well-known transfer learning settings: domain adaptation and learning to learn.
Keywords:
Machine Learning: Learning Theory
Machine Learning: Transfer, Adaptation, Multi-task Learning
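
The dictionary-based view sketched in the abstract (bases that approximately reconstruct observations, reused to constrain the target-domain learner) can be illustrated with a minimal self-taught-learning-style example. The sketch below is an assumption-laden illustration, not the paper's algorithm or analysis: it learns a dictionary on synthetic stand-in source data with scikit-learn's DictionaryLearning, encodes target observations with the learned bases, and fits an L2-regularized classifier on the resulting codes as a rough stand-in for the regularization-matrix interpretation.

```python
# Hypothetical sketch of feature-structure transfer in the self-taught-learning
# spirit: bases learned on unlabeled source data are reused to represent
# (and thereby regularize) the target-domain task. All data here is synthetic.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

X_source = rng.randn(500, 20)              # unlabeled source-domain observations
X_target = rng.randn(60, 20)               # small labeled target-domain sample
y_target = (X_target[:, 0] > 0).astype(int)

# "Feature structure": a set of bases that (approximately) reconstructs
# observations in the domain.
dico = DictionaryLearning(n_components=30, alpha=1.0, max_iter=200,
                          transform_algorithm="lasso_lars", random_state=0)
dico.fit(X_source)

# Transfer: encode target observations with the source-learned bases and learn
# the target task on these codes; the L2 penalty of the classifier plays the
# role of the regularization induced by the shared structure.
codes_target = dico.transform(X_target)
clf = LogisticRegression(C=1.0, max_iter=1000).fit(codes_target, y_target)
print("target training accuracy:", clf.score(codes_target, y_target))
```

The design choice of fitting the dictionary only on source data and freezing it for the target task mirrors the abstract's claim that the feature structure, not the labeling function, is what transfers across domains.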