Deep Multi-Task Learning with Adversarial-and-Cooperative Nets
Pei Yang, Qi Tan, Jieping Ye, Hanghang Tong, Jingrui He
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4078-4084.
https://doi.org/10.24963/ijcai.2019/566
In this paper, we propose a deep multi-Task learning model based on Adversarial-and-COoperative nets (TACO). The goal is to use an adversarial-and-cooperative strategy to decouple task-common and task-specific knowledge, facilitating fine-grained knowledge sharing among tasks. TACO accommodates multiple game players, i.e., feature extractors, a domain discriminator, and tri-classifiers. These players play minimax games adversarially and cooperatively to distill the task-common and task-specific features while respecting their discriminative structures. Moreover, TACO adopts a divide-and-combine strategy that leverages the decoupled multi-view information to further improve the generalization performance of the model. The experimental results show that our proposed method significantly outperforms state-of-the-art algorithms on benchmark datasets in both multi-task learning and semi-supervised domain adaptation scenarios.
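The adversarial side of such a minimax game is commonly realized with gradient reversal: the domain discriminator descends its loss to tell tasks apart, while the shared feature extractor ascends that same loss so the extracted features become task-invariant. The following is a minimal NumPy sketch of this mechanism on toy data, not the authors' implementation; the linear extractor `W`, discriminator `(w, b)`, and learning rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "tasks" whose inputs differ by a shift along the first axis.
n = 100
X = np.vstack([rng.normal(0, 1, (n, 2)) + [ 2.0, 0.0],
               rng.normal(0, 1, (n, 2)) + [-2.0, 0.0]])
t = np.concatenate([np.zeros(n), np.ones(n)])   # task / domain label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def disc_loss(W, w, b):
    """Binary cross-entropy of the domain discriminator on shared features."""
    p = sigmoid(X @ W @ w + b)
    eps = 1e-12
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

W = np.eye(2)              # shared (task-common) feature extractor, linear toy
w = rng.normal(0, 0.1, 2)  # domain discriminator weights
b = 0.0

# 1) Discriminator step: fit (w, b) to tell the two tasks apart (minimize BCE).
for _ in range(300):
    p = sigmoid(X @ W @ w + b)
    err = (p - t) / len(X)          # gradient of BCE w.r.t. the logits
    w -= 1.0 * ((X @ W).T @ err)
    b -= 1.0 * err.sum()

loss_before = disc_loss(W, w, b)

# 2) Extractor step with the REVERSED gradient: move W to *increase* the
#    discriminator's loss, pushing shared features toward task-invariance.
p = sigmoid(X @ W @ w + b)
err = (p - t) / len(X)
gW = X.T @ (err[:, None] * w[None, :])   # dL/dW via the chain rule
W += 0.5 * gW                            # '+' instead of '-': gradient reversal

loss_after = disc_loss(W, w, b)
print(loss_after > loss_before)          # the adversarial step hurt the discriminator
```

Because the discriminator's loss is convex in `W` (an affine map fed into binary cross-entropy), a single ascent step along the nonzero gradient is guaranteed to raise it, which is the "adversarial" half of the game; in TACO this is interleaved with cooperative updates of the task classifiers, which the sketch omits.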
Keywords:
Machine Learning: Transfer, Adaptation, Multi-task Learning
Machine Learning: Adversarial Machine Learning