Unpaired Multi-Domain Image Generation via Regularized Conditional GANs
Xudong Mao, Qing Li
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2553-2559.
https://doi.org/10.24963/ijcai.2018/354
In this paper, we study the problem of multi-domain image generation, whose goal is to generate pairs of corresponding images from different domains. With recent advances in generative models, image generation has made great progress and has been applied to various computer vision tasks. However, multi-domain image generation remains difficult because the model must learn the correspondence between images of different domains, especially when no paired samples are given. To tackle this problem, we propose the Regularized Conditional GAN (RegCGAN), which is capable of learning to generate corresponding images in the absence of paired training data. RegCGAN is based on the conditional GAN, and we introduce two regularizers that guide the model to learn the corresponding semantics of different domains. We evaluate the proposed model on several tasks for which paired training data are not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model successfully generates corresponding images for all these tasks and outperforms the baseline methods. We also introduce an approach to applying RegCGAN to unsupervised domain adaptation.
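To make the core idea concrete, the following is a minimal toy sketch of a conditional generator with a correspondence regularizer: images generated from the same noise vector under different domain labels are penalized for diverging, which nudges the two outputs toward shared content. All names here (`generator`, `correspondence_regularizer`, the linear layer, the squared-difference penalty) are illustrative assumptions, not the paper's actual architecture or exact regularizer definitions.

```python
import numpy as np

rng = np.random.default_rng(0)


def generator(z, domain_label, W, B):
    # Toy linear "generator": shared weights W plus a per-domain bias B[d].
    # (Illustrative stand-in for the paper's deep conditional generator.)
    return np.tanh(z @ W + B[domain_label])


def correspondence_regularizer(z, W, B):
    # Penalize differences between samples generated from the SAME noise z
    # under the two domain labels, encouraging corresponding semantics.
    # (A hedged sketch of the regularization idea, not the paper's exact form.)
    x0 = generator(z, 0, W, B)
    x1 = generator(z, 1, W, B)
    return np.mean((x0 - x1) ** 2)


# Toy dimensions: a batch of 4 noise vectors, 16-dim "images", 2 domains.
z = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16)) * 0.1
B = rng.standard_normal((2, 16)) * 0.1

reg = correspondence_regularizer(z, W, B)
```

In training, a term like `reg` would be weighted and added to the conditional GAN's adversarial loss, so the generator is pulled toward producing pairs that differ only in domain-specific style.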
Keywords:
Machine Learning: Unsupervised Learning
Machine Learning: Deep Learning
Computer Vision: Computer Vision