Tag Disentangled Generative Adversarial Network for Object Image Re-rendering
Chaoyue Wang, Chaohui Wang, Chang Xu, Dacheng Tao
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2901-2907.
https://doi.org/10.24963/ijcai.2017/404
In this paper, we propose a principled Tag Disentangled Generative Adversarial Network (TD-GAN) for re-rendering new images of an object of interest from a single input image by specifying multiple scene properties (such as viewpoint, illumination, and expression). The framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly on a set of images that are completely or partially tagged (i.e., a supervised or semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which the generative network then uses to generate images. To improve the quality of the disentangled representations, the tag mapping net is integrated to enforce consistency between an image and its tags. Furthermore, the discriminative network implements the adversarial training strategy, yielding more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework on this task.
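The abstract describes four jointly trained components and three kinds of training signal (reconstruction, image/tag consistency, and an adversarial term). The following is a minimal numpy sketch of how those pieces fit together; the linear maps, dimensions, and loss weights are illustrative stand-ins, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
IMG, REP, TAG = 64, 16, 10

# Randomly initialised linear maps as stand-ins for the four components
W_dis = rng.normal(size=(REP, IMG)) * 0.1   # disentangling network
W_gen = rng.normal(size=(IMG, REP)) * 0.1   # generative network
W_map = rng.normal(size=(REP, TAG)) * 0.1   # tag mapping net
w_adv = rng.normal(size=IMG) * 0.1          # discriminative network (linear score)

x = rng.normal(size=IMG)                         # input image (flattened)
t = rng.integers(0, 2, size=TAG).astype(float)   # multi-hot tag vector

r_img = W_dis @ x      # disentangled representation extracted from the image
r_tag = W_map @ t      # representation predicted from the tags
x_rec = W_gen @ r_tag  # image re-rendered from the tag-side representation

# Three of the joint training terms suggested by the abstract:
loss_recon = np.mean((x_rec - x) ** 2)       # image reconstruction
loss_consist = np.mean((r_img - r_tag) ** 2) # image/tag consistency
score_fake = 1.0 / (1.0 + np.exp(-(w_adv @ x_rec)))  # discriminator output in (0, 1)
loss_adv = -np.log(score_fake + 1e-8)        # generator's adversarial term

total_loss = loss_recon + loss_consist + loss_adv
```

Re-rendering with modified scene properties would then amount to editing the tag vector `t` (e.g. changing the viewpoint tag) and regenerating `x_rec` from the updated `r_tag`.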
Keywords:
Machine Learning: Neural Networks
Machine Learning: Deep Learning