Cost Ensemble with Gradient Selecting for GANs
Minghui Liu, Jiali Deng, Meiyi Yang, Xuan Cheng, Nianbo Liu, Ming Liu, Xiaomin Wang
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1194-1200.
https://doi.org/10.24963/ijcai.2022/167
Generative Adversarial Networks (GANs) are powerful generative models on numerous tasks and datasets, but they are also known for training instability and mode collapse. The latter arises because the optimal transportation map is discontinuous, whereas DNNs can only approximate continuous maps. One way to address the problem is to introduce multiple discriminators or generators. However, their impact is limited because every component uses the same cost function; that is, they are homogeneous. In contrast, multiple discriminators with different cost functions can yield diverse gradients for the generator, which suggests they can be used to search for more transportation maps in the latent space. Inspired by this, we propose a framework for combating the mode collapse problem, named CES-GAN, which contains multiple discriminators with different cost functions. Unfortunately, this may also make the generator hard to train, because the performance of the discriminators is unbalanced, in line with the Cannikin Law. Thus, a gradient selecting mechanism is also proposed to pick proper gradients. We provide mathematical statements to prove our assumptions and conduct extensive experiments to verify the performance. The results show that CES-GAN is lightweight and more effective at fighting the mode collapse problem than similar works.
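To make the idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a generator updated against several discriminators that use different cost functions, followed by a gradient-selecting step. The network shapes and the selection rule (keeping the gradient from the discriminator whose generator loss is the median) are hypothetical placeholders; the paper's actual architecture, cost functions, and selection criterion may differ.

```python
# Sketch: heterogeneous discriminators (BCE, hinge, least-squares generator costs)
# plus a simple gradient-selecting step for the generator update.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, x_dim = 64, 2

G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
# Three discriminators sharing one architecture; each would be trained
# separately with its own cost function (omitted here for brevity).
Ds = nn.ModuleList(
    nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    for _ in range(3)
)

def generator_costs(fake_logits):
    """One generator-side cost per discriminator: non-saturating BCE, hinge, least-squares."""
    bce = F.binary_cross_entropy_with_logits(
        fake_logits[0], torch.ones_like(fake_logits[0]))
    hinge = -fake_logits[1].mean()
    lsq = ((fake_logits[2] - 1.0) ** 2).mean()
    return [bce, hinge, lsq]

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def generator_step(batch_size=32):
    z = torch.randn(batch_size, z_dim)
    fake = G(z)
    losses = generator_costs([D(fake) for D in Ds])

    # One gradient of the generator parameters per cost function.
    grads = []
    for loss in losses:
        g = torch.autograd.grad(loss, G.parameters(), retain_graph=True)
        grads.append([t.detach() for t in g])

    # Hypothetical selection rule: keep the gradient from the median loss,
    # so neither the weakest nor the strongest discriminator dominates.
    order = sorted(range(len(losses)), key=lambda i: losses[i].item())
    chosen = grads[order[len(order) // 2]]

    opt_g.zero_grad()
    for p, g in zip(G.parameters(), chosen):
        p.grad = g.clone()
    opt_g.step()
    return [l.item() for l in losses]
```

Because each cost function shapes the generator's gradient differently, selecting among them at every step is what distinguishes this setup from homogeneous multi-discriminator GANs, where averaging identical-form losses adds little diversity.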
Keywords:
Computer Vision: Neural generative models, auto encoders, GANs
Computer Vision: Adversarial learning, adversarial attack and defense methods