Can We Find Neurons that Cause Unrealistic Images in Deep Generative Networks?

Hwanil Choi, Wonjoon Chang, Jaesik Choi

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2888-2894. https://doi.org/10.24963/ijcai.2022/400

Even though Generative Adversarial Networks (GANs) have shown a remarkable ability to generate high-quality images, GANs do not always guarantee photorealistic results. Occasionally, they generate images containing defective or unnatural objects, referred to as 'artifacts'. Why these artifacts emerge, and how they can be detected and removed, has not yet been sufficiently investigated. To analyze this, we first hypothesize that rarely activated neurons and frequently activated neurons play different roles in the image-generation process. By analyzing the statistics and the roles of these neurons, we empirically show that rarely activated neurons are related to failures in generating diverse objects and to the induction of artifacts. In addition, we propose a correction method, called 'Sequential Ablation', which repairs the defective parts of generated images without high computational cost or manual effort.
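The core idea of identifying rarely activated units and zeroing them out can be illustrated with a minimal sketch. All shapes, the activation threshold, and the number of ablated units below are illustrative assumptions, not the authors' actual settings; in the paper the analysis is performed on intermediate feature maps of a GAN generator.

```python
import numpy as np

# Illustrative stand-in for generator activations: rows are generated
# samples, columns are units in one intermediate layer (shapes assumed).
rng = np.random.default_rng(0)
acts = rng.random((1000, 64))          # (samples, units), values in [0, 1]
acts[:, :5] *= 0.05                    # make a few units rarely activate

threshold = 0.5                        # assumed activation threshold
freq = (acts > threshold).mean(axis=0) # per-unit activation frequency
rare = np.argsort(freq)[:5]            # lowest-frequency (rarely activated) units

def ablate(feature_map, units):
    """Zero out the given units -- a crude stand-in for sequential ablation."""
    out = feature_map.copy()
    out[..., units] = 0.0
    return out

repaired = ablate(acts, rare)
```

In the paper's setting, ablation is applied sequentially during generation, and the image is re-synthesized after each ablation to check whether the artifact has been repaired; the sketch above only shows the selection-and-zeroing step.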
Keywords:
Machine Learning: Generative Adversarial Networks
Machine Learning: Explainable/Interpretable Machine Learning