Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms
Aneesh Komanduri, Yongkai Wu, Feng Chen, Xintao Wu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4308-4316.
https://doi.org/10.24963/ijcai.2024/476
Learning disentangled causal representations is a challenging problem that has recently gained significant attention due to its potential for extracting meaningful information for downstream tasks. In this work, we define a new notion of causal disentanglement from the perspective of independent causal mechanisms. We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels. We model causal mechanisms using nonlinear learnable flow-based diffeomorphic functions that map noise variables to latent causal variables. Further, to promote the disentanglement of causal factors, we propose a causal disentanglement prior learned from auxiliary labels and the latent causal structure. We theoretically show the identifiability of causal factors and mechanisms up to permutation and elementwise reparameterization. We empirically demonstrate that our framework induces highly disentangled causal factors, improves interventional robustness, and supports counterfactual generation.
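The abstract describes mapping exogenous noise variables to latent causal variables through learnable invertible mechanisms that respect a causal structure. The following is a minimal sketch of that idea, not the authors' implementation: it assumes PyTorch, uses a simple conditional affine map per variable as the invertible mechanism (the paper uses flow-based diffeomorphic functions), and all names (CausalMechanism, noise_to_causal, adjacency) are illustrative.

```python
import torch
import torch.nn as nn


class CausalMechanism(nn.Module):
    """Invertible (conditional affine) map from noise eps_i to causal latent z_i."""

    def __init__(self, num_parents: int, hidden: int = 16):
        super().__init__()
        # The conditioner predicts a log-scale and shift from the parents of z_i.
        self.conditioner = nn.Sequential(
            nn.Linear(max(num_parents, 1), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, eps_i, parents):
        log_scale, shift = self.conditioner(parents).chunk(2, dim=-1)
        return eps_i * torch.exp(log_scale) + shift  # invertible in eps_i


def noise_to_causal(eps, adjacency, mechanisms):
    """Map noise eps (batch, d) to causal latents z following the causal order.

    adjacency[i, j] = 1 means z_j is a parent of z_i; variables are assumed
    to already be listed in topological order.
    """
    z_cols = []
    for i, mech in enumerate(mechanisms):
        parent_idx = adjacency[i].nonzero(as_tuple=True)[0].tolist()
        if parent_idx:
            parents = torch.cat([z_cols[j] for j in parent_idx], dim=-1)
        else:
            parents = torch.zeros(eps.shape[0], 1)  # root node: dummy input
        z_cols.append(mech(eps[:, i:i + 1], parents))
    return torch.cat(z_cols, dim=-1)


# Toy usage: three causal variables with chain structure z0 -> z1 -> z2.
adjacency = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
mechanisms = nn.ModuleList(
    [CausalMechanism(int(adjacency[i].sum())) for i in range(3)]
)
eps = torch.randn(8, 3)              # exogenous noise (e.g., from an encoder)
z = noise_to_causal(eps, adjacency, mechanisms)
print(z.shape)                       # torch.Size([8, 3])
```

Because each mechanism is invertible in its noise input given the parents, interventions and counterfactuals can in principle be simulated by replacing a variable's value and re-propagating its descendants; the actual ICM-VAE training objective and disentanglement prior are described in the paper.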
Keywords:
Machine Learning: ML: Representation learning
Machine Learning: ML: Causality
Machine Learning: ML: Generative models