Quantifying Consistency and Information Loss for Causal Abstraction Learning
Fabio Massimo Zennaro, Paolo Turrini, Theodoros Damoulas
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 5750-5757.
https://doi.org/10.24963/ijcai.2023/638
Structural causal models provide a formalism to express causal relations between variables of interest. Models and variables can represent a system at different levels of abstraction, whereby relations may be coarsened and refined according to the needs of the modeller.
However, switching between different levels of abstraction requires evaluating a trade-off between consistency and information loss across models.
In this paper we introduce a family of interventional measures that an agent may use to evaluate such a trade-off. We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions. Finally, we illustrate the flexibility of our setup by empirically showing how different measures and algorithmic choices may lead to different abstractions.
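To make the consistency/information-loss trade-off concrete, below is a minimal sketch in Python, not the paper's actual measures or algorithms. It assumes a hypothetical smoking-and-cancer toy example: a base SCM with a three-valued smoking variable is abstracted to a coarser binary model via a map tau, and the abstraction is scored by the worst-case total variation distance between the base model's interventional distributions (pushed through tau) and the abstract model's. All names here (tau_s, interventional_error, the probability tables) are illustrative assumptions.

```python
# Illustrative sketch only: compares a base SCM against a coarser abstract SCM
# under interventions. Not the paper's actual measures; names are hypothetical.
import numpy as np

# Base model: S (smoking: 0=none, 1=light, 2=heavy) -> C (cancer: 0/1).
p_c_given_s = np.array([[0.9, 0.1],   # P(C | S=0)
                        [0.7, 0.3],   # P(C | S=1)
                        [0.5, 0.5]])  # P(C | S=2)

# Abstract model: S' (smoker: 0/1) -> C' (cancer: 0/1).
p_cp_given_sp = np.array([[0.9, 0.1],   # P(C' | S'=0)
                          [0.6, 0.4]])  # P(C' | S'=1)

# Abstraction map tau on S: collapse light/heavy smoking into "smoker".
tau_s = np.array([0, 1, 1])  # tau(S) = S'

def base_interventional(s_value):
    """P(C | do(S = s_value)) in the base model."""
    return p_c_given_s[s_value]

def abstract_interventional(sp_value):
    """P(C' | do(S' = sp_value)) in the abstract model."""
    return p_cp_given_sp[sp_value]

def interventional_error():
    """Worst-case total variation distance between base-level interventional
    distributions and the abstract model's, over all interventions do(S=s).
    The outcome variable is shared by both models, so no map is needed on C."""
    errors = []
    for s in range(len(tau_s)):
        pushed = base_interventional(s)
        abstracted = abstract_interventional(tau_s[s])
        errors.append(0.5 * np.abs(pushed - abstracted).sum())
    return max(errors)

print(f"abstraction error: {interventional_error():.3f}")
# do(S=0) maps to do(S'=0): TV = 0.0
# do(S=1), do(S=2) both map to do(S'=1): TV = 0.1 each, so the error is 0.1
```

Collapsing light and heavy smoking loses information (the two regimes have different cancer risks), and the nonzero error quantifies the resulting interventional inconsistency; the measures studied in the paper formalize and generalize this kind of evaluation.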
Keywords:
Uncertainty in AI: Causality, structural causal models and causal inference