CausalNET: Unveiling Causal Structures on Event Sequences by Topology-Informed Causal Attention

Hua Zhu, Hong Huang, Kehan Yin, Zejun Fan, Hai Jin, Bang Liu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 7144-7152. https://doi.org/10.24963/ijcai.2024/790

Causal discovery on event sequences is of pivotal significance across domains such as healthcare, finance, and industrial systems. The core of this task is uncovering causal structures among event types, typically represented as directed acyclic graphs (DAGs). However, prevailing methods often rely on strong assumptions and face difficult optimization problems. To address these challenges, we present a novel model named CausalNET. At the heart of CausalNET is a prediction module based on the Transformer architecture, which predicts future events from historical occurrences; its predictive power is enhanced by a trainable causal graph designed to capture causal relationships among event types. To further strengthen the prediction module, we devise a causal decay matrix that encodes the mutual influence of events on each other within the topological network. During training, we alternately refine the prediction module and fine-tune the causal graph. Comprehensive evaluation on a range of real-world and synthetic datasets demonstrates the superior performance and scalability of CausalNET, marking a promising step forward in causal discovery. Code and Appendix are available at https://github.com/CGCL-codes/CausalNET.
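
To make the described architecture concrete, below is a minimal, hypothetical sketch of how a Transformer attention layer could be biased by a trainable causal graph among event types and a causal decay term, in the spirit of the abstract above. The class name, the parameter names (graph_logits, decay_rates), and the exact form of the attention bias are assumptions for illustration, not the paper's actual formulation; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopologyInformedCausalAttention(nn.Module):
    """Self-attention over event embeddings, biased by a trainable causal
    graph among event types and a causal decay term (illustrative form)."""

    def __init__(self, num_types: int, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Trainable logits of the causal graph among the K event types (K x K).
        self.graph_logits = nn.Parameter(torch.zeros(num_types, num_types))
        # Per-type-pair decay rates: how quickly influence fades with distance.
        self.decay_rates = nn.Parameter(torch.ones(num_types, num_types))

    def forward(self, x, types, gaps):
        # x:     (B, T, d_model) embeddings of the observed events
        # types: (B, T)          integer event-type ids
        # gaps:  (B, T, T)       pairwise temporal/topological distances
        graph = torch.sigmoid(self.graph_logits)               # soft adjacency
        row = types.unsqueeze(-1)                               # (B, T, 1)
        col = types.unsqueeze(1)                                # (B, 1, T)
        pair_graph = graph[row, col]                            # (B, T, T)
        pair_decay = torch.exp(-F.softplus(self.decay_rates)[row, col] * gaps)
        # Additive attention bias: larger for causally linked, "nearby" events.
        bias = torch.log(pair_graph * pair_decay + 1e-8)
        bias = bias.repeat_interleave(self.attn.num_heads, dim=0)  # per head
        out, _ = self.attn(x, x, x, attn_mask=bias)
        return out
```

Under the alternating scheme mentioned in the abstract, one would typically update the prediction parameters (e.g., the attention and embedding weights) and the causal-graph parameters (here, graph_logits) in separate optimization steps.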
Keywords:
Uncertainty in AI: Causality, structural causal models and causal inference
Machine Learning: Causality