Lifted Message Passing for Hybrid Probabilistic Inference
Yuqiao Chen, Nicholas Ruozzi, Sriraam Natarajan
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5701-5707.
https://doi.org/10.24963/ijcai.2019/790
Lifted inference algorithms for first-order logic models, e.g., Markov logic networks (MLNs), have been of significant interest in recent years. Lifted inference methods exploit model symmetries in order to reduce the size of the model and, consequently, the computational cost of inference. In this work, we consider the problem of lifted inference in MLNs with continuous or both discrete and continuous groundings. Existing work on lifting with continuous groundings has mostly been limited to special classes of models, e.g., Gaussian models, for which variable elimination or message-passing updates can be computed exactly. Here, we develop approximate lifted inference schemes based on particle sampling. We demonstrate empirically that our approximate lifting schemes perform comparably to the existing state-of-the-art on Gaussian MLNs, while having the flexibility to be applied to models with arbitrary potential functions.
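To give a concrete sense of the particle-sampling idea the abstract refers to, the sketch below estimates a single belief-propagation message over a continuous variable by importance sampling: particles are drawn from a proposal, reweighted by the local potential, and used to evaluate the outgoing message on a grid. This is a minimal illustrative sketch, not the paper's lifted algorithm; the function name `particle_message` and its arguments are hypothetical.

```python
import numpy as np

def particle_message(pair_potential, local_potential, proposal_sampler,
                     proposal_density, y_grid, n_particles=20000, seed=0):
    """Monte Carlo estimate of the BP message m(y) = ∫ ψ(x, y) φ(x) dx.

    Hypothetical helper illustrating particle-based message passing:
    particles x_i ~ q are drawn from a proposal, and importance weights
    w_i = φ(x_i) / q(x_i) correct for the proposal/potential mismatch.
    """
    rng = np.random.default_rng(seed)
    xs = proposal_sampler(rng, n_particles)          # particles x_1..x_N
    w = local_potential(xs) / proposal_density(xs)   # importance weights
    # m(y) ≈ (1/N) Σ_i w_i ψ(x_i, y), evaluated at each grid point y
    return np.array([np.mean(w * pair_potential(xs, y)) for y in y_grid])
```

With Gaussian potentials φ(x) = exp(-x²/2) and ψ(x, y) = exp(-(x - y)²/2), the exact message is √π · exp(-y²/4), so the quality of the particle approximation can be checked directly against the closed form; for non-Gaussian potentials the same estimator applies unchanged, which is the flexibility the abstract highlights.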
Keywords:
Uncertainty in AI: Approximate Probabilistic Inference
Uncertainty in AI: Relational Inference
Uncertainty in AI: Uncertainty in AI