Lifted Probabilistic Inference by First-Order Knowledge Compilation
Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, Luc De Raedt
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
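To illustrate why compilation makes inference polynomial in circuit size, the following is a minimal sketch (not the paper's first-order algorithm) of weighted model counting on a propositional d-DNNF circuit: a single bottom-up pass takes a product at each decomposable AND node and a sum at each deterministic OR node. The node representation and the tiny example circuit are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Node:
    kind: str                   # 'lit', 'and', or 'or'
    lit: int = 0                # signed variable index for 'lit' nodes
    children: List["Node"] = field(default_factory=list)


def wmc(node: Node, weight: Dict[int, float]) -> float:
    """One linear-time bottom-up pass over the circuit:
    literal -> its weight, decomposable AND -> product of children,
    deterministic OR -> sum of children (branches are mutually exclusive)."""
    if node.kind == "lit":
        return weight[node.lit]
    vals = [wmc(c, weight) for c in node.children]
    if node.kind == "and":
        out = 1.0
        for v in vals:
            out *= v
        return out
    return sum(vals)  # 'or'


# Example circuit for A OR (not-A AND B), equivalent to A OR B.
# The OR is deterministic (branches disagree on A); the AND is
# decomposable (not-A and B share no variables).
a, not_a, b = Node("lit", lit=1), Node("lit", lit=-1), Node("lit", lit=2)
circuit = Node("or", children=[a, Node("and", children=[not_a, b])])

# Literal weights encoding P(A)=0.3 and P(B)=0.6 independently.
w = {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4}
print(wmc(circuit, w))  # -> 0.72, i.e. P(A or B) = 1 - 0.7*0.4
```

The same sum/product evaluation scheme underlies the first-order circuits in the paper, where additional lifted operators let groups of indistinguishable objects be handled in one step rather than one object at a time.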