Counting and Sampling Models in First-Order Logic
Ondřej Kuželka
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Early Career. Pages 7020-7025.
https://doi.org/10.24963/ijcai.2023/801
First-order model counting (FOMC) is the task of counting the models of a first-order logic sentence over a given set of domain elements. Its weighted variant, WFOMC, generalizes FOMC by assigning weights to the models and has many applications in statistical relational learning. More than ten years of research by various authors has led to the identification of non-trivial classes of WFOMC problems that can be solved in time polynomial in the number of domain elements. In this paper, we describe recent work on WFOMC and on the related problem of weighted first-order model sampling (WFOMS). We also discuss possible applications of WFOMC and WFOMS within statistical relational learning and beyond, e.g., the automated solving of problems from enumerative combinatorics and elementary probability theory. Finally, we mention research problems that still need to be tackled in order to make these methods practical on a broader scale.
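To make the FOMC task concrete, here is a minimal brute-force sketch in Python (it enumerates all interpretations, so it runs in time exponential in the domain size, unlike the polynomial-time algorithms surveyed in the paper; the function and predicate names are illustrative, not from the paper):

```python
from itertools import product

def count_models(domain, sentence):
    """Brute-force FOMC for a sentence over a single binary predicate R.

    Enumerates every interpretation of R over `domain` (every set of
    ordered pairs) and counts those satisfying `sentence`, which is a
    callable taking the extension of R and the domain.
    """
    pairs = [(a, b) for a in domain for b in domain]
    count = 0
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, bit in zip(pairs, bits) if bit}
        if sentence(R, domain):
            count += 1
    return count

# Example sentence: forall x forall y: R(x, y) -> R(y, x)  (R is symmetric)
def symmetric(R, domain):
    return all((y, x) in R for (x, y) in R)

# Over a 3-element domain there are 2^(3*4/2) = 64 symmetric relations.
print(count_models([0, 1, 2], symmetric))  # -> 64
```

A WFOMC algorithm computes the same quantity (or its weighted generalization) without enumerating interpretations, which is what makes polynomial-time data complexity in the domain size possible for the tractable classes.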
Keywords: EC: Uncertainty in AI