Neuro-Symbolic Verification of Deep Neural Networks
Xuan Xie, Kristian Kersting, Daniel Neider
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3622-3628.
https://doi.org/10.24963/ijcai.2022/503
Formal verification has emerged as a powerful approach to ensure the safety and reliability of deep neural networks. However, current verification tools are limited to only a handful of properties that can be expressed as first-order constraints over the inputs and output of a network. While adversarial robustness and fairness fall under this category, many real-world properties (e.g., "an autonomous vehicle has to stop in front of a stop sign") remain outside the scope of existing verification technology. To mitigate this severe practical restriction, we introduce a novel framework for verifying neural networks, named neuro-symbolic verification. The key idea is to use neural networks as part of the otherwise logical specification, enabling the verification of a wide variety of complex, real-world properties, including the one above. A defining feature of our framework is that it can be implemented on top of existing verification infrastructure for neural networks, making it easily accessible to researchers and practitioners.
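The key idea above — placing a neural network inside the otherwise logical specification — can be illustrated with a minimal sketch. Everything here is hypothetical and greatly simplified: both "networks" are tiny hand-wired ReLU functions, and the random-sampling falsifier is a toy stand-in for the constraint-solving verifiers the framework actually builds on.

```python
import random

# Hypothetical toy example (not the paper's implementation): the
# specification itself contains a neural network ("monitor"), mirroring
# properties like "stop in front of a stop sign".

def relu(v):
    return max(0.0, v)

def monitor(x):
    # Hypothetical specification network: score for "input looks like a stop sign".
    return relu(2.0 * x[0] + 2.0 * x[1] - 1.0)

def classifier(x):
    # Hypothetical network under verification: "brake" score.
    return relu(3.0 * x[0] + 3.0 * x[1] - 1.0)

def spec(x):
    # Neuro-symbolic property: monitor(x) > 0.5  implies  classifier(x) > 0.5.
    return monitor(x) <= 0.5 or classifier(x) > 0.5

def falsify(trials=10_000, seed=0):
    # Random-sampling falsifier, a toy stand-in for the SMT/MILP-based
    # verification back ends; returns a counterexample or None.
    rng = random.Random(seed)
    for _ in range(trials):
        x = (rng.random(), rng.random())
        if not spec(x):
            return x
    return None

print(falsify())  # None: the implication holds for these toy networks
```

Because the monitor is itself just a network, the whole property stays expressible as first-order constraints over network computations, which is why the authors can implement the framework on top of existing neural-network verification infrastructure.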
Keywords:
Machine Learning: Neuro-Symbolic Methods
Constraint Satisfaction and Optimization: Satisfiability
Multidisciplinary Topics and Applications: Validation and Verification