The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks
Luca Marzari, Davide Corsi, Ferdinando Cicalese, Alessandro Farinelli
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 217-224.
https://doi.org/10.24963/ijcai.2023/25
Deep Neural Networks (DNNs) are increasingly adopted in critical tasks that require a high level of safety, e.g., autonomous driving.
While state-of-the-art verifiers can be employed to check whether a DNN is unsafe w.r.t. some given property (i.e., whether there is at least one unsafe input configuration), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvements.
In this paper, we introduce the #DNN-Verification problem, which consists of counting the number of input configurations of a DNN that violate a given safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound on the correct count while significantly reducing computational requirements.
We present experimental results on a set of safety-critical benchmarks that demonstrate the effectiveness of our approximate method and evaluate the tightness of the bound.
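To make the counting formulation concrete, the sketch below estimates a toy network's "violation rate" (the fraction of inputs in a box-shaped domain that violate a safety property) via naive Monte Carlo sampling with a Hoeffding-style confidence interval. This only illustrates the quantity that #DNN-Verification counts; it is not the exact or approximate counting procedure proposed in the paper, and the network weights, the property, and all parameters are hypothetical.

```python
# Toy illustration of a DNN violation-rate estimate, NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network: R^2 -> R^2.
W1 = rng.standard_normal((8, 2))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((2, 8))
b2 = rng.standard_normal(2)

def forward(x):
    """Forward pass of the toy ReLU network; x has shape (n, 2)."""
    h = np.maximum(x @ W1.T + b1, 0.0)
    return h @ W2.T + b2

def violates(y):
    """Hypothetical safety property: output 0 must not exceed output 1."""
    return y[:, 0] > y[:, 1]

# Sample inputs uniformly from the box [-1, 1]^2 and estimate the
# fraction of the domain that violates the property.
n_samples = 100_000
x = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
p_hat = violates(forward(x)).mean()

# Hoeffding bound: with probability >= 1 - delta, the true violation
# rate lies within +/- eps of the empirical estimate.
delta = 1e-3
eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n_samples))
print(f"estimated violation rate: {p_hat:.4f} +/- {eps:.4f} "
      f"(confidence {1 - delta:.3f})")
```

Such naive sampling degrades when violations are rare; the paper's approximate method instead targets count estimates with provable probabilistic bounds at reduced computational cost.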
Keywords:
Agent-based and Multi-agent Systems: MAS: Formal verification, validation and synthesis
AI Ethics, Trust, Fairness: ETF: Safety and robustness