Reachability Analysis of Deep Neural Networks with Provable Guarantees

Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2651-2659. https://doi.org/10.24963/ijcai.2018/368

Verifying correctness for deep neural networks (DNNs) is challenging. We study a generic reachability problem for feed-forward DNNs which, for a given set of inputs to the network and a Lipschitz-continuous function over its outputs, computes the lower and upper bounds on the function values. Because the network and the function are Lipschitz continuous, all values between the lower and upper bounds are reachable. We show how the safety verification problem, the output range analysis problem and a robustness measure can be obtained by instantiating the reachability problem. We present a novel algorithm based on adaptive nested optimisation to solve the reachability problem. The technique has been implemented and evaluated on a range of DNNs, demonstrating its efficiency, scalability and ability to handle a broader class of networks than state-of-the-art verification approaches.
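The guarantee rests on Lipschitz continuity: a known Lipschitz constant lets one bound how much the output can vary between sampled inputs, which yields certified lower and upper bounds that can be refined to any desired precision. The sketch below illustrates this idea in one dimension with Shubert–Piyavskii-style bounding over an interval; it is a minimal illustration of Lipschitz-based bound refinement, not the paper's adaptive nested optimisation algorithm, and the function names and toy objective are assumptions made for the example.

```python
# Minimal 1-D sketch of Lipschitz-based bounding (illustrative only; not the
# paper's algorithm). `lipschitz_minimise` and the toy objective are invented
# names for this example.

import math


def lipschitz_minimise(f, a, b, K, eps=1e-3, max_iter=10_000):
    """Certified minimisation of a K-Lipschitz function f on [a, b].

    Returns (lower_bound, best_value) with best_value - lower_bound <= eps,
    so the true minimum provably lies in [lower_bound, best_value].
    """
    xs = [a, b]          # sampled inputs
    ys = [f(a), f(b)]    # corresponding outputs

    for _ in range(max_iter):
        best = min(ys)
        # On each interval [x_i, x_{i+1}], Lipschitz continuity certifies the
        # lower bound (y_i + y_{i+1} - K * (x_{i+1} - x_i)) / 2, attained where
        # the two bounding cones intersect.
        lb, idx = math.inf, 0
        for i in range(len(xs) - 1):
            cand = (ys[i] + ys[i + 1] - K * (xs[i + 1] - xs[i])) / 2.0
            if cand < lb:
                lb, idx = cand, i
        if best - lb <= eps:
            return lb, best
        # Sample the point achieving the weakest lower bound and refine.
        x_new = (xs[idx] + xs[idx + 1]) / 2.0 + (ys[idx] - ys[idx + 1]) / (2.0 * K)
        xs.insert(idx + 1, x_new)
        ys.insert(idx + 1, f(x_new))
    return lb, min(ys)


if __name__ == "__main__":
    # Toy stand-in for a network output: o(x) = sin(3x) + 0.5x, Lipschitz
    # constant at most 3.5 on the reals.
    o = lambda x: math.sin(3.0 * x) + 0.5 * x
    lo, hi = lipschitz_minimise(o, 0.0, 2.0, K=3.5, eps=1e-4)
    print(f"minimum of o over [0, 2] lies in [{lo:.5f}, {hi:.5f}]")
```

Applying the same procedure to -f gives an upper bound on the maximum, so together the two runs bracket the reachable interval of the function over the input set, which is the shape of guarantee the paper pursues for DNN outputs.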
Keywords:
Machine Learning: Neural Networks
Agent-based and Multi-agent Systems: Formal Verification, Validation and Synthesis
Computer Vision: Computer Vision
Machine Learning: Deep Learning