ExplainIt!: A Tool for Computing Robust Attributions of DNNs
Sumit Jha, Alvaro Velasquez, Rickard Ewetz, Laura Pullum, Susmit Jha
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Demo Track, pages 5916–5919.
https://doi.org/10.24963/ijcai.2022/853
Responsible integration of deep neural networks into the design of trustworthy systems requires the ability to explain the decisions made by these models. Explainability and transparency are critical for system analysis, certification, and human-machine teaming. We have recently demonstrated that neural stochastic differential equations (SDEs) present an explanation-friendly DNN architecture. In this paper, we present ExplainIt!, an online tool for explaining AI decisions that uses neural SDEs to create visually sharper and more robust attributions than those of traditional residual neural networks. Our tool shows that injecting noise into every layer of a residual network often leads to less noisy and less fragile integrated-gradient attributions. The discrete neural stochastic differential equation model is trained on the ImageNet data set with a million images, and the demonstration produces robust attributions both on images from the ImageNet validation set and on a variety of images in the wild. Our online tool is hosted publicly for educational purposes.
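To make the two ideas in the abstract concrete, the sketch below illustrates (1) per-layer noise injection in a residual block, which makes the network a discrete (Euler–Maruyama-style) neural SDE, and (2) a standard integrated-gradients approximation that averages gradients along a straight-line path from a baseline to the input. This is a minimal illustration only: the toy architecture, the `NoisyResidualBlock` and `integrated_gradients` names, and the choice to inject noise only during training are our own assumptions, not the ExplainIt! implementation.

```python
# Minimal sketch (assumed, not the authors' code): a noise-injected
# residual block plus integrated-gradient attributions, in PyTorch.
import torch
import torch.nn as nn

class NoisyResidualBlock(nn.Module):
    """Residual step x + f(x) + sigma * noise, i.e. one discrete step
    of a neural SDE with drift f and constant diffusion sigma."""
    def __init__(self, dim, sigma=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))
        self.sigma = sigma

    def forward(self, x):
        # Inject Gaussian noise during training; at attribution time we
        # use the noise-free drift network (one simple choice).
        noise = torch.randn_like(x) if self.training else torch.zeros_like(x)
        return x + self.f(x) + self.sigma * noise

def integrated_gradients(model, x, baseline, target, steps=50):
    """Approximate IG: (x - baseline) times the mean gradient of the
    target logit along the straight path from baseline to x."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # shape: (steps, *x.shape)
    path.requires_grad_(True)
    out = model(path)[:, target].sum()          # sum of target logits
    grads = torch.autograd.grad(out, path)[0]   # gradient at each step
    return (x - baseline) * grads.mean(dim=0)   # Riemann-sum IG estimate

# Toy usage: one attribution value per input feature.
dim, num_classes = 8, 3
model = nn.Sequential(NoisyResidualBlock(dim),
                      nn.Linear(dim, num_classes)).eval()
x, baseline = torch.randn(dim), torch.zeros(dim)
attr = integrated_gradients(model, x, baseline, target=0)
print(attr.shape)  # torch.Size([8])
```

In this reading, training with per-layer noise regularizes the drift network, and the smoother loss surface is what yields the less noisy, less fragile integrated-gradient maps the abstract describes.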
Keywords:
AI Ethics, Trust, Fairness: Explainability and Interpretability