Deep Neural Networks via Complex Network Theory: A Perspective

Emanuele La Malfa, Gabriele La Malfa, Giuseppe Nicosia, Vito Latora

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4361-4369. https://doi.org/10.24963/ijcai.2024/482

Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures. However, classic works adapt CNT metrics that permit only a topological analysis, as they do not account for the effect of the input data. In addition, CNT metrics have been applied to a limited range of architectures, mainly Fully Connected neural networks. In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning. For both the novel and the existing metrics, we provide a mathematical formalisation for Fully Connected, AutoEncoder, Convolutional and Recurrent neural networks, varying their activation functions and number of hidden layers. We show that these metrics differentiate DNNs based on the architecture, the number of hidden layers, and the activation function. Our contribution provides a method rooted in physics for interpreting DNNs that offers insights beyond the traditional input-output relationship and the purely topological analysis of CNT.
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Machine Learning: ML: Applications
Machine Learning: ML: Other
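
To make the kind of analysis the abstract describes concrete, the sketch below computes a classic topological CNT metric (node strength, the sum of absolute link weights incident on a neuron) for a toy Fully Connected layer, alongside an input-aware variant that rescales each link by the mean absolute activation of its source neuron over samples drawn from a stand-in training distribution. This is a minimal illustration only: the layer sizes, random data, and the specific rescaling are assumptions for exposition, not the paper's formalisation of its metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully connected layer: 8 inputs -> 4 outputs.
# Each weight W[i, j] is a link from input neuron j to output neuron i.
W = rng.normal(size=(4, 8))

# Classic (topological) node strength: sum of |weight| over a neuron's links.
in_strength = np.abs(W).sum(axis=1)   # incoming strength, per output neuron
out_strength = np.abs(W).sum(axis=0)  # outgoing strength, per input neuron

# Input-aware variant (illustrative assumption): scale each link by the
# mean |activation| of its source neuron over samples that stand in for
# the training distribution, so the metric reflects data, not topology alone.
X = rng.normal(size=(100, 8))         # stand-in for training samples
mean_act = np.abs(X).mean(axis=0)     # per-input mean absolute activation
in_strength_data = (np.abs(W) * mean_act).sum(axis=1)

print("topological in-strength: ", in_strength)
print("topological out-strength:", out_strength)
print("input-aware in-strength: ", in_strength_data)
```

The contrast between the two quantities is the point: the topological strength is fixed once the weights are, whereas the input-aware variant changes with the data distribution, which is what lets such metrics connect to interpretability rather than to network structure alone.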