Are Logistic Models Really Interpretable?

Danial Dervovic, Freddy Lecue, Nicolas Marchesotti, Daniele Magazzeni

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 367-375. https://doi.org/10.24963/ijcai.2024/41

The demand for open and trustworthy AI models points towards widespread publishing of model weights. Consumers of these weights must be able to act appropriately on the information they provide. However, one of the simplest AI classification models, Logistic Regression (LR), has an unwieldy interpretation of its model weights, and the difficulty grows when LR is extended to generalised additive models. In this work, we show via a user study that skilled participants are unable to reliably reproduce the action of small LR models given the trained parameters. As an antidote, we define Linearised Additive Models (LAMs), an optimal piecewise linear approximation that augments any trained additive model equipped with a sigmoid link function, requiring no retraining. We argue that LAMs are more interpretable than logistic models: survey participants solve model reasoning tasks with LAMs much more accurately than with LR given the same information. Furthermore, we show that LAMs incur no large performance penalty in terms of ROC-AUC and calibration relative to their logistic counterparts on a broad suite of public financial modelling datasets.
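To make the idea concrete, the sketch below illustrates the general recipe the abstract describes: take a trained logistic model, keep its additive score untouched, and replace the sigmoid link with a piecewise linear surrogate applied post hoc, with no retraining. This is not the authors' code; the dataset, the sklearn LogisticRegression model, and the uniform knot grid with interpolation are illustrative assumptions, whereas the paper constructs an optimal piecewise linear approximation.

```python
# Minimal sketch (assumptions noted below), not the paper's LAM construction:
# post-hoc piecewise-linear approximation of the sigmoid link on a trained
# logistic regression. The uniform knot grid and interpolation scheme are
# assumed simplifications; the paper derives an optimal piecewise-linear fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data and a small trained LR model (stand-in for published weights).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

# Raw additive score z = w·x + b, as a consumer of the weights would compute it.
z = X @ clf.coef_.ravel() + clf.intercept_

# Piecewise-linear surrogate for sigmoid(z): linear interpolation between knots
# on a fixed grid; outside the grid the output is held at the boundary knot
# values. The trained weights are untouched, so no retraining is needed.
knots = np.linspace(-6.0, 6.0, 9)                  # assumed knot grid
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
pwl_probs = np.interp(z, knots, sigmoid(knots))    # piecewise linear in z

# Compare the surrogate against the exact logistic probabilities.
print("max abs deviation from sigmoid:", np.abs(pwl_probs - sigmoid(z)).max())
```

Because the surrogate is linear on each segment, the contribution of each feature weight to the predicted probability is locally proportional, which is the property the abstract argues makes LAMs easier for people to reason about than the logistic link.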
Keywords:
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Machine Learning: ML: Classification