Towards the Quantitative Interpretability Analysis of Citizens Happiness Prediction

Lin Li, Xiaohua Wu, Miao Kong, Dong Zhou, Xiaohui Tao

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
AI for Good. Pages 5094-5100. https://doi.org/10.24963/ijcai.2022/707

Evaluating the high-effect factors of citizens' happiness benefits a wide range of economic and political policy-making in most countries. Benefiting from the efficiency of regression models, previous efforts by sociology scholars have analyzed the effects of happiness factors with high interpretability. However, restricted by their research concerns, those studies focus on particular subsets of factors modeled as linear functions. Recently, deep learning has shown promising prediction accuracy but faces challenges in interpretability. To this end, we introduce the Shapley value, which rests on a solid theoretical foundation for factor-contribution interpretability, to work with deep learning models while taking into account interactions among multiple factors. The proposed solution computes the Shapley value of a factor, i.e., its average contribution to the prediction across different coalitions, based on coalitional game theory. To evaluate the interpretability quality of our solution, experiments are conducted on a Chinese General Social Survey (CGSS) questionnaire dataset. Through systematic review, the experimental results of the Shapley value analysis are highly consistent with academic studies in social science, which implies that our solution for citizens' happiness prediction has two-fold implications, theoretical and practical.
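
For reference, the Shapley value invoked here is the standard coalitional-game definition: given a set N of n factors and a value function v(S) denoting the model's prediction when only the factors in coalition S ⊆ N are present, the contribution of factor i is its marginal contribution averaged over all coalitions:

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr)

How v(S) is evaluated for a trained deep model with absent factors (e.g., by marginalizing over or imputing the excluded inputs) is an implementation choice the abstract does not specify.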
Keywords:
Humans and AI: Computational Sustainability and Human Well-Being
AI Ethics, Trust, Fairness: Explainability and Interpretability
AI Ethics, Trust, Fairness: Societal Impact of AI
Machine Learning: Explainable/Interpretable Machine Learning