ADESSE: Advice Explanations in Complex Repeated Decision-Making Environments
Sören Schleibaum, Lu Feng, Sarit Kraus, Jörg P. Müller
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Human-Centred AI. Pages 7904-7912.
https://doi.org/10.24963/ijcai.2024/875
In the evolving landscape of human-centered AI, fostering a synergistic relationship between humans and AI agents in decision-making is a paramount challenge. This work considers a setting in which an intelligent agent, comprising a neural network-based prediction component and a deep reinforcement learning component, provides advice to a human decision-maker in complex repeated decision-making environments. Whether the human decision-maker follows the agent's advice depends on their beliefs about and trust in the agent, as well as on their understanding of the advice itself. To address this, we developed ADESSE, an approach that generates explanations about the adviser agent to improve human trust and decision-making. Computational experiments on a range of environments with varying model sizes demonstrate the applicability and scalability of ADESSE. Furthermore, an interactive game-based user study shows that participants presented with explanations generated by ADESSE were significantly more satisfied, achieved a higher reward in the game, and took less time to select an action. These findings underscore the critical role of tailored, human-centered explanations in AI-assisted decision-making.
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Humans and AI: HAI: Human-AI collaboration
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Machine Learning: ML: Reinforcement learning