Inside Out: Emotional Multiagent Multimodal Dialogue Systems
Andrey V. Savchenko, Lyudmila V. Savchenko
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Demo Track. Pages 8784-8788.
https://doi.org/10.24963/ijcai.2024/1032
In this paper, we introduce a novel technological framework for developing emotional dialogue systems. Inspired by the film "Inside Out", we propose using multiple emotional agents based on Large Language Models (LLMs) to prepare answers to a user query. Their answers are aggregated into a single response that takes into account the user's current emotional state, which is estimated by video-based facial expression recognition (FER). We introduce several publicly available lightweight neural networks that achieve near state-of-the-art results on the AffectNet dataset. Qualitative examples using either GPT-3.5 or Llama 2 and Mistral demonstrate that the proposed approach leads to more emotional responses from LLMs.
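To illustrate the architecture the abstract describes, the following is a minimal sketch (not the authors' code) of the multi-agent idea: one "agent" per basic emotion prepares a candidate answer, and the final response is chosen based on the user's detected facial expression. The helpers `call_llm` and `detect_user_emotion` are hypothetical stubs standing in for an LLM API and the video-based FER model, and selecting the reply matching the detected emotion is only one possible aggregation rule.

```python
# Minimal sketch of an emotional multi-agent dialogue loop.
# `call_llm` and `detect_user_emotion` are hypothetical placeholders,
# not APIs from the paper or any specific library.

EMOTIONS = ["joy", "sadness", "anger", "fear", "disgust"]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-3.5 / Llama 2 / Mistral call."""
    return f"[LLM reply to: {prompt!r}]"

def detect_user_emotion(video_frames) -> str:
    """Hypothetical stand-in for the lightweight video-based FER model."""
    return "sadness"  # e.g., the dominant expression over recent frames

def emotional_reply(query: str, video_frames) -> str:
    # Each emotional agent answers the query in its own persona.
    candidates = {
        emotion: call_llm(f"You feel only {emotion}. Answer the user: {query}")
        for emotion in EMOTIONS
    }
    # Aggregation step: here we simply pick the candidate that matches
    # the user's estimated emotional state; the paper aggregates the
    # agents' answers into a single emotion-aware response.
    user_state = detect_user_emotion(video_frames)
    return candidates[user_state]

if __name__ == "__main__":
    print(emotional_reply("I failed my exam today.", video_frames=[]))
```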
Keywords:
Natural Language Processing: NLP: Language generation
Computer Vision: CV: Biometrics, face, gesture and pose recognition
Agent-based and Multi-agent Systems: MAS: Applications
Humans and AI: HAI: Human-computer interaction