Exploring Personalized Neural Conversational Models


Satwik Kottur, Xiaoyu Wang, Vitor Carvalho

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 3728-3734. https://doi.org/10.24963/ijcai.2017/521

Modeling dialog systems is currently one of the most actively studied problems in Natural Language Processing. Recent advances in Deep Learning have sparked an interest in the use of neural networks for modeling language, particularly for personalized conversational agents that can retain contextual information during dialog exchanges. This work carefully explores and compares several of the recently proposed neural conversation models, and carries out a detailed evaluation of the multiple factors that can significantly affect predictive performance, such as pretraining, embedding training, data cleaning, diversity reranking, and evaluation setting. Based on the tradeoffs of different models, we propose a new generative dialog model conditioned on speakers as well as context history that outperforms all previous models on both retrieval and generative metrics. Our findings indicate that pretraining speaker embeddings on larger datasets, as well as bootstrapping word and speaker embeddings, can significantly improve performance (up to 3 points in perplexity), and that promoting diversity using Mutual Information based techniques has a very strong effect on ranking metrics.
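The abstract only names the modeling ideas; as a rough illustration, the sketch below shows one common way a decoder can be conditioned on both a speaker identity and the context history (by concatenating a learned speaker embedding to each word embedding), together with a Mutual Information style reranking score of the form log p(r|c) − λ log p(r) that penalizes generically likely responses. All class and function names, dimensions, and the λ weight are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SpeakerConditionedDecoder(nn.Module):
    """Illustrative sketch (not the paper's code): an LSTM decoder whose
    per-step input is the word embedding concatenated with a learned
    speaker embedding, so generation depends on both the dialog context
    (via the initial hidden state) and the speaker identity."""

    def __init__(self, vocab_size, num_speakers, emb_dim=256, hid_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Speaker embeddings could be initialized from vectors pretrained
        # on a larger corpus, as the abstract suggests helps performance.
        self.speaker_emb = nn.Embedding(num_speakers, emb_dim)
        self.lstm = nn.LSTM(2 * emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, speaker_ids, context_state=None):
        # tokens: (batch, seq_len); speaker_ids: (batch,)
        # context_state: encoder summary of the dialog history, as an
        # (h0, c0) pair, or None to start from zeros.
        w = self.word_emb(tokens)                     # (B, T, E)
        s = self.speaker_emb(speaker_ids)             # (B, E)
        s = s.unsqueeze(1).expand(-1, w.size(1), -1)  # (B, T, E)
        h, state = self.lstm(torch.cat([w, s], dim=-1), context_state)
        return self.out(h), state                     # logits: (B, T, V)


def mmi_rerank(candidates, log_p_resp_given_ctx, log_p_resp, lam=0.5):
    """Mutual-information-style reranking: score each candidate response
    r by log p(r|context) - lam * log p(r), demoting bland responses that
    are likely under any context. Inputs are dicts mapping each candidate
    to its two log-probabilities; lam is a hypothetical tuning weight."""
    return max(candidates,
               key=lambda r: log_p_resp_given_ctx[r] - lam * log_p_resp[r])
```

In this kind of setup, the concatenation point and the λ weight are design choices: conditioning at every decoder step keeps the speaker signal present throughout generation, while λ trades off context relevance against diversity in the reranked n-best list.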
Keywords:
Multidisciplinary Topics and Applications: Personalization and User Modeling
Natural Language Processing: Dialogue
Natural Language Processing: Natural Language Generation