Model-Free Preference Elicitation
Carlos Martin, Craig Boutilier, Ofer Meshi, Tuomas Sandholm
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3493-3503.
https://doi.org/10.24963/ijcai.2024/387
In recommender systems, preference elicitation (PE) is an effective way to learn about a user's preferences to improve recommendation quality.
Expected value of information (EVOI), a Bayesian technique that computes expected gain in user utility, has proven to be effective in selecting useful PE queries.
Most EVOI methods use probabilistic models of user preferences and query responses to compute posterior utilities.
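For reference, the standard myopic form of EVOI for a query q can be written as follows (the notation is ours, since the abstract fixes none):

\[
\mathrm{EVOI}(q) \;=\; \mathbb{E}_{r \sim P(r \mid q)}\!\left[\max_{a \in A} \mathbb{E}\big[u(a) \mid r, q\big]\right] \;-\; \max_{a \in A} \mathbb{E}\big[u(a)\big]
\]

Here \(u(a)\) is the uncertain utility of recommending item \(a \in A\), the outer expectation is over the predicted response \(r\) to the query, and the inner expectations are taken with respect to the current (respectively, posterior) belief over user utilities. EVOI is thus the expected gain in the utility of the best recommendation from asking the query.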
By contrast, we develop model-free variants of EVOI that rely on function approximation to obviate the need for specific modeling assumptions.
Specifically, we learn user response and utility models from existing data (often available in real-world recommender systems), which are used to estimate EVOI rather than relying on explicit probabilistic inference.
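A minimal sketch of this model-free estimation idea, assuming learned models with illustrative interfaces: `sample_response(history, query)` draws a predicted user response and `expected_utility(history, item)` predicts utility from the interaction history. Neither name is taken from the paper.

```python
def estimate_evoi(query, history, sample_response, expected_utility,
                  items, n_samples=64):
    """Monte Carlo estimate of the EVOI of one candidate query.

    `sample_response` and `expected_utility` stand in for the learned
    response and utility models; both interfaces are hypothetical.
    """
    # Utility of the best recommendation given only the current history.
    baseline = max(expected_utility(history, item) for item in items)

    # Average, over sampled responses, of the best achievable utility
    # after appending the (query, response) pair to the history.
    posterior = 0.0
    for _ in range(n_samples):
        r = sample_response(history, query)
        new_history = history + [(query, r)]
        posterior += max(expected_utility(new_history, item) for item in items)
    posterior /= n_samples

    return posterior - baseline
```

The estimator replaces explicit posterior inference with forward samples from the learned response model, so no parametric belief update is ever computed.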
We augment our approach with online planning, specifically Monte Carlo tree search (MCTS), to further enhance our elicitation policies.
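The sketch below shows one way such a planner could look: a UCT-style search over multi-step query sequences, reusing the hypothetical learned models from above. This is illustrative only; the paper's planner may differ in its details.

```python
import math

def mcts_select_query(history, queries, sample_response, expected_utility,
                      items, depth=2, n_iters=200, c_uct=1.4):
    """UCT-style lookahead over elicitation queries (illustrative sketch).

    States are interaction histories, actions are candidate queries, and a
    leaf is valued by the best expected utility under the learned model.
    """
    stats = {}  # (state_key, query) -> [visit_count, total_value]

    def leaf_value(h):
        return max(expected_utility(h, item) for item in items)

    def simulate(h, d):
        if d == 0:
            return leaf_value(h)
        k = tuple(h)
        total = sum(stats.get((k, q), [0, 0.0])[0] for q in queries) + 1

        def ucb(q):
            n, v = stats.get((k, q), [0, 0.0])
            if n == 0:
                return float("inf")  # try each query once before exploiting
            return v / n + c_uct * math.sqrt(math.log(total) / n)

        q = max(queries, key=ucb)
        r = sample_response(h, q)          # stochastic (chance) transition
        value = simulate(h + [(q, r)], d - 1)
        entry = stats.setdefault((k, q), [0, 0.0])
        entry[0] += 1
        entry[1] += value
        return value

    for _ in range(n_iters):
        simulate(list(history), depth)

    # Recommend the most-visited query at the root.
    k0 = tuple(history)
    return max(queries, key=lambda q: stats.get((k0, q), [0, 0.0])[0])
```

Relative to the one-step EVOI estimate, the search trades extra model evaluations for non-myopic query selection over a short horizon.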
We show that our approach offers significant improvement in recommendation quality over standard baselines on several PE tasks.
Keywords:
Knowledge Representation and Reasoning: KRR: Preference modelling and preference-based reasoning
Data Mining: DM: Recommender systems
Humans and AI: HAI: Personalization and user modeling