Perturbed-History Exploration in Stochastic Multi-Armed Bandits
Branislav Kveton, Csaba Szepesvári, Mohammad Ghavamzadeh, Craig Boutilier
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2786-2793.
https://doi.org/10.24963/ijcai.2019/386
We propose an online algorithm for cumulative regret minimization in a stochastic multi-armed bandit. In round t, the algorithm adds O(t) i.i.d. pseudo-rewards to its history and then pulls the arm with the highest average reward in its perturbed history; therefore, we call it perturbed-history exploration (PHE). The pseudo-rewards are carefully designed so that, with high probability, they offset any underestimation of the mean rewards of the arms. We derive near-optimal gap-dependent and gap-free bounds on the n-round regret of PHE. The key step in our analysis is a novel argument that shows that randomized Bernoulli rewards lead to optimism. Finally, we evaluate PHE empirically and show that it is competitive with state-of-the-art baselines.
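To make the round structure concrete, below is a minimal Python sketch of PHE for a Bernoulli bandit, following the abstract: each round, i.i.d. Bernoulli pseudo-rewards are mixed into every arm's history before picking the arm with the highest perturbed average. The Bernoulli(1/2) pseudo-rewards match the randomized Bernoulli rewards mentioned above; the perturbation-scale parameter a (pseudo-rewards per observed reward), the function name, and the example instance are our own illustrative choices, not verbatim from the paper.

import numpy as np

def phe(means, n_rounds, a=2.0, rng=None):
    """Sketch of perturbed-history exploration on a Bernoulli bandit.

    means    : true mean reward of each arm (for simulation only)
    n_rounds : horizon n
    a        : assumed perturbation scale (pseudo-rewards per real pull)
    """
    rng = rng or np.random.default_rng(0)
    k = len(means)
    pulls = np.zeros(k, dtype=int)   # T_i: number of times arm i was pulled
    rewards = np.zeros(k)            # V_i: cumulative reward of arm i
    regret = 0.0
    best = max(means)

    for t in range(n_rounds):
        if t < k:
            i = t  # pull each arm once to initialize its history
        else:
            # Perturb each arm's history with ceil(a * T_i) fresh i.i.d.
            # Bernoulli(1/2) pseudo-rewards, then take the perturbed average.
            n_fake = np.ceil(a * pulls).astype(int)
            fake = rng.binomial(n_fake, 0.5)
            i = int(np.argmax((rewards + fake) / (pulls + n_fake)))
        r = float(rng.random() < means[i])  # observe a Bernoulli reward
        pulls[i] += 1
        rewards[i] += r
        regret += best - means[i]
    return regret

# Example: cumulative regret on a 5-arm instance over 10,000 rounds.
print(phe(means=[0.5, 0.45, 0.4, 0.35, 0.3], n_rounds=10000))

Note that the pseudo-rewards are redrawn every round, so the perturbed history is the source of randomized exploration: the number of pseudo-rewards grows linearly with the arm's pull count, which is what makes the total perturbation O(t) in round t.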
Keywords:
Machine Learning: Online Learning
Uncertainty in AI: Sequential Decision Making