Minimizing Adaptive Regret with One Gradient per Iteration

Guanghui Wang, Dakuan Zhao, Lijun Zhang

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2762-2768. https://doi.org/10.24963/ijcai.2018/383

To cope with non-stationary environments, recent advances in online optimization have introduced the notion of adaptive regret, which measures the performance of an online learner against different comparators over different time intervals. Previous studies have proposed various algorithms that yield low adaptive regret under different scenarios. However, all existing algorithms need to query the gradient of the loss function at least O(log t) times in every iteration t, which hinders their application in broad domains, especially when evaluating gradients is expensive. To address this limitation, we propose a series of computationally efficient algorithms for minimizing the adaptive regret of general convex, strongly convex, and exponentially concave functions, respectively. The key idea is to replace each loss function with a carefully designed surrogate loss that bounds the original loss function from below. We show that the proposed algorithms query the gradient only once per iteration and attain the same theoretical guarantees as previous optimal algorithms. Empirical results demonstrate the efficiency and effectiveness of our methods.
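To illustrate the one-gradient-per-iteration idea for the convex case, the sketch below is a simplified, hypothetical meta-algorithm (not the paper's exact method, which relies on geometric covering intervals): at each round it queries the gradient g_t of the true loss only at the final decision x_t, builds the linear surrogate ell_t(x) = <g_t, x - x_t> (a lower bound of f_t(x) - f_t(x_t) for convex f_t), and feeds this same surrogate to both the expert-weighting step and every expert's gradient-descent update. The names grad_oracle, project_ball, and the one-expert-per-round schedule are illustrative assumptions.

import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto a ball of the given radius (assumed domain).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_od_one_gradient(grad_oracle, dim, horizon, eta=0.1, lr=0.1):
    # Simplified sketch: one new expert is started every round and never
    # pruned; the real algorithms use covering intervals for efficiency.
    experts = []                 # each expert keeps its own iterate
    weights = np.array([])       # meta-level weights over experts
    decisions = []
    for t in range(horizon):
        experts.append(np.zeros(dim))
        weights = np.append(weights, 1.0)
        W = np.stack(experts)                    # (num_experts, dim)
        p = weights / weights.sum()
        x_t = p @ W                              # meta decision
        decisions.append(x_t)

        g_t = grad_oracle(t, x_t)                # the ONLY gradient query
        # surrogate loss of each expert: <g_t, x_i - x_t>
        surrogate = W @ g_t - x_t @ g_t
        # exponential-weights update on the surrogate losses
        weights = weights * np.exp(-eta * surrogate)
        # every expert runs gradient descent on the same surrogate,
        # whose gradient is g_t regardless of the expert's iterate
        experts = [project_ball(x - lr * g_t) for x in experts]
    return decisions

# Example usage with a drifting quadratic loss f_t(x) = ||x - c_t||^2 / 2:
# grad_oracle = lambda t, x: x - np.array([np.sin(0.01 * t), np.cos(0.01 * t)])
# decisions = adaptive_od_one_gradient(grad_oracle, dim=2, horizon=1000)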
Keywords:
Machine Learning: Machine Learning
Machine Learning: Online Learning