Riemannian Stochastic Recursive Momentum Method for non-Convex Optimization
Andi Han, Junbin Gao
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2505-2511.
https://doi.org/10.24963/ijcai.2021/345
We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves near-optimal complexity for finding an epsilon-approximate solution with a single sample. The new algorithm requires only one-sample gradient evaluations per iteration and does not require restarting with a large-batch gradient, a device commonly used to obtain faster rates. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
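To illustrate the idea of a one-sample recursive-momentum update on a manifold, here is a minimal sketch on the unit sphere, applied to a stochastic leading-eigenvector problem. This is an illustrative toy, not the paper's implementation: the constant momentum weight `beta`, the step size `eta`, the noise model in `make_sample`, and the projection-based vector transport are all assumptions made for the example. The key structural feature matches the abstract: each iteration draws one sample, evaluates its gradient at both the current and previous point, and corrects the momentum estimator recursively, with no large-batch restarts.

```python
import numpy as np

def riemannian_grad(x, egrad):
    # Project a Euclidean gradient onto the tangent space of the unit sphere at x.
    return egrad - np.dot(x, egrad) * x

def transport(x_new, v):
    # Vector transport to the tangent space at x_new (projection-based, sphere).
    return v - np.dot(x_new, v) * x_new

def retract(x, v):
    # Retraction on the sphere: step in the tangent direction, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def make_sample(A, rng):
    # One stochastic sample of the Euclidean gradient of f(x) = -x^T A x / 2,
    # here modeled (hypothetically) as A perturbed by small symmetric noise.
    noise = 0.01 * rng.standard_normal(A.shape)
    A_noisy = A + (noise + noise.T) / 2
    return lambda x: -(A_noisy @ x)

def riemannian_srm(A, x0, steps=500, eta=0.1, beta=0.9, seed=0):
    # Sketch of a Riemannian stochastic recursive momentum loop:
    # d_t = grad f(x_t; xi_t) + (1 - beta) * Transport(d_{t-1} - grad f(x_{t-1}; xi_t)).
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    g = make_sample(A, rng)
    d = riemannian_grad(x, g(x))          # initialize with one sample, no big batch
    for _ in range(steps):
        x_new = retract(x, -eta * d)
        g = make_sample(A, rng)           # one fresh sample per iteration
        # Same sample evaluated at the OLD point gives the recursive correction.
        correction = d - riemannian_grad(x, g(x))
        d = riemannian_grad(x_new, g(x_new)) + (1 - beta) * transport(x_new, correction)
        x = x_new
    return x

# Usage: recover the leading eigenvector of a diagonal matrix.
A = np.diag([3.0, 2.0, 1.0])
x = riemannian_srm(A, np.ones(3))
```

Because minimizing f(x) = -x^T A x / 2 on the sphere is the Rayleigh-quotient problem, the iterate should align with the top eigenvector (here, the first coordinate axis) despite the per-sample noise.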
Keywords:
Machine Learning: Online Learning