A Restart-based Rank-1 Evolution Strategy for Reinforcement Learning
Zefeng Chen, Yuren Zhou, Xiao-yu He, Siyu Jiang
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2130-2136.
https://doi.org/10.24963/ijcai.2019/295
Evolution strategies have demonstrated a strong ability to train deep neural networks and to accomplish reinforcement learning tasks well. However, existing evolution strategies designed specifically for deep reinforcement learning involve only plain variants, which cannot adapt the mutation strength or exploit other advanced techniques. How to apply advanced and effective evolution strategies to reinforcement learning in an efficient way remains an open problem. To this end, this paper proposes a restart-based rank-1 evolution strategy for reinforcement learning. When training the neural network, it adapts the mutation strength and updates the principal search direction in a way similar to the momentum method, an improved variant of stochastic gradient ascent. In addition, two mechanisms, namely the adaptation of the number of elitists and a restart procedure, are integrated to deal with the issue of local optima. Experimental results on classic control problems and Atari games show that the proposed algorithm is superior to or competitive with state-of-the-art reinforcement learning algorithms, demonstrating its effectiveness.
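As a rough illustration of the rank-1 sampling and momentum-like update described above, the following is a minimal NumPy sketch, not the authors' implementation: the function name rank1_es_step, the learning rates c and c_cov, and all hyperparameter values are illustrative assumptions, and the mutation-strength adaptation, elitist-count adaptation, and restart procedure are omitted for brevity.

import numpy as np

def rank1_es_step(mean, sigma, p, fitness_fn, rng,
                  pop_size=20, n_elite=5, c=0.1, c_cov=0.3):
    # One generation of a rank-1 ES on a maximization problem.
    # Offspring are drawn from N(mean, sigma^2 * C) with the implicit
    # covariance C = (1 - c_cov) * I + c_cov * p p^T, so only the
    # principal search direction p is stored, not a full matrix.
    dim = mean.size
    z = rng.standard_normal((pop_size, dim))   # isotropic component
    r = rng.standard_normal((pop_size, 1))     # rank-1 component along p
    steps = np.sqrt(1.0 - c_cov) * z + np.sqrt(c_cov) * r * p
    offspring = mean + sigma * steps
    fitness = np.array([fitness_fn(x) for x in offspring])
    elite = offspring[np.argsort(fitness)[::-1][:n_elite]]  # best first
    new_mean = elite.mean(axis=0)              # recombine the elitists
    # Momentum-like update of the principal search direction: accumulate
    # the normalized mean step, analogous to an evolution path.
    delta = (new_mean - mean) / sigma
    new_p = (1.0 - c) * p + np.sqrt(c * (2.0 - c)) * delta
    return new_mean, new_p, float(fitness.max())

# Toy usage: maximize f(x) = -||x||^2 (optimum at the origin).
rng = np.random.default_rng(0)
mean, p, sigma = rng.standard_normal(10), np.zeros(10), 0.3
for _ in range(200):
    mean, p, best = rank1_es_step(mean, sigma, p,
                                  lambda x: -np.sum(x ** 2), rng)

Maintaining a single principal direction p, rather than a full covariance matrix as in standard CMA-ES, keeps the per-generation cost linear in the number of parameters, which is what makes such a strategy practical for neural network policies with many weights.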
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Deep Learning
Machine Learning Applications: Applications of Reinforcement Learning
Heuristic Search and Game Playing: Heuristic Search and Machine Learning