Hard-Thresholding Meets Evolution Strategies in Reinforcement Learning

Chengqian Gao, William de Vazelhes, Hualin Zhang, Bin Gu, Zhiqiang Xu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3989-3997. https://doi.org/10.24963/ijcai.2024/441

Evolution Strategies (ES) have emerged as a competitive alternative for model-free reinforcement learning, showing strong performance on tasks such as MuJoCo and Atari. They are particularly effective in settings with imperfect reward functions, which makes them valuable for real-world applications where dense reward signals may be unavailable. However, ES implicitly assume that all input features are task-relevant, an assumption that breaks down in the presence of the irrelevant features common in real-world problems. This work examines this limitation, focusing on the Natural Evolution Strategies (NES) variant. We propose NESHT, a novel approach that integrates Hard-Thresholding (HT) with NES to enforce sparsity, ensuring that only pertinent features are used. Backed by rigorous analysis and empirical tests, NESHT mitigates the pitfalls of irrelevant features and performs well on complex decision-making problems such as noisy MuJoCo and Atari tasks. Our code is available at https://github.com/cangcn/NES-HT.
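
To make the combination concrete, the sketch below shows one NES ascent step followed by a hard-thresholding projection that keeps only the k largest-magnitude parameters. This is a minimal NumPy illustration of the general NES-plus-HT idea, not the authors' implementation; all function names and hyperparameter values here are assumptions for exposition (see the paper and repository for the actual algorithm and analysis).

```python
import numpy as np

def hard_threshold(theta, k):
    """Keep the k largest-magnitude entries of theta; zero out the rest."""
    out = np.zeros_like(theta)
    idx = np.argsort(np.abs(theta))[-k:]
    out[idx] = theta[idx]
    return out

def nes_ht_step(theta, fitness, k, pop_size=50, sigma=0.1, lr=0.01, rng=None):
    """One illustrative NES update with a hard-thresholding projection.

    `fitness` maps a parameter vector to an episodic return; hyperparameter
    names and defaults are hypothetical, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    # Sample Gaussian perturbations of the current parameters.
    eps = rng.standard_normal((pop_size, theta.size))
    returns = np.array([fitness(theta + sigma * e) for e in eps])
    # Normalize returns so the update is invariant to reward scale.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # NES search-gradient estimate from the perturbations and returns.
    grad = (eps.T @ returns) / (pop_size * sigma)
    # Gradient ascent step, then project onto the k-sparse set.
    return hard_threshold(theta + lr * grad, k)
```

The thresholding step acts as a projection onto the set of k-sparse vectors, which is how irrelevant input features can be pruned away between ES updates.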
Keywords:
Machine Learning: ML: Evolutionary learning
Machine Learning: ML: Feature extraction, selection and dimensionality reduction
Machine Learning: ML: Learning sparse models
Machine Learning: ML: Optimization