M2RL: A Multi-player Multi-agent Reinforcement Learning Framework for Complex Games
Tongtong Yu, Chenghua He, Qiyue Yin
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Demo Track. Pages 8847-8850.
https://doi.org/10.24963/ijcai.2024/1046
Distributed deep reinforcement learning (DDRL) has gained increasing attention due to the emerging need to address complex games such as Go and StarCraft. However, it remains an open problem to use DDRL to effectively and stably train bots in which asynchronous, heterogeneous agents cooperate and compete on behalf of multiple players across multiple machines (each with multiple CPUs and GPUs). We propose and open-source M2RL, a Multi-player and Multi-agent Reinforcement Learning framework, to serve as an easy-to-use warehouse for training bots for complex games. Experiments on training a two-player multi-agent Wargame AI and a sixteen-player multi-agent AI for the community game Neural MMO demonstrate the effectiveness of the proposed framework: the trained bots won a silver award and beat high-level AI bots designed by professional players.
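The abstract's core setting is asynchronous actor-learner training where heterogeneous agents belonging to several players generate experience in parallel while a learner updates per-agent policies. The following is a minimal, self-contained sketch of that general pattern, not the M2RL API; all names (run_actor, run_learner, NUM_PLAYERS, AGENTS_PER_PLAYER, NUM_ACTORS) and the toy "update" rule are illustrative assumptions.

```python
# Minimal sketch of the asynchronous actor-learner pattern described in the
# abstract (hypothetical names; not code from the M2RL framework).
import multiprocessing as mp
import random
import time

NUM_PLAYERS = 2          # e.g. two sides of a Wargame scenario
AGENTS_PER_PLAYER = 3    # heterogeneous units controlled by each player
NUM_ACTORS = 4           # rollout workers, typically spread over CPUs/machines
STEPS_PER_ACTOR = 20     # rollout steps each actor contributes in this toy run


def run_actor(actor_id, traj_queue, steps=STEPS_PER_ACTOR):
    """Collect (placeholder) experience for every agent of every player."""
    for _ in range(steps):
        for player in range(NUM_PLAYERS):
            for agent in range(AGENTS_PER_PLAYER):
                # A real actor would query the current policy and the game
                # environment; here a random reward stands in for a transition.
                traj_queue.put({
                    "actor": actor_id,
                    "player": player,
                    "agent": agent,
                    "reward": random.random(),
                })
        time.sleep(0.01)  # actors run asynchronously at their own pace


def run_learner(traj_queue, total):
    """Consume experience and keep one running value estimate per agent."""
    values = {(p, a): 0.0 for p in range(NUM_PLAYERS)
              for a in range(AGENTS_PER_PLAYER)}
    for _ in range(total):
        item = traj_queue.get()
        key = (item["player"], item["agent"])
        # Stand-in for a gradient update: exponential moving average of reward.
        values[key] = 0.9 * values[key] + 0.1 * item["reward"]
    print("learned values per (player, agent):", values)


if __name__ == "__main__":
    queue = mp.Queue()
    actors = [mp.Process(target=run_actor, args=(i, queue))
              for i in range(NUM_ACTORS)]
    for proc in actors:
        proc.start()
    expected = NUM_ACTORS * STEPS_PER_ACTOR * NUM_PLAYERS * AGENTS_PER_PLAYER
    run_learner(queue, expected)
    for proc in actors:
        proc.join()
```

In a real deployment the actor processes would live on separate machines and the queue would be replaced by a networked experience buffer, but the decoupling of asynchronous data collection from per-agent learning is the same.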
Keywords:
Agent-based and Multi-agent Systems: MAS: Engineering methods, platforms, languages and tools
Agent-based and Multi-agent Systems: MAS: Applications
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
Uncertainty in AI: UAI: Sequential decision making