Exploring the Vulnerability of Deep Reinforcement Learning-based Emergency Control for Low Carbon Power Systems
Xu Wan, Lanting Zeng, Mingyang Sun
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3954-3961.
https://doi.org/10.24963/ijcai.2022/549
Decarbonization of global power systems significantly increases operational uncertainty and modeling complexity, driving the need to exploit cutting-edge Deep Reinforcement Learning (DRL) technologies to realize adaptive, real-time emergency control, which is the last resort for system stability and resiliency. The vulnerability of a DRL-based emergency control scheme may lead to severe real-world security issues if it is not fully explored before practical deployment. To this end, this is the first work that comprehensively investigates adversarial attacks and defense mechanisms for DRL-based power system emergency control. In particular, recovery-targeted (RT) adversarial attacks are designed using gradient-based approaches, aiming to dramatically degrade the effectiveness of the conducted emergency control actions and thereby prevent the system from being restored to a stable state. Furthermore, the corresponding robust defense (RD) mechanisms are proposed to actively modify the observations based on the distances between sequential states. Experiments are conducted on the standard IEEE reliability test system, and the results show that security risks indeed exist in state-of-the-art DRL-based power system emergency control models. The effectiveness, stealthiness, instantaneity, and transferability of the proposed attacks and defense mechanisms are demonstrated under both white-box and black-box settings.
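To make the two ideas in the abstract concrete, the sketch below illustrates (i) a gradient-based, recovery-targeted perturbation in the style of a targeted FGSM step, which nudges a policy toward an attacker-chosen "anti-recovery" action, and (ii) a defense check that flags observations whose distance from the previous state is implausibly large. This is a minimal illustration under assumed conventions, not the paper's implementation: the toy network, `target_action`, `epsilon`, and the distance `threshold` are all hypothetical.

```python
# Hypothetical sketch of a recovery-targeted (RT) gradient attack and a
# sequential-state-distance check; not the authors' actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Toy action-value network standing in for a DRL emergency-control agent."""

    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def rt_attack(policy: nn.Module, obs: torch.Tensor,
              target_action: int, epsilon: float = 0.05) -> torch.Tensor:
    """One targeted FGSM-style step: perturb the observation so the policy
    prefers an attacker-chosen action that hinders system recovery."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    # Loss w.r.t. the attacker's target action; stepping AGAINST the
    # gradient makes that action more likely under the policy.
    loss = F.cross_entropy(logits, torch.tensor([target_action]))
    loss.backward()
    adv_obs = obs - epsilon * obs.grad.sign()
    return adv_obs.detach()


def sequential_distance_flag(prev_obs: torch.Tensor, obs: torch.Tensor,
                             threshold: float = 0.5) -> bool:
    """Flag an observation whose jump from the previous state exceeds a
    threshold; a flagged state could then be replaced or smoothed before
    being fed to the controller (one plausible reading of the RD idea)."""
    return torch.norm(obs - prev_obs).item() > threshold


if __name__ == "__main__":
    policy = PolicyNet()
    prev_obs = torch.zeros(1, 8)
    obs = torch.randn(1, 8) * 0.1
    adv = rt_attack(policy, obs, target_action=0)
    print("clean action:", policy(obs).argmax(dim=1).item())
    print("adv action:  ", policy(adv).argmax(dim=1).item())
    print("flagged:", sequential_distance_flag(prev_obs, adv))
```

The sketch only shows the shape of the two mechanisms: a small, sign-of-gradient perturbation that is hard to spot in raw measurements, paired with a cheap consistency check exploiting the fact that physical power-system states evolve smoothly between consecutive time steps.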
Keywords:
Multidisciplinary Topics and Applications: Smart Cities
Machine Learning: Reinforcement Learning
Multidisciplinary Topics and Applications: Security and Privacy