Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features (Extended Abstract)
Taha Belkhouja, Janardhan Rao Doppa
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Journal Track. Pages 6845-6850.
https://doi.org/10.24963/ijcai.2023/767
Time-series data arises in many real-world applications (e.g., mobile health), and deep neural networks (DNNs) have shown great success in solving them. Despite this success, little is known about their robustness to adversarial attacks. In this paper, we propose a novel adversarial framework referred to as Time-Series Attacks via STATistical Features (TSA-STAT). To address the unique challenges of the time-series domain, TSA-STAT employs constraints on statistical features of the time-series data to construct adversarial examples. Optimized polynomial transformations are used to create attacks that are more effective (in terms of successfully fooling DNNs) than those based on additive perturbations. We also provide certified bounds on the norm of the statistical features for constructing adversarial examples. Our experiments on diverse real-world benchmark datasets show the effectiveness of TSA-STAT in fooling DNNs for the time-series domain and in improving their robustness.
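To make the high-level idea concrete, the sketch below illustrates the kind of attack the abstract describes: optimizing the coefficients of a polynomial transformation of the input time series while constraining simple statistical features (here, per-channel mean and standard deviation) to stay close to those of the clean input. This is a minimal illustration under assumptions, not the authors' released implementation; names such as `model`, `poly_degree`, and `stat_bound` are hypothetical, and the paper's full method uses a richer set of statistical features together with certified bounds.

```python
# Illustrative sketch only (assumed interfaces, not the authors' code).
import torch
import torch.nn.functional as F

def tsa_stat_sketch(model, x, y, poly_degree=2, stat_bound=0.1,
                    steps=100, lr=0.01):
    """Craft an adversarial time series x_adv = sum_k a_k * x**k by
    optimizing the polynomial coefficients a_k to fool `model`, while
    penalizing deviations of mean/std features beyond `stat_bound`.

    x: tensor of shape (channels, timesteps); y: true class label (int).
    """
    # Start near the identity transformation (a_1 = 1, other a_k = 0),
    # so the initial x_adv coincides with the clean input x.
    coeffs = torch.zeros(poly_degree + 1)
    coeffs[1] = 1.0
    coeffs.requires_grad_(True)
    opt = torch.optim.Adam([coeffs], lr=lr)

    mean_ref, std_ref = x.mean(dim=-1), x.std(dim=-1)

    for _ in range(steps):
        # Apply the polynomial transformation channel- and time-wise.
        powers = torch.stack([x ** k for k in range(poly_degree + 1)])
        x_adv = (coeffs.view(-1, 1, 1) * powers).sum(dim=0)

        logits = model(x_adv.unsqueeze(0))
        # Untargeted attack: maximize the classification loss ...
        cls_loss = -F.cross_entropy(logits, torch.tensor([y]))
        # ... while keeping the statistical features within the bound.
        stat_penalty = (
            F.relu((x_adv.mean(dim=-1) - mean_ref).abs() - stat_bound).sum()
            + F.relu((x_adv.std(dim=-1) - std_ref).abs() - stat_bound).sum()
        )
        loss = cls_loss + 10.0 * stat_penalty

        opt.zero_grad()
        loss.backward()
        opt.step()

    return x_adv.detach()
```

In this toy formulation, the statistical-feature constraint is enforced softly through a hinge penalty; the paper instead derives certified bounds on the norm of the statistical features, which this sketch does not attempt to reproduce.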
Keywords:
Machine Learning: ML: Time series and data streams
Machine Learning: ML: Robustness