Abstract

Proceedings Abstracts of the Twenty-Fifth International Joint Conference on Artificial Intelligence

Adversarial AI / 4094
Yevgeniy Vorobeychik

In recent years, AI research has played an increasing role in models and algorithms for security problems. Game-theoretic models of security, and Stackelberg security games in particular, have received special attention, in part because these models and associated tools have seen actual deployment in homeland security and sustainability applications. Stackelberg security games have two prototypical features: 1) a collection of potential assets that require protection, and 2) a sequential structure, where a defender first allocates protection resources, and the attacker then responds with an optimal attack. I see the latter feature as the major conceptual breakthrough, allowing very broad application of the idea beyond physical security settings. In particular, I describe three research problems which on the surface look nothing like prototypical security games: adversarial machine learning, privacy-preserving data sharing, and vaccine design. I describe how the second conceptual aspect of security games offers a natural modeling paradigm for these. This, in turn, has two important benefits: first, it offers a new perspective on these problems, and second, it facilitates fundamental algorithmic contributions for these domains.
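The leader-follower structure described above can be summarized as a bilevel optimization; the following is a minimal sketch, where the defender and attacker utility functions $U_d$ and $U_a$ and strategy sets $X$ and $A$ are generic notation assumed for illustration rather than taken from the paper:

\[
x^{*} \in \arg\max_{x \in X} \; U_d\big(x, a^{*}(x)\big)
\quad \text{subject to} \quad
a^{*}(x) \in \arg\max_{a \in A} \; U_a(x, a),
\]

where the defender (leader) commits to a resource allocation $x$, and the attacker (follower) observes $x$ and best-responds with $a^{*}(x)$. It is this commitment-then-response structure, rather than the physical-asset interpretation, that carries over to adversarial learning, data sharing, and vaccine design.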
