Most models of Stackelberg security games assume that the attacker knows only the defender's mixed strategy and cannot observe, even partially, the instantiated pure strategy. Such partial observation of the deployed pure strategy -- an issue we refer to as "information leakage" -- is a significant concern in practical applications. While previous research on patrolling games has addressed the attacker's real-time surveillance, we provide a significant advance. More specifically, after formulating an LP to compute the defender's optimal strategy in the presence of leakage, we start with a hardness result showing that a subproblem (more precisely, the defender oracle) is NP-hard even for the simplest security game models. We then approach the problem from three directions: efficient algorithms for restricted cases, approximation algorithms, and improved sampling algorithms. Our experiments confirm the necessity of handling information leakage and the advantage of our algorithms.
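The abstract does not spell out the paper's leakage-aware LP, so as background, here is a minimal sketch of the standard maximin coverage LP for a basic security game without leakage, solved with scipy. The toy payoffs (u_cov, u_unc), the single-resource setting, and all variable names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (NOT the paper's leakage-aware LP): the standard maximin
# coverage LP for a basic security game. The defender picks marginal
# coverage probabilities c_t on n targets with m resources; the attacker
# hits the target with the lowest defender utility. Payoffs are made up.
import numpy as np
from scipy.optimize import linprog

u_cov = np.array([ 0.0,  0.0,  0.0])   # defender utility if target t is covered
u_unc = np.array([-5.0, -3.0, -8.0])   # defender utility if target t is uncovered
n, m = len(u_cov), 1                   # n targets, m patrol resources

# Decision variables x = [v, c_1, ..., c_n]:
#   maximize v
#   s.t.  v <= u_unc[t] + c_t * (u_cov[t] - u_unc[t])   for every target t
#         sum_t c_t <= m,   0 <= c_t <= 1
c_obj = np.concatenate(([-1.0], np.zeros(n)))   # linprog minimizes, so use -v
A_ub = np.zeros((n + 1, n + 1))
A_ub[:n, 0] = 1.0                               # v on the left-hand side
A_ub[:n, 1:] = -np.diag(u_cov - u_unc)          # -c_t * (u_cov[t] - u_unc[t])
A_ub[n, 1:] = 1.0                               # resource constraint row
b_ub = np.concatenate((u_unc, [m]))
bounds = [(None, None)] + [(0.0, 1.0)] * n      # v is free, c_t in [0, 1]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("defender value:", -res.fun)
print("marginal coverage:", res.x[1:])
```

Against an attacker who also sees part of the deployed pure strategy (the paper's leakage setting), marginals like these are no longer sufficient, which is exactly why the defender oracle becomes hard and the paper's restricted, approximate, and sampling-based algorithms are needed.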
from cs.AI updates on arXiv.org http://ift.tt/1FimMch
via IFTTT