Robust Reward Placement under Uncertainty

Petros Petsinis, Kaichen Zhang, Andreas Pavlogiannis, Jingbo Zhou, Panagiotis Karras

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6770-6778. https://doi.org/10.24963/ijcai.2024/748

We consider the problem of placing generators of rewards to be collected by randomly moving agents in a network. In many settings, the precise mobility pattern may be one of several possible patterns, determined by parameters outside our control, such as weather conditions. The placement should be robust to this uncertainty, so as to secure a competitive total reward across all possible networks. To study such scenarios, we introduce the Robust Reward Placement problem (RRP). Agents move randomly according to a Markovian Mobility Model over a predetermined set of locations, whose connectivity is chosen adversarially from a known set Π of candidates. We aim to select a set of reward states, within a budget, that maximizes the minimum, over all candidates in Π, of the ratio of the collected total reward to the optimal collectable reward under the same candidate. We prove that RRP is NP-hard and inapproximable, and develop Ψ-Saturate, a pseudo-polynomial-time algorithm that achieves an ϵ-additive approximation by exceeding the budget constraint by a factor that scales as O(ln |Π|/ϵ). In addition, we present several heuristics, most prominently one inspired by a dynamic programming algorithm for the max–min 0–1 KNAPSACK problem. We corroborate our theoretical analysis with an experimental evaluation on synthetic and real data.
Keywords:
Planning and Scheduling: PS: Planning under uncertainty
Data Mining: DM: Networks
Planning and Scheduling: PS: Markov decision processes
Search: S: Combinatorial search and optimisation