Strengthening Schedules Through Uncertainty Analysis
Laura M. Hiatt, Terry L. Zimmerman, Stephen F. Smith, Reid Simmons

Abstract
In this paper, we describe an approach to scheduling under uncertainty that achieves scalability through a coupling of deterministic and probabilistic reasoning. Our specific focus is a class of oversubscribed scheduling problems where the goal is to maximize the reward earned by a team of agents in a distributed execution environment. There is uncertainty in both the durations and outcomes of executed activities. To ensure scalability, our solution approach takes as its starting point an initial deterministic schedule for the agents, computed using expected-duration reasoning. This initial agent schedule is probabilistically analyzed to find likely points of failure, and then selectively strengthened based on this analysis. For each scheduled activity, the probability of failure and the impact that a failure would have on the schedule's overall reward are calculated and used to focus schedule-strengthening actions. Such actions generally entail fundamental trade-offs; for example, modifications that increase the certainty that a high-reward activity succeeds may decrease the schedule slack available to accommodate uncertainty during execution. We describe a principled approach to handling these trade-offs based on the schedule's "expected reward," using it as a metric to ensure that all schedule modifications are ultimately beneficial. Finally, we present experimental results obtained using a multi-agent simulation environment, which confirm that executing schedules strengthened in this way results in significantly higher rewards than are achieved by executing the corresponding initial schedules.
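To make the acceptance criterion described above concrete, the sketch below (not taken from the paper; a minimal Python illustration under assumed interfaces) shows one way a strengthening pass could prioritize likely, high-impact failure points and use a Monte Carlo estimate of expected reward to keep only beneficial modifications. The names `Action`, `apply`, `failure_probability`, `reward_impact`, and `simulate` are hypothetical placeholders, not identifiers from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """Hypothetical candidate strengthening action (e.g., adding a back-up
    activity or reallocating slack) for one scheduled activity."""
    failure_probability: float          # estimated chance the activity fails
    reward_impact: float                # reward lost if that failure occurs
    apply: Callable[[object], object]   # returns a modified copy of a schedule

def expected_reward(schedule, simulate: Callable[[object], float],
                    trials: int = 1000) -> float:
    """Monte Carlo estimate of expected reward: `simulate` runs one
    stochastic execution trial and returns the total reward earned."""
    return sum(simulate(schedule) for _ in range(trials)) / trials

def strengthen(schedule, candidates: List[Action],
               simulate: Callable[[object], float]):
    """Consider the likeliest, highest-impact failure points first, and keep
    a modification only if it raises the estimated expected reward."""
    best = expected_reward(schedule, simulate)
    for action in sorted(candidates,
                         key=lambda a: a.failure_probability * a.reward_impact,
                         reverse=True):
        candidate_schedule = action.apply(schedule)
        value = expected_reward(candidate_schedule, simulate)
        if value > best:                # accept only beneficial changes
            schedule, best = candidate_schedule, value
    return schedule
```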