Robust Losses for Decision-Focused Learning

Noah Schutte, Krzysztof Postek, Neil Yorke-Smith

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4868-4875. https://doi.org/10.24963/ijcai.2024/538

Optimization models used to make discrete decisions often contain uncertain parameters that are context-dependent and estimated through prediction. To account for the quality of the decision made based on the prediction, decision-focused learning (end-to-end predict-then-optimize) aims to train the predictive model to minimize regret, i.e., the loss incurred by making a suboptimal decision. Although the gradient of this loss with respect to the predictive model parameters is zero almost everywhere for optimization problems with a linear objective, effective gradient-based learning approaches have been proposed to minimize the expected loss, using the empirical loss as a surrogate. However, empirical regret can be an ineffective surrogate because empirical optimal decisions can vary substantially from expected optimal decisions. To understand the impact of this deficiency, we evaluate the effect of aleatoric and epistemic uncertainty on the accuracy of empirical regret as a surrogate. Next, we propose three novel loss functions that approximate expected regret more robustly. Experimental results show that training two state-of-the-art decision-focused learning approaches using robust regret losses improves test-sample empirical regret in general while keeping computational time equivalent relative to the number of training epochs.
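To make the notion of regret concrete, the following is a minimal sketch (not the paper's code) of empirical regret for a toy predict-then-optimize problem with a linear objective: choosing at most k of n items to maximize total value. The solver, problem, and function names are illustrative assumptions; the regret definition itself (true-parameter value of the hindsight-optimal decision minus the true-parameter value of the decision induced by the prediction) follows the abstract.

```python
# Hedged illustration of empirical regret, assuming a toy selection
# problem: pick at most k of n items to maximize total value.
from itertools import combinations

def solve(values, k):
    """Return the index set maximizing total value, choosing at most k items.
    Brute force, for illustration only."""
    best, best_val = (), 0.0
    for r in range(k + 1):
        for subset in combinations(range(len(values)), r):
            v = sum(values[i] for i in subset)
            if v > best_val:
                best, best_val = subset, v
    return best

def regret(true_values, pred_values, k):
    """Empirical regret: objective value lost (under the true parameters)
    by optimizing over the predicted parameters instead."""
    x_pred = solve(pred_values, k)   # decision induced by the prediction
    x_true = solve(true_values, k)   # hindsight-optimal decision
    true_obj = lambda x: sum(true_values[i] for i in x)
    return true_obj(x_true) - true_obj(x_pred)

# A mildly wrong prediction can still induce a very suboptimal decision:
# regret([3, 1, 2], [1, 3, 2], k=1) == 2, while a perfect prediction
# yields zero regret. The zero-gradient issue the abstract mentions is
# visible here: small changes to pred_values usually leave x_pred, and
# hence the regret, unchanged.
```

This also illustrates why the gradient of regret w.r.t. the predictions is zero almost everywhere: the optimal decision is piecewise constant in the predicted parameters.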
Keywords:
Machine Learning: ML: Robustness
Constraint Satisfaction and Optimization: CSO: Constraint optimization problems
Machine Learning: ML: Regression
Machine Learning: ML: Optimization