SPS Past Prize Winners
Celebrating excellence in Stochastic Programming: a record of past prizes recognizing outstanding contributions and achievements in the field.
2025: DUPAČOVÁ-PRÉKOPA STUDENT PAPER PRIZE AWARDS
First Prize: Maria Carolina Bazotte (Polytechnique Montréal)
Solving Two-Stage Programs with Endogenous Uncertainty via Random Variable Transformation (with Margarida Carvalho and Thibaut Vidal)
Citation: This paper introduces a new methodology based on random variable transformation for solving two-stage stochastic programs with endogenous uncertainty, a class of problems that, while common in real-world applications, has received limited attention in the literature compared to models with exogenous uncertainty. In particular, there is a notable lack of general-purpose, efficient methods for handling the nonconvex and nonlinear nature typical of such problems. The authors propose modeling endogenous uncertainty by defining transformation functions that combine first-stage decisions with decision-independent random variables. This transforms the original stochastic program with endogenous uncertainty into an equivalent one with exogenous uncertainty, allowing the use of well-established solution techniques. They further demonstrate that, in the case of discrete distribution selection (a widely studied special case), a simple transformation suffices. For several classical forms of endogenous uncertainty, the approach yields mixed-integer linear or convex programs, making the problems significantly more tractable. To validate their method, the authors apply it to critical infrastructure planning problems, including network design and facility protection, demonstrating the practical impact and broad applicability of their framework.
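To fix ideas, here is a minimal sketch of the transformation in generic two-stage notation; the symbols (the decision-dependent distribution P_x, the auxiliary variable η, and the transformation T) are illustrative and not taken from the paper:

\[
\min_{x \in X} \; c^\top x + \mathbb{E}_{\xi \sim P_x}\!\left[ Q(x, \xi) \right]
\quad\longrightarrow\quad
\min_{x \in X} \; c^\top x + \mathbb{E}_{\eta \sim P}\!\left[ Q\!\left(x, T(x, \eta)\right) \right],
\]

where the decision-dependent distribution P_x of the random parameter ξ is replaced by a fixed distribution P of a decision-independent random variable η, and the transformation function T(x, η) reproduces the endogenous randomness as a function of the first-stage decision. The reformulated problem on the right has exogenous uncertainty only, so standard solution techniques apply.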
Second Prize: Mengmeng Li (EPFL)
Towards Optimal Offline Reinforcement Learning (with Daniel Kuhn and Tobias Sutter)
Citation: This paper proposes a new approach to offline reinforcement learning for tabular Markov decision processes (MDPs) that only requires access to a single trajectory of correlated data, which can be generated by an unknown policy. Through a novel statistical analysis of MDPs, the authors construct a distributionally robust off-policy evaluation oracle that is statistically efficient, i.e., it optimally balances in-sample performance with out-of-sample disappointment. This oracle in turn yields an efficient estimator for the corresponding reinforcement learning problem that can be computed by solving a robust MDP with a non-rectangular uncertainty set. Although this is a hard problem, the authors derive an actor-critic algorithm that approximately solves it, and numerical experiments demonstrate performance comparable to state-of-the-art methods. Overall, this work combines deep statistical analysis with distributionally robust optimization to obtain a new method for data-driven dynamic stochastic optimization with strong theoretical guarantees.
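For orientation, a distributionally robust off-policy evaluation of this kind can be sketched as follows; the notation here is generic and not the paper's exact formulation:

\[
\widehat{V}^{\pi} \;=\; \min_{Q \in \mathcal{B}_{\varepsilon}(\widehat{P})} \; \mathbb{E}^{\pi}_{Q}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right],
\]

where \(\widehat{P}\) is the transition kernel estimated from the single observed trajectory, \(\mathcal{B}_{\varepsilon}(\widehat{P})\) is an ambiguity set of plausible transition kernels around it, \(\pi\) is the policy being evaluated, and \(\gamma\) is the discount factor. When the ambiguity set couples transition probabilities across different states, i.e., is non-rectangular, the inner minimization no longer decomposes state by state, which is what makes the resulting robust MDP hard to solve exactly and motivates the approximate actor-critic scheme.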
Runners-up (in alphabetical order):
- Haoming Shen, University of Arkansas: Convex Chance-Constrained Programs with Wasserstein Ambiguity (with Ruiwei Jiang)
- Tianyu Wang, Columbia University: Optimizer’s Information Criterion: Dissecting and Correcting Bias in Data-Driven Optimization (with Garud Iyengar and Henry Lam)
- Xian Yu, Ohio State University: Multistage Distributionally Robust Mixed-Integer Programming with Decision-Dependent Moment-Based Ambiguity Sets (with Siqian Shen)
Prize Committee: Jim Luedtke (University of Wisconsin-Madison, chair), Miloš Kopa (Charles University, Prague), Kamel Shehadeh (University of Southern California).