**When:** August 7, 7-8 AM Pacific Time

**Speaker:** David Morton

**Title:** COVID-19: How to relax social distancing if you must

**Abstract:** Following the April 16, 2020 release of the "Opening Up America Again" guidelines for relaxing COVID-19 social distancing policies, local leaders in the US have been concerned about future pandemic waves and lack robust strategies for tracking and suppressing transmission. We present a strategy that restricts and relaxes social distancing orders when hospital admissions cross optimized thresholds. We formulate a chance-constrained model on top of a simulation model of epidemiological dynamics with age-group, risk-group, and temporal fidelity. From the model, we derive triggers that ensure, with high probability, that hospital surges will not exceed local capacity while minimizing the expected number of days in lockdown. We discuss how the model is being employed in Austin, Texas. This is joint work with Daniel Duque, Achyut Kasi, Cindy Sanchez, Bismark Singh, Ozge Surer, Haoxiang Yang, Zhanwei Du, Remy Pasco, Kelly Pierce, Paul Rathouz, and Lauren Ancel Meyers.

**Bio:** Dave Morton is the Sachs Professor and Department Chair in Industrial Engineering & Management Sciences at Northwestern University. Prior to joining the faculty at Northwestern, he was the Engineering Foundation Professor at the University of Texas at Austin and a National Research Council Postdoctoral Fellow at the Naval Postgraduate School. He has been a Fulbright Research Scholar at Charles University in Prague and is an INFORMS Fellow. He has served as Chair of the INFORMS Optimization Society and chaired the Committee on Stochastic Programming.

**When:** July 24, 7-8 AM Pacific Time

**Speaker:** Daniel Kuhn

**Title:** From Moderate Deviations Theory to Distributionally Robust Optimization: Learning from Correlated Data

**Abstract:** We aim to learn a performance function of the invariant state distribution of an unknown linear dynamical system based on a single trajectory of correlated state observations. The function to be learned may represent, for example, an identification objective or a value function. To this end, we develop a distributionally robust estimation scheme that evaluates the worst- and best-case values of the given performance function across all stationary state distributions that are sufficiently likely to have generated the observed state trajectory. By leveraging new insights from moderate deviations theory, we prove that our estimation scheme offers consistent upper and lower confidence bounds whose exponential convergence rate can be actively tuned. In the special case of a quadratic cost, we show that the proposed confidence bounds can be computed efficiently by solving Riccati equations. We exemplify the proposed methods in the context of reinforcement learning, hypothesis testing and system identification. This is joint work with Tobias Sutter, Wouter Jongeneel and Soroosh Shafieezadeh-Abadeh.

**Bio:** Daniel Kuhn holds the Chair of Risk Analytics and Optimization at EPFL. Before joining EPFL, he was a faculty member at Imperial College London (2007–2013) and a postdoctoral researcher at Stanford University (2005–2006). He received a Ph.D. in Economics from the University of St. Gallen in 2004 and an M.Sc. in Theoretical Physics from ETH Zürich in 1999. His research interests revolve around stochastic programming and robust optimization.

**When:** July 10, 7-8 AM Pacific Time

**Speaker:** Alejandro Jofré

**Title:** Bilevel optimization applied to strategic pricing in electricity markets and extension to markets with massive entry of renewable energies and distributed generation

**Abstract:** In this work, we present a bilevel programming formulation in which a generator strategically chooses a bid to maximize profit given the choices of the other generators. More precisely, given a specific generator, we define a set of scenarios for the remaining agents and maximize the expected profit of the chosen company over all scenarios; this approach was introduced by Baíllo et al. [1] and studied in the linear case by Fampa et al. [3]. It is assumed that, after the clearing of each market mechanism, information about the submitted aggregate offer and demand curves is made publicly available, so agents can build scenarios for their rivals' bids. The model captures the ability of the agent represented by the leader to affect the market price. The follower is the electric system operator, which runs a minimum-cost program that respects physical network constraints. Here we consider no transmission constraints and assume convex piecewise linear cost and bid functions. We formulate a penalty algorithm together with an efficient algorithm for solving the follower problem, and prove convergence to a local maximum. We also compare this formulation with the Nash equilibrium formulation, in which the competition process is simulated until a set of price-equilibrium bids is obtained. Jofré et al. [2] proved that a mixed-strategy Nash equilibrium exists even in a more general network that includes transmission constraints. In this work, we show that if the probabilities associated with the scenario approach are close to those of the mixed-strategy equilibrium, then the expected payoffs under both formulations are close, and that under small resistance values and the Schweppe et al. [4] approximation for thermal losses, the payoffs in the simplified model are close to those of the more general network.
These ideas are extended and applied to the case where we have massive entry of renewable energies and distributed generation.

**References**

[1] A. Baíllo, M. Ventosa, M. Rivier, and A. Ramos. Optimal offering strategies for generation companies operating in electricity spot markets. IEEE Transactions on Power Systems, 19(2):745–753, May 2004.

[2] J. Escobar and A. Jofré. Equilibrium analysis of electricity auctions (submitted). 2019.

[3] M. Fampa, L. A. Barroso, D. Candal, and L. Simonetti. Bilevel optimization applied to strategic pricing in competitive electricity markets. Computational Optimization and Applications, 39(2):121–142, Mar 2008.

[4] Fred C. Schweppe, Michael C. Caramanis, Richard D. Tabors, and Roger E. Bohn. Spot Pricing of Electricity. Springer US, 1988.

**Bio:** Prof. Alejandro Jofré is a Mathematical Engineer from the Universidad de Chile and a PhD in Applied Mathematics from the University of Pau, France. He is full Professor at the Universidad de Chile. He has held the positions of director and deputy director of the Center for Mathematical Modeling, among others. His areas of research are optimization, stochastic optimization, game theory, economic equilibrium and electricity markets. He has led research projects and developed optimization tools for price analysis, planning and market behavior for energy systems, telecommunications markets and sustainable exploitation of natural resources such as copper, energy and forestry.

**When:** June 26, 7-8 AM Pacific Time

**Speaker:** Alexander Shapiro

**Title:** Computational and Theoretical Aspects of Solving Multistage Stochastic Programs

**Abstract:** In this talk we discuss computational approaches to solving convex stochastic programming problems. We start with a discussion of the sample complexity of solving static problems and argue that it is essentially different from the sample complexity of solving multistage programs. In some applications, the considered multistage stochastic programs have periodical behavior. We demonstrate that in such cases it is possible to drastically reduce the number of stages by introducing a periodical analog of the so-called Bellman equations used in Markov Decision Processes and Stochastic Optimal Control. Furthermore, we describe primal and dual variants of a cutting-plane algorithm applied to the constructed periodical Bellman equations, and present numerical experiments for the Brazilian interconnected power system problem.

**Bio:** Alexander Shapiro is Russell Chandler III Chair and Professor in the School of Industrial and Systems Engineering at the Georgia Institute of Technology, Atlanta, USA. He has published more than 140 research articles in peer-reviewed journals and is a coauthor of several books. He has served on the editorial boards of several professional journals: he was an area editor (Optimization) of *Operations Research* and the Editor-in-Chief of *Mathematical Programming, Series A* (2012-2017), the flagship journal of the Mathematical Optimization Society. He has given numerous invited keynote and plenary talks, including an invited sectional talk (Control Theory & Optimization section) at the International Congress of Mathematicians 2010 in Hyderabad, India. In 2013 he received the Khachiyan Prize awarded by the INFORMS Optimization Society, and in 2018 the Dantzig Prize awarded by the Mathematical Optimization Society and the Society for Industrial and Applied Mathematics. In 2020 he was elected to the National Academy of Engineering.

**When:** June 12, 7-8 AM Pacific Time

**Speaker:** Cynthia Rudin

**Title:** Interpretability vs. Explainability in Machine Learning

**Abstract:** With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable, and can be misleading. If we use interpretable machine learning models, they come with their own explanations, which are faithful to what the model actually computes.

In this talk, I will discuss some of the reasons that black boxes with explanations can go wrong, whereas inherently interpretable models would not have these same problems. As an example of an explanation of a black box model going wrong, I will discuss ProPublica's analysis of the COMPAS model used in the criminal justice system: ProPublica's explanation of the black box model COMPAS was flawed because it relied on wrong assumptions to identify the race variable as being important. Luckily, in recidivism prediction applications, black box models are not needed, because inherently interpretable models exist that are just as accurate as COMPAS. I will also give examples of interpretable models in healthcare. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high. Finally, I will discuss two long-term projects my lab is working on: optimal sparse decision trees and interpretable neural networks.

**Bio:** Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and the Institute of Mathematical Statistics. She is a Thomas Langford Lecturer at Duke University for 2019-2020.

**When:** May 29, 7-8 AM Pacific Time

**Speaker:** Steffen Rebennack

**Title:** Cut-Sharing in Stochastic Dual Dynamic Programming

**Abstract:** Stochastic dual dynamic programming (SDDP) is a widely used method for solving large-scale multistage stochastic linear programming problems by introducing scenario sampling into the nested Benders decomposition method. However, in its classical form, SDDP relies heavily on the assumption of interstage-independent random vectors, so that Benders cuts can be shared among different scenarios at the same stage. In many practical applications this assumption may not be satisfied. Cut-sharing has therefore recently been generalized to linear, or at least convex, interstage-dependent uncertainty in the right-hand side of the problem. We build upon this work and further generalize the cut-sharing methodology to a broader class of nonlinear uncertainty models. A real-life power system example illustrates the effectiveness of the proposed techniques. This is joint work with Christian Füllner (KIT).

**Bio:** Steffen Rebennack obtained his PhD in Industrial & Systems Engineering at the University of Florida in 2010. He obtained early tenure at the Colorado School of Mines in the USA and joined KIT in 2017. Currently, he is head of the Stochastic Programming group, which belongs to the Institute of Operations Research (IOR). His research interests are modeling and the design of exact optimization algorithms for problems of special structure arising, for example, in stochastic optimization. In 2015, he received the "ENRE Young Researcher Prize of 2015" for his applied work in power systems optimization under uncertainty. He has served as an editor for the European Journal of Operational Research since 2018.