AI4OPT Seminar Series

Date: Thursday, October 20, 2022

Location: 9th floor Atrium in Coda Building (756 W Peachtree St NW, Atlanta, GA 30308)

Time: Noon – 1:00 pm

Meeting Link: https://gatech.zoom.us/j/99381428980

Speaker: Karthyek Murthy


Optimizing Tail Risks With Limited Samples: Can Algorithms Engineer Effective Reductions in Variance & Model Bias?

Abstract: The ability to learn and control tail risks, besides being an integral part of quantitative risk management, is central to running operations requiring high service levels and cyber-physical systems with high-reliability specifications. Despite this significance, scalable algorithmic approaches have remained elusive, owing to the rarity with which relevant risky samples are observed and the critical role experts play in engineering model selection and variance reduction techniques to tackle this rarity. Our goal is to examine whether these intricately tailored bias- and variance-reduction benefits can instead be induced, to an extent, by instance-agnostic algorithms. We show how this goal is achievable by algorithms set up to exploit the similarity with which risk events unfold at different scales. The two novel algorithms we introduce to this end are as follows:

  • Self-structuring importance samplers offering versatile variance reduction: Conventionally, an explicit large deviations analysis has been a necessary ingredient for arriving at efficient samplers. This limiting requirement is overcome by means of a transformation that learns and replicates the concentration properties observed in less rare samples. This radically different approach leads to asymptotically optimal variance reduction despite being oblivious to the problem structure, and it is fit to serve as a vehicle for variance reduction in optimization formulations (a generic importance-sampling sketch follows this list).
  • Debiased learning for minimization of tail risks: Due to insufficient samples in relevant tail regions, it is often inevitable that we plug in a parametric distribution to solve a downstream optimization problem. We show how this plug-in bias is rectifiable in optimization by utilizing debiased learning in conjunction with the self-similarity in distribution tails. Whereas robust optimization models retain the worst-case model error in the objective, this targeted approach seeks to automatically cancel the bias introduced, despite not knowing the nature of the error committed in the estimation step.
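For readers less familiar with the variance-reduction idea referenced in the first item, the short Python sketch below estimates a rare-event probability with crude Monte Carlo and with a mean-shifted importance sampler. It is a generic textbook illustration, not the self-structuring sampler presented in the talk; the Gaussian model, the threshold of 4, and the choice of proposal are assumptions made purely for illustration.

    # Generic importance-sampling illustration (NOT the talk's self-structuring sampler).
    # Model, threshold, and proposal are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    threshold = 4.0          # rare event: {X > 4} for X ~ N(0, 1)
    n = 100_000

    # Crude Monte Carlo: almost no samples land in the tail, so the estimate is noisy.
    x = rng.standard_normal(n)
    crude = (x > threshold).astype(float)

    # Importance sampling with a mean-shifted proposal N(threshold, 1).
    # Each sample is re-weighted by the likelihood ratio p(y) / q(y).
    y = rng.normal(loc=threshold, scale=1.0, size=n)
    log_lr = -threshold * y + 0.5 * threshold**2   # log[ N(y; 0, 1) / N(y; threshold, 1) ]
    weighted = (y > threshold) * np.exp(log_lr)

    print(f"crude MC estimate  : {crude.mean():.3e}  (std err {crude.std(ddof=1) / np.sqrt(n):.1e})")
    print(f"importance sampling: {weighted.mean():.3e}  (std err {weighted.std(ddof=1) / np.sqrt(n):.1e})")

The mean-shifted proposal pushes samples into the tail region, and the likelihood-ratio weights keep the estimator unbiased while shrinking its variance by orders of magnitude. Designing such a proposal ordinarily requires problem-specific large deviations analysis, which is precisely the expert step the talk's algorithms aim to automate.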

Bio: Karthyek Murthy is an Assistant Professor at the Singapore University of Technology and Design (SUTD). His research interests lie at the intersection of applied probability, optimization under uncertainty, and simulation. Prior to joining SUTD, he was a postdoctoral researcher in the Industrial Engineering & Operations Research department at Columbia University. His research has been recognized with third place in the 2021 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition, the 2019 WSC Best Paper Award, the TIFR-Sasken Best Ph.D. Thesis Award, and IBM and TCS research fellowships. Karthyek serves as an Associate Editor for Stochastic Systems.

Note: Lunch will be served at the seminar. Please stop by 15 minutes before the seminar to pick up lunch.

Please subscribe to the ai4opt-seminars mailing list at https://lists.isye.gatech.edu/mailman/listinfo/ai4opt-seminars