Structure in Min-Max Optimization (and How to Use It!)

Date and time: 8 June 2021, 15:00 – 16:00 CEST (UTC +2)
Speaker: Jelena Diakonikolas, University of Wisconsin-Madison
Title: Structure in Min-Max Optimization (and How to Use It!)

Zoom: https://kth-se.zoom.us/j/69560887455
Meeting ID: 695 6088 7455
Password: 755440

Abstract: Min-max optimization problems arise in a variety of applications in mathematical programming, engineering, finance, and machine learning. Recently, they have attracted significant renewed research interest because they naturally model adversarial training problems in machine learning. In this context, the min-max form of the problem arises because, in addition to minimizing a loss function corresponding to the learning model, we also need to account for adversarial corruptions of the data. Such corruptions can be modelled by an adversary trying to maximize the same loss, hence giving rise to the min-max form.
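One standard way to write this adversarial training problem is the following (an illustrative formulation, not quoted from the abstract; here h_theta is the learning model, ell the loss, D the data distribution, and epsilon the adversary's perturbation budget):

```latex
% The learner picks parameters \theta to minimize the expected loss,
% while an adversary picks a norm-bounded perturbation \delta of each
% input to maximize that same loss.
\min_{\theta} \;
\mathbb{E}_{(x, y) \sim \mathcal{D}}
\Big[ \max_{\|\delta\| \le \epsilon}
      \ell\big( h_{\theta}(x + \delta),\, y \big) \Big]
```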

Although deceptively similar to standard (min) optimization, min-max optimization has unique features that often cause natural approaches, such as gradient descent-type updates, to fail even on relatively mild problem instances. Further, the types of guarantees that can be established are generally more restricted than in the counterpart settings of standard convex or smooth nonconvex optimization.
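A classical toy instance of this failure (a standard illustration, not an example taken from the talk) is the bilinear problem min_x max_y xy, whose unique saddle point is (0, 0); simultaneous gradient descent-ascent spirals away from it at a geometric rate:

```python
import numpy as np

# Bilinear toy problem: min_x max_y f(x, y) = x * y.
# The unique saddle point is (x, y) = (0, 0).
# Simultaneous gradient descent-ascent (GDA):
#   x_{k+1} = x_k - eta * df/dx = x_k - eta * y_k
#   y_{k+1} = y_k + eta * df/dy = y_k + eta * x_k
# Each step multiplies the distance to the saddle point by
# sqrt(1 + eta**2) > 1, so GDA diverges for every positive step size.

eta = 0.1
z = np.array([1.0, 1.0])  # start at (x, y) = (1, 1)
for _ in range(1000):
    x, y = z
    z = np.array([x - eta * y, y + eta * x])

print(np.linalg.norm(z))  # roughly 200 after 1000 steps: GDA diverges
```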

I will discuss how introducing structure into min-max optimization, or exploiting structure already present in common problems, can help surpass many of the obstacles raised by worst-case instances. The first example of structure is a correspondence between smooth convex-concave optimization problems and fixed-point equations with nonexpansive maps, which leads to near-optimal and parameter-free methods for broad classes of min-max problems. The second is a novel structural condition for nonconvex-nonconcave setups that is present in many problem instances and guarantees convergence of an Extragradient-type method, sidestepping the impossibility results that exist for general smooth nonconvex-nonconcave min-max optimization. Finally, I will discuss how the min-max perspective can be leveraged to exploit the separable structure of convex ERM problems and obtain a faster variance-reduced method, even surpassing the obstacles of existing lower bounds for general composite optimization.
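For contrast with the divergence example above, here is a minimal sketch (an illustration of the classical Extragradient method of Korpelevich, not code from the talk) on the same bilinear problem; the extrapolation step is what turns the divergence of gradient descent-ascent into convergence, and Extragradient-type updates are the kind of method referred to in the second result above:

```python
import numpy as np

def F(z):
    """Gradient operator of f(x, y) = x * y for the min-max problem:
    F(z) = (df/dx, -df/dy) = (y, -x); its zeros are the saddle points."""
    x, y = z
    return np.array([y, -x])

# Extragradient (Korpelevich, 1976): extrapolate first, then update
# using the operator evaluated at the extrapolated point.
#   z_{k+1/2} = z_k - eta * F(z_k)
#   z_{k+1}   = z_k - eta * F(z_{k+1/2})
# On the bilinear problem each step contracts the distance to the
# saddle point by sqrt(1 - eta**2 + eta**4) < 1 (for 0 < eta < 1),
# whereas plain gradient descent-ascent expands it.

eta = 0.1
z = np.array([1.0, 1.0])  # same starting point as the GDA example
for _ in range(2000):
    z_half = z - eta * F(z)    # extrapolation (lookahead) step
    z = z - eta * F(z_half)    # update with the lookahead gradient

print(np.linalg.norm(z))  # near zero: converges to the saddle point (0, 0)
```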

Bio: Jelena Diakonikolas is an assistant professor in the Department of Computer Sciences at UW-Madison. Prior to joining UW-Madison, she held postdoctoral positions at UC Berkeley and Boston University. She received her PhD from Columbia University. Her research interests are in efficient optimization methods for large-scale problems and connections between optimization and other areas, such as dynamical systems and Markov chain Monte Carlo methods. Jelena is a recipient of a Simons-Berkeley Research Fellowship, the Morton B. Friedman Prize for Excellence at Columbia Engineering, and a Qualcomm Innovation Fellowship.