Constantinos Daskalakis

Title: Equilibrium Computation and the Foundations of Deep Learning

Abstract: Deep Learning has recently yielded important advances in single-agent learning challenges, much of that progress being fueled by the empirical success of gradient descent and its variants in computing local optima of non-convex optimization problems. In multi-agent learning applications, the role of single-objective optimization is played by equilibrium computation, yet our understanding of its complexity in settings relevant to Deep Learning remains sparse. In this talk we focus on min-max optimization of nonconvex-nonconcave objectives, which has found applications in GANs and other adversarial learning problems. Here, not only are there no known gradient-descent-based methods that converge even to approximate local min-max equilibria, but the computational complexity of identifying such equilibria remains poorly understood. We show that finding approximate local min-max equilibria of Lipschitz and smooth objectives requires a number of queries to the function and its gradient that is exponential in the relevant parameters, in sharp contrast to the polynomial number of queries required to find approximate local minima of non-convex objectives. Our oracle lower bound is a byproduct of a complexity-theoretic result showing that finding approximate local min-max equilibria is computationally equivalent to finding Brouwer fixed points and Nash equilibria in non-zero-sum games, and is thus PPAD-complete. Only minimal knowledge of complexity theory will be assumed in the talk.

(This is joint work with Stratis Skoulakis and Manolis Zampetakis.)
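
To give a concrete feel for the difficulty the abstract alludes to, the minimal sketch below runs simultaneous gradient descent-ascent (GDA) on the textbook bilinear objective f(x, y) = xy. This example, along with the chosen step size and iteration count, is a standard illustration and is not taken from the talk itself; it shows how even the simplest min-max problem defeats naive gradient dynamics, with the iterates spiraling away from the unique equilibrium at the origin.

    # Simultaneous gradient descent-ascent on f(x, y) = x * y.
    # The unique min-max equilibrium is (0, 0), yet each step multiplies
    # the distance to the origin by sqrt(1 + eta^2), so the iterates
    # spiral outward and diverge.
    eta = 0.2        # step size (illustrative choice)
    x, y = 1.0, 1.0  # starting point

    for t in range(200):
        gx, gy = y, x                        # grad_x f = y, grad_y f = x
        x, y = x - eta * gx, y + eta * gy    # descend in x, ascend in y

    print(x, y)  # far from (0, 0): GDA diverges even on this bilinear objective

In a single-objective problem, gradient descent with a small enough step size makes monotone progress toward a local minimum; here the descent and ascent updates continually undo each other, which is one elementary symptom of the gap between local minimization and local min-max equilibrium computation discussed in the talk.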

Bio: Constantinos Daskalakis is a Professor of Computer Science and Electrical Engineering at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens and a Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning, and Statistics. He has resolved long-standing open problems about the computational complexity of the Nash equilibrium, the mathematical structure and computational complexity of multi-item auctions, and the behavior of machine-learning methods. He has obtained computationally and statistically efficient methods for statistical hypothesis testing and learning in high-dimensional settings, as well as results characterizing the structure and concentration properties of high-dimensional distributions. He has been honored with the 2007 Microsoft Graduate Research Fellowship, the 2008 ACM Doctoral Dissertation Award, the Game Theory and Computer Science (Kalai) Prize from the Game Theory Society, the 2010 Sloan Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, the 2011 Ruth and Joel Spira Award for Distinguished Teaching, the 2012 Microsoft Research Faculty Fellowship, the 2015 Research and Development Award by the Giuseppe Sciacca Foundation, the 2017 Google Faculty Research Award, the 2018 Simons Investigator Award, the 2018 Rolf Nevanlinna Prize from the International Mathematical Union, the 2018 ACM Grace Murray Hopper Award, the 2019 Bodossaki Foundation Distinguished Young Scientists Award, and the 2019 Frank Quick Faculty Research Innovation Fellowship. He is also a recipient of Best Paper awards at the ACM Conference on Economics and Computation in 2006 and 2013.