Plenary Speakers

Nika Haghtalab

Talk title: TBD

Bio: Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning and algorithmic economics. Prof. Haghtalab’s work builds theoretical foundations for ensuring both the performance of learning algorithms in the presence of everyday economic forces and the integrity of social and economic forces that are born out of the use of machine learning systems. Previously, she was an Assistant Professor in the Computer Science Department at Cornell University, in 2019–2020. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University. She is a co-founder of the Learning Theory Alliance (LeT-All). Her honors include an NSF CAREER award, a NeurIPS Outstanding Paper Award, the CMU School of Computer Science Dissertation Award, a SIGecom Dissertation Honorable Mention, and several industry research awards.

Taiji Suzuki

Talk title: TBD

Bio: Taiji Suzuki is currently an Associate Professor in the Department of Mathematical Informatics at the University of Tokyo. He also serves as leader of the Deep Learning Theory team at RIKEN AIP. He received his Ph.D. in information science and technology from the University of Tokyo in 2009. He was an assistant professor in the Department of Mathematical Informatics at the University of Tokyo from 2009 to 2013, and then an associate professor in the Department of Mathematical and Computing Science at Tokyo Institute of Technology from 2013 to 2017. His research interests span statistical learning theory for deep learning, kernel methods, sparse estimation, and stochastic optimization for large-scale machine learning problems. He has served as an area chair of premier conferences such as NeurIPS, ICML, ICLR, and AISTATS, and as a program chair of ACML. He received an Outstanding Paper Award at ICLR 2021, the MEXT Young Scientists’ Prize, and the 2017 Outstanding Achievement Award from the Japan Statistical Society.

Matus Telgarsky

Talk title: TBD

Bio: Matus Telgarsky is an assistant professor at the University of Illinois, Urbana-Champaign, specializing in deep learning theory. He was fortunate to receive his PhD from UCSD under Sanjoy Dasgupta. Other highlights include: co-founding the Midwest ML Symposium (MMLS) with Po-Ling Loh in 2017; receiving a 2018 NSF CAREER award; and organizing two Simons Institute programs, one on deep learning theory (summer 2019) and one on generalization (fall 2024).

Vladimir Vovk

Talk Title: “Conformal prediction in online compression models: Twenty years later”

Abstract: My plan is to review the current state of conformal prediction in online compression models. This is the topic that I started in my ALT 2003 paper, whose expanded version was published in the ALT 2003 Special Issue of Theoretical Computer Science in 2006. Perhaps the most popular online compression model is the exchangeability model, which is standard in mainstream machine learning, but I will describe several other useful models of this kind. Online compression models are a perfect home for conformal prediction, which is a way to produce set predictions and probabilistic predictions with guaranteed properties of validity (namely, a guaranteed probability of error for set predictions and probabilistic calibration for probabilistic predictions). The properties of validity make it possible to “invert” conformal prediction to obtain online methods of testing online compression models. A recent book-length review of conformal prediction and testing is “Algorithmic Learning in a Random World” (second edition) by Vovk, Gammerman, and Shafer published by Springer in December 2022; in this talk I will give a few highlights.
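The abstract's validity guarantee (a set prediction that errs with probability at most alpha under exchangeability) can be illustrated with a minimal sketch of split conformal prediction. This is an assumption-level illustration of the general idea, not the specific online compression models or the formulation covered in the talk; the function name and the toy mean predictor are inventions for this example.

```python
import math
import random

def conformal_halfwidth(cal_residuals, alpha):
    """Half-width of a split conformal interval from calibration residuals.

    Under exchangeability of calibration and test points, an interval
    [prediction - h, prediction + h] with this h covers a fresh label
    with probability at least 1 - alpha.
    """
    n = len(cal_residuals)
    # Rank of the needed empirical quantile: the ceil((1 - alpha)(n + 1))-th
    # smallest residual; if that rank exceeds n, only the trivial interval works.
    k = math.ceil((1 - alpha) * (n + 1))
    if k > n:
        return float("inf")
    return sorted(cal_residuals)[k - 1]

# Toy usage: predict every label with the calibration mean, then check
# empirical coverage of the resulting interval on held-out draws.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)]
cal, test = data[:100], data[100:]
pred = sum(cal) / len(cal)
residuals = [abs(y - pred) for y in cal]
h = conformal_halfwidth(residuals, alpha=0.1)
coverage = sum(pred - h <= y <= pred + h for y in test) / len(test)
# coverage should land near the guaranteed 1 - alpha = 0.9 level
```

The guarantee is marginal (on average over calibration and test draws) and distribution-free: nothing about the Gaussian toy data is used, only exchangeability.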

Bio: Vladimir Vovk is Professor of Computer Science at Royal Holloway, University of London. His research interests include machine learning and the foundations of probability and statistics. He was one of the founders of prediction with expert advice, an area of machine learning that avoids making any statistical assumptions about the data. In 2001 he and Glenn Shafer published a book (“Probability and Finance: It’s Only a Game”) on new game-theoretic foundations of probability; the sequel (“Game-theoretic Foundations for Probability and Finance”) appeared in 2019. His second book (“Algorithmic Learning in a Random World”, 2005), co-authored with Alex Gammerman and Glenn Shafer, is the first monograph on conformal prediction, a machine learning method that provides provably valid measures of confidence for its predictions; an expanded and updated second edition has just been published (December 2022). His current research centres on applications of game-theoretic probability to statistics and machine learning.