Fan Chung Graham (UC San Diego)
Title: Clustering in graphs with high clustering coefficients
Abstract: Many real-world networks exhibit the so-called small-world phenomenon, in which every node is relatively close to every other node, and have a large clustering coefficient; i.e., friends of friends are likely to be friends. The task of learning an adequate similarity measure on various feature spaces often involves graphs with high clustering coefficients. We investigate the clustering effect in sparse clustering graphs by examining their structural and spectral properties as well as the enumeration of patterns. In addition, we consider random graph models for clustering graphs that can be used to analyze the behavior of complex networks.
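A quick way to see the two properties the abstract pairs together (short distances and a large clustering coefficient) is the Watts-Strogatz small-world model. The sketch below is a standard networkx illustration of these two quantities, not material from the talk.

```python
# Minimal sketch (not from the talk): the small-world phenomenon via networkx.
import networkx as nx

# Watts-Strogatz model: a ring lattice with a little random rewiring yields
# high clustering together with short average path lengths.
G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)

# Clustering coefficient: how often two neighbors of a node are themselves
# adjacent ("friends of friends are likely to be friends").
print("average clustering:", nx.average_clustering(G))

# Small-world property: every node is relatively close to every other node.
print("average shortest path length:", nx.average_shortest_path_length(G))
```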
Bio: Fan Chung Graham is a Distinguished Professor of Mathematics and Professor of Computer Science and Engineering at the University of California, San Diego. She is also the Paul Erdős Professor in Combinatorics. Her research interests are primarily in graph theory, combinatorics, and algorithmic analysis. She specializes in spectral graph theory, extremal graphs, the theory of quasi-randomness, and the probabilistic analysis of complex networks. Her recent work includes local graph algorithms for complex networks and random-walk-based ranking algorithms. She was awarded the Allendoerfer Award by the Mathematical Association of America in 1990. She is a member of the American Academy of Arts and Sciences, an academician of Academia Sinica, a fellow of the American Mathematical Society, and a SIAM Fellow.
Stefanie Jegelka (MIT and TU Munich)
Title: Benefits of learning with symmetries: eigenvectors, graph representations and sample complexity
Abstract: In many applications, especially in the sciences, data and tasks have known invariances. Encoding such invariances directly into a machine learning model can improve learning outcomes, while also posing challenges for efficient model design.
In the first part of the talk, we will focus on the invariances relevant to eigenvectors and eigenspaces being inputs to a neural network. Such inputs are important, for instance, for graph representation learning. We will discuss targeted architectures that can universally express functions with the relevant invariances or equivariances – sign flips and changes of basis – and their theoretical and empirical benefits.
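As a concrete (and simplified) illustration of the sign-flip symmetry, a function of eigenvectors can be made invariant by symmetrizing over the flip, f(V) = ρ(φ(V) + φ(−V)), in the spirit of the architectures discussed in this line of work. The PyTorch sketch below is my own minimal rendering, not the talk's exact model; the layer names and sizes are illustrative.

```python
# Hedged sketch of a sign-invariant eigenvector encoder (illustrative, simplified).
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    """Maps a stack of eigenvectors V so that flipping signs changes nothing:
    f(V) = rho(phi(V) + phi(-V))."""
    def __init__(self, n_nodes: int, hidden: int = 64, out_dim: int = 32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, V: torch.Tensor) -> torch.Tensor:
        # V: (k, n_nodes), one eigenvector per row. Since phi acts on each row
        # independently, phi(V) + phi(-V) is invariant to flipping the sign of
        # each eigenvector separately.
        return self.rho(self.phi(V) + self.phi(-V))

enc = SignInvariantEncoder(n_nodes=100)
V = torch.randn(8, 100)                       # 8 eigenvectors of a 100-node graph
signs = torch.randint(0, 2, (8, 1)) * 2.0 - 1.0  # arbitrary per-eigenvector signs
assert torch.allclose(enc(V), enc(signs * V), atol=1e-6)  # output is unchanged
```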
Second, we will take a broader, theoretical perspective. Empirically, it is known that encoding invariances into the machine learning model can reduce sample complexity. For the simplified setting of kernel ridge regression or random features, we will discuss new bounds that illustrate two ways in which invariances can reduce sample complexity. Our results hold for learning on manifolds and for invariances to a wide range of group actions.
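For intuition about the kernel setting, one standard way to encode invariance to a finite group G is to average the kernel over the group, K_G(x, x') = (1/|G|) Σ_{g∈G} K(g·x, x'), which makes the learned predictor invariant by construction. The sketch below is my own illustration with an assumed RBF base kernel and the cyclic-shift group; it is not the paper's construction, and it does not reproduce its bounds.

```python
# Sketch (my illustration, not the paper's construction): kernel ridge regression
# with a known invariance encoded by group-averaging the kernel.
import numpy as np

def rbf(x, y, gamma=1.0):
    # Standard RBF base kernel.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def averaged_kernel(x, y, group, base=rbf):
    # K_G(x, y) = (1/|G|) * sum_g base(g(x), y); averaging over the orbit makes
    # the kernel, and hence the learned predictor, invariant under the group.
    return np.mean([base(g(x), y) for g in group])

d = 5
shifts = [lambda x, s=s: np.roll(x, s) for s in range(d)]  # cyclic-shift group

rng = np.random.default_rng(0)
X = rng.normal(size=(40, d))
y = np.sin(X.sum(axis=1))  # a target that is itself shift-invariant

# Kernel ridge regression with the invariant kernel: alpha = (K + lam*I)^{-1} y.
K = np.array([[averaged_kernel(a, b, shifts) for b in X] for a in X])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

x_test = rng.normal(size=d)
k_test = np.array([averaged_kernel(a, x_test, shifts) for a in X])
print("f(x)        =", k_test @ alpha)

# Invariance check: shifting the test input leaves the prediction unchanged.
k_shift = np.array([averaged_kernel(a, np.roll(x_test, 2), shifts) for a in X])
print("f(shift(x)) =", k_shift @ alpha)
```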
This talk is based on joint work with Joshua Robinson, Derek Lim, Behrooz Tahmasebi, Lingxiao Zhao, Tess Smidt, Suvrit Sra and Haggai Maron.
Bio: Stefanie Jegelka is a Humboldt Professor at TU Munich and an Associate Professor in the Department of EECS at MIT. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, a Best Paper Award at ICML and an invitation as sectional lecturer at the International Congress of Mathematicians. She has co-organized multiple workshops on (discrete) optimization in machine learning and graph representation learning, and has served as an Action Editor at JMLR and a program chair of ICML 2022. Her research interests span the theory and practice of algorithmic machine learning, in particular, learning problems that involve combinatorial, algebraic or geometric structure.
Gergely Neu (Universitat Pompeu Fabra)
Title: Online-to-PAC Conversions: Generalization Bounds via Regret Analysis
Abstract: We present a new framework for deriving bounds on the generalization error of statistical learning algorithms from the perspective of online learning. Specifically, we construct an online learning game called the “generalization game”, where an online learner tries to compete with a fixed statistical learning algorithm in predicting the sequence of generalization gaps on a training set of i.i.d. data points. We establish a connection between the online and statistical learning settings by showing that the existence of an online learning algorithm with bounded regret in this game implies a bound on the generalization error of the statistical learning algorithm, up to a martingale concentration term that is independent of the complexity of the statistical learning method. This technique allows us to recover several standard generalization bounds, including a range of PAC-Bayesian and information-theoretic guarantees, as well as generalizations thereof.
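In notation of my own choosing, the shape of the conversion described above can be written as follows (a sketch of the statement's form, following the abstract, not the paper's exact theorem):

```latex
% Sketch of the online-to-PAC conversion (my notation, following the abstract).
% W: output of the statistical learning algorithm on the i.i.d. sample Z_1,...,Z_n;
% gen(W): its generalization gap; Regret_n: the regret of any online learner in
% the "generalization game"; M_n: a martingale term that is independent of the
% complexity of the statistical learning method.
\[
  \mathrm{gen}(W) \;\le\; \frac{\mathrm{Regret}_n}{n} \;+\; \frac{M_n}{n}.
\]
% Instantiating the online learner (e.g., with exponential weights) then recovers
% PAC-Bayesian and information-theoretic bounds as special cases.
```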
Bio: Gergely Neu is a research assistant professor at Universitat Pompeu Fabra in Barcelona, Spain, where he leads a group supported by the ERC Starting Grant project ScaleR. His research is mainly on online optimization, bandit problems, and reinforcement learning theory. Gergely obtained his PhD from the Budapest University of Technology and Economics and did a postdoc at Inria (SequeL team) in Lille, France. He was a Program Chair of COLT 2023 and, before that, of ALT 2020 in San Diego.
Gregory Valiant (Stanford University)
Title: Memory and Energy: Two Bottlenecks for Learning
Abstract: For many large-scale learning systems, two critical bottlenecks are 1) the amount of memory required, and 2) the amount of energy required, both during training and at inference time. (In certain settings, the role of these resources may now eclipse the more traditional resources of time and the amount of training data.) In this talk, I’ll discuss the current landscape of memory-bounded learning and optimization, and will present some new perspectives on energy-centric models of computing and learning, together with a proposal for a low-energy learning system. Throughout, the emphasis will be on the many open directions on both fronts.
Bio: Gregory Valiant is an Associate Professor at Stanford, where he works on a wide variety of problems across learning, information theory, algorithms, and optimization.