Title: Online-to-PAC Conversions: Generalization Bounds via Regret Analysis
Abstract: We present a new framework for deriving bounds on the generalization
error of statistical learning algorithms from the perspective of
online learning. Specifically, we construct an online learning game
called the “generalization game”, where an online learner is trying to
compete with a fixed statistical learning algorithm in predicting the
sequence of generalization gaps on a training set of i.i.d. data
points. We establish a connection between the online and statistical
learning settings by showing that the existence of an online learning
algorithm with bounded regret in this game implies a bound on the
generalization error of the statistical learning algorithm, up to a
martingale concentration term that is independent of the complexity of
the statistical learning method. This technique allows us to recover
several standard generalization bounds including a range of
PAC-Bayesian and information-theoretic guarantees, as well as
generalizations thereof.
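The conversion described in the abstract can be sketched schematically as follows. This is a hedged reconstruction in our own notation (the symbols $S$, $W$, $\Delta_i$, $p_i$, $\rho_n$, $M_n$ are not taken from the paper), intended only to illustrate the shape of the argument: the generalization gap decomposes into an online regret term plus a martingale term.

```latex
% Assumptions (ours, for illustration): training set S = (Z_1, ..., Z_n)
% drawn i.i.d.; W = hypothesis returned by the statistical learning
% algorithm; bounded loss \ell. For each hypothesis w, define the
% per-sample generalization gap
%   \Delta_i(w) = \mathbb{E}_Z[\ell(w, Z)] - \ell(w, Z_i),
% which has zero mean for any fixed w. Suppose an online learner
% plays predictions p_1, ..., p_n in the generalization game and
% guarantees regret at most \rho_n against the comparator W, i.e.
%   \sum_{i=1}^{n} \Delta_i(W) - \sum_{i=1}^{n} \langle p_i, \Delta_i \rangle
%   \le \rho_n.
% Then the generalization gap of W decomposes as
\[
  \mathrm{gen}(W, S)
  \;=\; \frac{1}{n} \sum_{i=1}^{n} \Delta_i(W)
  \;\le\; \frac{\rho_n + M_n}{n},
  \qquad
  M_n := \sum_{i=1}^{n} \langle p_i, \Delta_i \rangle.
\]
% Since p_i depends only on Z_1, ..., Z_{i-1}, the increments of M_n
% have zero conditional mean, so M_n is a martingale: it concentrates
% at a rate independent of the complexity of the statistical learner,
% matching the "martingale concentration term" of the abstract.
```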
Bio: Gergely Neu is a research assistant professor at Universitat Pompeu Fabra in Barcelona, Spain, where he leads a group supported by the ERC Starting Grant project ScaleR. His research focuses on online optimization, bandit problems, and reinforcement learning theory.
Gergely obtained his PhD from the Budapest University of Technology and Economics, and did a postdoc with the SequeL team at Inria Lille, France.
Gergely was a Program Chair of COLT 2023, and before that of ALT 2020 in San Diego.