Bio:
Sanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University and a Visiting Professor in Mathematics at the Institute for Advanced Study. He works on theoretical computer science and theoretical machine learning. He has received the Packard Fellowship (1997), the Simons Investigator Award (2012), the Gödel Prize (2001 and 2010), the ACM Prize in Computing (formerly the ACM-Infosys Foundation Award in the Computing Sciences) (2012), and the Fulkerson Prize in Discrete Mathematics (2012). He is a fellow of the American Academy of Arts and Sciences and a member of the National Academy of Sciences.
Talk: Theory for Representation Learning
The goal of representation learning is to use unlabeled data to learn a representation function f such that replacing a data point x by its feature vector f(x) in new classification tasks reduces the requirement for labeled data. This is distinct from semi-supervised learning, where training can leverage both labeled and unlabeled data.
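To make this setup concrete, here is a minimal sketch (not part of the talk; the encoder f, the variable names, and the choice of a linear classifier are assumptions for illustration) of how a learned representation is typically used downstream: freeze f, replace each labeled example x by f(x), and fit a simple classifier on a small labeled sample.

```python
# Hypothetical sketch: evaluating a frozen representation f on a new
# classification task with few labels. All names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def downstream_accuracy(f, X_train, y_train, X_test, y_test):
    """f maps a raw data point to a feature vector; only the linear
    classifier on top of f(x) is trained, so fewer labels are needed."""
    Z_train = np.stack([f(x) for x in X_train])  # replace x by f(x)
    Z_test = np.stack([f(x) for x in X_test])
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return clf.score(Z_test, y_test)
```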
We survey some theory developed recently in our group for representation learning, especially for word embeddings and text representations. Most of the talk is about a recent paper that gives theoretical guarantees for empirical methods such as QuickThoughts [Logeswaran and Lee, 2018], which computes text representations. These methods rely upon access to pairs of "semantically similar" data points and try to ensure that the representations of such a pair have high inner product. We call such methods "contrastive learning" since they leverage the contrast between similar pairs and random pairs. I'll describe experiments that support and illustrate the theory.
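For concreteness, the following is a minimal sketch of a contrastive objective of the kind described above, assuming a logistic loss and one random negative per similar pair; the function and argument names are illustrative, not the paper's code, and the actual methods may use other losses or multiple negatives.

```python
# Minimal sketch of a contrastive objective: push the inner product of a
# similar pair f(x), f(x+) above that of a random pair f(x), f(x-).
# All names are illustrative; this is not the authors' implementation.
import numpy as np

def contrastive_loss(f_x, f_pos, f_neg):
    """f_x, f_pos, f_neg: (batch, dim) arrays of feature vectors for
    anchors, semantically similar points, and random negatives."""
    pos = np.sum(f_x * f_pos, axis=1)  # inner product with similar point
    neg = np.sum(f_x * f_neg, axis=1)  # inner product with random point
    # logistic loss -log sigmoid(pos - neg), computed stably
    return np.mean(np.logaddexp(0.0, neg - pos))
```

Minimizing such a loss over an encoder f trained on similar/random pairs is the sense in which representations of semantically similar pairs end up with high inner product.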
(Joint work with Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis and Nikunj Saunshi)