Thursday, March 21, 2019

19:00 – 22:00 | Reception with appetizers (Streeterville room)
Friday, March 22, 2019

08:45 – 09:00 | Opening remarks
09:00 – 11:00 | Tutorial 1: Exploration-Exploitation in Reinforcement Learning (Alessandro Lazaric, Matteo Pirotta and Ronan Fruit)
11:00 – 11:30 | Break
11:30 – 13:00 | Session 1: Sequential Learning
11:30 | Online Non-Additive Path Learning under Full and Partial Information. Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Holakou Rahmanian and Manfred Warmuth
11:48 | Dynamic Pricing with Finitely Many Unknown Valuations. Nicolo Cesa-Bianchi, Tommaso Renato Cesari and Vianney Perchet
12:06 | Online Influence Maximization with Local Observations. Julia Olkhovskaya, Gergely Neu and Gabor Lugosi
12:24 | Competitive ratio vs regret minimization: achieving the best of both worlds. Amit Daniely and Yishay Mansour
12:42 | Average-Case Information Complexity of Learning. Ido Nachum and Amir Yehudayoff
13:00 – 14:00 | Lunch break
14:00 – 15:00 | Plenary talk 1: Why is fair machine learning hard and how can theory help? (Jennifer Wortman Vaughan)
15:00 – 15:15 | Break
15:15 – 17:03 | Session 2: Learning theory I
15:15 | Adaptive Exact Learning of Decision Trees from Membership Queries. Nader Bshouty and Catherine Haddad-Zaknoon
15:33 | Limit Learning Equivalence Structures. Ekaterina Fokina, Timo Kötzing and Luca San Mauro
15:51 | Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning. Brian Bullins, Elad Hazan, Adam Kalai and Roi Livni
16:09 | Attribute-efficient learning of monomials over highly-correlated variables. Alexandr Andoni, Rishabh Dudeja, Daniel Hsu and Kiran Vodrahalli
16:27 | A Sharp Lower Bound for Agnostic Learning with Sample Compression Schemes. Steve Hanneke and Aryeh Kontorovich
16:45 | Improved generalization bounds for robust learning. Idan Attias, Aryeh Kontorovich and Yishay Mansour
17:03 – 17:30 | Walk to boat trip departure point at 401 N Michigan Ave, Chicago, IL 60611
17:30 – 19:00 | Boat trip
Saturday, March 23, 2019

09:00 – 11:00 | Tutorial 2: Structured Random Matrices (Ramon van Handel)
11:00 – 11:30 | Break
11:30 – 13:00 | Session 3: Bandits, partial feedback, privacy, fairness
11:30 | Cleaning up the neighborhood: A full classification for adversarial partial monitoring. Tor Lattimore and Csaba Szepesvári
11:48 | PAC Battling Bandits in the Plackett-Luce Model. Aadirupa Saha and Aditya Gopalan
12:06 | Differentially Private Empirical Risk Minimization in Non-interactive Local Model via Polynomial of Inner Product Approximation. Di Wang, Adam Smith and Jinhui Xu
12:24 | Old Techniques in Differentially Private Linear Regression. Or Sheffet
12:42 | PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review. Ivan Stelmakh, Nihar Shah and Aarti Singh
13:00 – 14:00 | Lunch break
14:00 – 15:00 | Plenary talk 2: Theory for Representation Learning (Sanjeev Arora)
15:00 – 15:15 | Break
15:15 – 16:45 | Session 4: Optimization
15:15 | Two-Player Games for Efficient Non-Convex Constrained Optimization. Andrew Cotter, Heinrich Jiang and Karthik Sridharan
15:33 | General parallel optimization without a metric. Xuedong Shang, Emilie Kaufmann and Michal Valko
15:51 | Online Linear Optimization with Sparsity Constraints. Chi-Jen Lu, Jun-Kun Wang and Shou-De Lin
16:09 | Stochastic Nonconvex Optimization with Large Minibatches. Weiran Wang and Nathan Srebro
16:27 | A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption. Peter Bartlett, Victor Gabillon and Michal Valko
16:45 – 17:00 | Break
17:00 – 18:48 | Session 5: Statistics and Learning I
17:00 | Interplay of minimax estimation and minimax support recovery under sparsity. Mohamed Ndaoud
17:18 | Uniform regret bounds over R^d for the sequential linear regression problem with the square loss. Pierre Gaillard, Sebastien Gerchinovitz, Malo Huard and Gilles Stoltz
17:36 | Ising Models with Latent Conditional Gaussian Variables. Frank Nussbaum and Joachim Giesen
17:54 | Exploiting geometric structure in mixture proportion estimation with generalised Blanchard-Lee-Scott estimators. Henry Reeve and Ata Kaban
18:12 | A minimax near-optimal algorithm for adaptive rejection sampling. Juliette Achdou, Joseph Lam, Alexandra Carpentier and Gilles Blanchard
18:30 | An Exponential Efron-Stein Inequality for Lq Stable Learning Rules: The Deleted Estimate Case. Karim Abou-Moustafa and Csaba Szepesvári
18:48 – 19:00 | Break
19:00 – 19:30 | Business meeting
19:30 – 22:30 | Banquet at the conference hotel (Lakeshore East room)
Sunday, March 24, 2019

09:00 – 11:00 | Tutorial 3: Computation and the Brain (Christos Papadimitriou)
11:00 – 11:30 | Break
11:30 – 13:00 | Session 6: Learning theory II
11:30 | Hardness of Improper One-sided Learning of Conjunctions for All Uniformly Falsifiable CSPs. Alexander Durgin and Brendan Juba
11:48 | Optimal Collusion-Free Teaching. David Kirkpatrick, Hans Simon and Sandra Zilles
12:06 | Sample Compression for Real-Valued Learners. Steve Hanneke, Aryeh Kontorovich and Menachem Sadigurschi
12:24 | On Learning Graphs with Edge-Detecting Queries. Hasan Abasi and Nader Bshouty
12:42 | Can Adversarially Robust Learning Leverage Computational Hardness? Saeed Mahloujifar and Mohammad Mahmoody
13:00 – 14:00 | Lunch break
14:00 – 15:30 | Session 7: Statistics and Learning II
14:00 | Sequential change-point detection: Laplace concentration of scan statistics and non-asymptotic delay bounds. Odalric-Ambrym Maillard
14:18 | Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach. Mastane Achab, Anna Korba and Stéphan Clémençon
14:36 | Minimax Learning of Ergodic Markov Chains. Geoffrey Wolfer and Aryeh Kontorovich
14:54 | A Generalized Neyman-Pearson Criterion for Optimal Domain Adaptation. Clayton Scott
15:12 | A Tight Excess Risk Bound via a Unified PAC-Bayesian–Rademacher–Shtarkov–MDL Complexity. Peter Grünwald and Nishant Mehta
15:30 – 16:00 | Break
16:00 – 18:30 | Workshop: When Smaller Sample Sizes Suffice for Learning