Saturday, February 8, 2020: ITALT workshop day
|
08:00 – 08:30 | Light breakfast |
08:30 – 10:20 | Tutorial: Deep Learning Essentials | Ruslan Salakhutdinov
10:20 – 10:40 | Break |
10:40 – 11:30 | Invited talk: In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors | Daniel M. Roy
11:30 – 11:50 | Break |
11:50 – 12:40 | Invited talk: Margins, perceptrons, and deep networks | Matus Telgarsky
12:40 – 14:00 | Lunch (on your own) |
14:00 – 15:50 | Tutorial: Incentivizing and Coordinating Exploration | Alex Slivkins and Bobby Kleinberg
15:50 – 16:10 | Break |
16:10 – 17:00 | Invited talk: PAC-Bayes, Rademacher and Descriptional Complexities: Three Sides of the Same Coin | Peter Grünwald
Sunday, February 9, 2020 |
|
08:15 – 08:45 | Light breakfast |
08:45 – 09:00 | Opening remarks |
09:00 – 11:00 | Tutorial: A survey on random projections | Jelani Nelson
11:00 – 11:20 | Break |
11:20 – 12:20 | Bandits I |
11:20 | Top-k Combinatorial Bandits with Full-Bandit Feedback | Idan Rejwan and Yishay Mansour |
11:40 | Thompson Sampling for Adversarial Bit Prediction | Yuval Lewi, Haim Kaplan and Yishay Mansour |
12:00 | Bandit Algorithms Based on Thompson Sampling for Bounded Reward Distributions | Charles Riou and Junya Honda |
12:20 – 14:00 | Lunch (on your own) |
14:00 – 15:00 | Plenary talk: The Unreasonable Effectiveness of Gradient Descent | John Lafferty
15:00 – 15:20 | Break |
15:20 – 16:40 | Unsupervised and interactive learning |
15:20 | Toward Universal Testing of Dynamic Network Models | Abram Magner and Wojciech Szpankowski |
15:40 | On the Analysis of EM for truncated mixtures of two Gaussians | Sai Ganesh Nagarajan and Ioannis Panageas |
16:00 | Algebraic and Analytic Approaches for Parameter Learning in Mixture Models | Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor and Soumyabrata Pal |
16:20 | Interactive Learning of a Dynamic Structure | Ehsan Emamjomeh-Zadeh, David Kempe, Mohammad Mahdian and Robert Schapire |
16:40 – 17:00 | Break |
17:00 – 18:20 | Dynamical systems, RL, control |
17:00 | Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies | Tom Zahavy, Avinatan Hassidim, Haim Kaplan and Yishay Mansour |
17:20 | Robust guarantees for learning an autoregressive filter | Holden Lee and Cyril Zhang |
17:40 | Mixing Time Estimation in Ergodic Markov Chains from a Single Trajectory with Contraction Methods | Geoffrey Wolfer |
18:00 | The Nonstochastic Control Problem | Elad Hazan, Sham Kakade and Karan Singh |
18:30 – 20:30 | Poster session and reception |
20:30 – 21:00 | Business meeting |
Monday, February 10, 2020 |
|
08:30 – 09:00 | Light breakfast |
09:00 – 11:00 | Tutorial: Stochastic Calculus in Machine Learning: Optimization, Sampling, Simulation | Maxim Raginsky
11:00 – 11:20 | Break |
11:20 – 12:20 | Statistical learning theory I |
11:20 | On the Complexity of Proper Distribution-Free Learning of Linear Classifiers | Philip Long and Raphael Long |
11:40 | Distribution Free Learning with Local Queries | Galit Bary Weisberg, Amit Daniely and Shai Shalev-Shwartz |
12:00 | On the Expressive Power of Kernel Methods and the Efficiency of Kernel Learning by Association Schemes | Roi Livni and Pravesh K. Kothari |
12:20 – 14:00 | Lunch (on your own) |
13:20 – 13:50 | AALT meeting |
14:00 – 15:00 | Plenary talk: A Hard Look at Soft Concepts | Dafna Shahaf
15:00 – 15:20 | Break |
15:20 – 16:40 | Optimization |
15:20 | Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop | Dmitry Kovalev, Samuel Horváth and Peter Richtárik |
15:40 | Leverage Score Sampling for Faster Accelerated Regression and ERM | Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli and Aaron Sidford |
16:00 | A Tight Convergence Analysis for Stochastic Gradient Descent with Delayed Updates | Yossi Arjevani, Ohad Shamir and Nathan Srebro |
16:20 | Finding Robust Nash equilibria | Vianney Perchet |
16:40 – 17:00 | Break |
17:00 – 18:40 | Nonstandard models |
17:00 | A Non-Trivial Algorithm Enumerating Relevant Features over Finite Fields | Mikito Nanashima |
17:20 | Approximate Representer Theorems in Non-reflexive Banach Spaces | Kevin Schlegel |
17:40 | Cautious Limit Learning | Vanja Doskoč and Timo Kötzing |
18:00 | What relations are reliably embeddable in Euclidean space? | Robi Bhattacharjee and Sanjoy Dasgupta |
18:20 | On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption | Hanti Lin and Jiji Zhang |
18:40 – 19:30 | Break |
19:00 | Banquet |
Tuesday, February 11, 2020 |
|
08:30 – 09:00 | Light breakfast |
09:00 – 10:40 | Online learning and optimization |
09:00 | Cooperative Online Learning: Keeping your Neighbors Updated | Nicolò Cesa-Bianchi, Tommaso Cesari and Claire Monteleoni |
09:20 | Online Non-Convex Learning: Following the Perturbed Leader is Optimal | Arun Suggala and Praneeth Netrapalli |
09:40 | An adaptive stochastic optimization algorithm for resource allocation | Xavier Fontaine, Shie Mannor and Vianney Perchet |
10:00 | Exponentiated Gradient Meets Gradient Descent | Udaya Ghai, Elad Hazan and Yoram Singer |
10:20 | Robust Algorithms for Online k-means Clustering | Aditya Bhaskara and Aravinda Kanchana Ruwanpathirana |
10:40 – 11:00 | Break |
11:00 – 12:20 | Bandits II |
11:00 | First-Order Bayesian Regret Analysis of Thompson Sampling | Mark Sellke and Sébastien Bubeck |
11:20 | Feedback graph regret bounds for Thompson Sampling and UCB | Thodoris Lykouris, Éva Tardos and Drishti Wali |
11:40 | Optimal δ-correct best-arm selection for general distributions | Shubhada Agrawal, Sandeep Juneja and Peter Glynn |
12:00 | Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling | Cindy Trinh, Emilie Kaufmann, Claire Vernade and Richard Combes |
12:20 – 14:00 | Lunch (on your own) |
14:00 – 15:00 | Plenary talk: Winnowing with gradient descent | Manfred Warmuth
15:00 – 15:20 | Break |
15:20 – 16:20 | Statistical learning theory II |
15:20 | Optimal multiclass overfitting by sequence reconstruction from Hamming queries | Jayadev Acharya and Ananda Theertha Suresh |
15:40 | Adversarially Robust Learning Could Leverage Computational Hardness | Sanjam Garg, Somesh Jha, Saeed Mahloujifar and Mohammad Mahmoody |
16:00 | On Learnability with Computable Learners | Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner and Ruth Urner |
16:20 – 16:40 | Break |
16:40 – 17:40 | Privacy and stability |
16:40 | Sampling Without Compromising Accuracy in Adaptive Data Analysis | Benjamin Fish, Lev Reyzin and Benjamin Rubinstein |
17:00 | Privately Answering Classification Queries in the Agnostic PAC Model | Anupama Nandi and Raef Bassily |
17:20 | Efficient Private Algorithms for Learning Large-Margin Halfspaces | Huy Nguyen, Jonathan Ullman and Lydia Zakynthinou |