Conference Schedule

ALT 2024 will be held over four days. Each accepted paper will be presented as a 12-minute talk during area-based sessions. Additionally, this year all authors have the option to bring and present a poster during a poster session after lunch. Posters corresponding to papers presented during the day should be put up either before the beginning of the first session or during the first coffee break. Each day will thus feature 9–13 posters, and discussions around each presented work can continue informally.

Sunday, 25 February
9:00 – 10:00    Opening remarks; Online Learning 1
10:00 – 10:45   Coffee break
10:45 – 11:15   Unsupervised and Semi-supervised Learning
11:35 – 12:30   Invited talk: Stefanie Jegelka, "Benefits of learning with symmetries: eigenvectors, graph representations and sample complexity"
12:30 – 13:30   Lunch break
13:30 – 14:15   Posters
14:15 – 15:15   Optimization
15:15 – 16:00   Coffee break
16:00 – 16:45   Reinforcement Learning

Monday, 26 February
9:00 – 10:00    Neural Networks
10:00 – 10:45   Coffee break
10:45 – 11:15   Privacy 1
11:35 – 12:30   Invited talk: Gregory Valiant, "Memory and Energy: Two Bottlenecks for Learning"
12:30 – 13:30   Lunch break
13:30 – 14:15   Posters
14:15 – 15:00   Games and Bandits
15:00 – 15:45   Coffee break
15:45 – 16:45   Generalization Bounds
16:45 – 17:15   Impromptu talks

The second part of the conference will have slightly shorter lunch breaks to allow for a business meeting slot as well as some free hiking time.

Tuesday, 27 February
9:00 – 10:00    Supervised Learning
10:00 – 10:45   Coffee break
10:45 – 11:30   Online Learning 2
11:35 – 12:30   Invited talk: Fan Chung Graham, "Clustering in graphs with high clustering coefficients"
12:30 – 13:15   Lunch break
13:15 – 14:00   Posters
14:00 – 14:30   Business meeting
14:30 – 15:15   Privacy 2
15:15 –         Free afternoon, hike (sunset at 17:44)
18:00           Reception

Wednesday, 28 February
9:15 – 10:00    Query Learning
10:00 – 10:45   Coffee break
10:45 – 11:30   Bandit Problems
11:35 – 12:30   Invited talk: Gergely Neu, "Online-to-PAC Conversions: Generalization Bounds via Regret Analysis"
12:30 – 13:15   Lunch break
13:15 – 14:00   Posters
14:00 – 15:00   Learnability
15:00           Closing remarks

Sunday, 25 February

Online Learning 1
Improving Adaptive Online Learning Using Refined Discretization
Online Infinite-Dimensional Regression: Learning Linear Operators
The Dimension of Self-Directed Learning

Unsupervised and Semi-supervised Learning
Concentration of empirical barycenters in metric spaces
Distances for Markov Chains, and Their Differentiation

Optimization
Dueling Optimization with a Monotone Adversary
RedEx: Beyond Fixed Representation Methods via Convex Optimization
Adaptive Combinatorial Maximization: Beyond Approximate Greedy Policies
Alternating minimization for generalized rank one matrix sensing: Sharp predictions from a random initialization

Reinforcement Learning
The complexity of non-stationary reinforcement learning
Near-continuous time Reinforcement Learning for continuous state-action spaces
Slowly Changing Adversarial Bandit Algorithms are Efficient for Discounted MDPs

Monday, 26 February

Neural Networks
Universal Representation of Permutation-Invariant Functions on Vectors and Tensors
A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks
Computation with Sequences of Assemblies in a Model of the Brain
Provable Accelerated Convergence of Nesterov’s Momentum for Deep ReLU Neural Networks

Privacy 1
Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates
Not All Learnable Distributions are Privately Learnable

Games and Bandits
The Attractor of the Replicator Dynamic in Zero-Sum Games
CRIMED: Lower and Upper Bounds on Regret for Bandits with Unbounded Stochastic Corruption
Adversarial Contextual Bandits Go Kernelized

Generalization Bounds
Tight bounds for maximum $\ell_1$-margin classifiers
On the Sample Complexity of Two-Layer Networks: Lipschitz Vs. Element-Wise Lipschitz Activation
Efficient Agnostic Learning with Average Smoothness
Tight Bounds for Local Glivenko-Cantelli

Tuesday, 27 February

Supervised Learning
Semi-supervised Group DRO: Combating Sparsity with Unlabeled Data
Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms
On the Computational Benefit of Multimodal Learning
Partially Interpretable Models with Guarantees on Coverage and Accuracy

Online Learning 2
Adversarial Online Collaborative Filtering
Corruption-Robust Lipschitz Contextual Search
Multiclass Online Learnability under Bandit Feedback

Privacy 2
Private PAC Learning May be Harder than Online Learning
A Polynomial Time, Pure Differentially Private Estimator for Binary Product Distributions
Mixtures of Gaussians are Privately Learnable with a Polynomial Number of Samples

Wednesday, 28 February

Query Learning
Learning Spanning Forests Optimally in Weighted Undirected Graphs with CUT queries
Agnostic Membership Query Learning with Nontrivial Savings: New Results and Techniques
Learning Hypertrees From Shortest Path Queries

Bandit Problems
Importance-Weighted Offline Learning Done Right
Optimal Regret Bounds for Collaborative Learning in Bandits
Online Recommendations for Agents with Discounted Adaptive Preferences

Learnability
Multiclass Learnability Does Not Imply Sample Compression
The Impossibility of Parallelizing Boosting
Learning bounded-degree polytrees with known skeleton