ALT 2025 will be held over four days. Each accepted paper will be presented as a 12-minute talk, followed by two minutes for questions. There will also be a poster session at the end of each day, where the authors of that day's papers will additionally present a poster (roughly A0 size or smaller). Consider setting up your poster early in the day, so that discussion can happen informally during the breaks.
Conference talks will be held in the Rogers room (Aula Rogers) of Building 11 (Architettura). All catering and poster sessions will be held in the Vetrata room (Aula Vetrata) of Building 13 (Trifoglio).
There are plans to hold a banquet on one of the evenings (details TBD).
Monday, 24 February | |
---|---|
9:00 – 10:15 | Opening remarks; Session 1 |
10:15 – 10:45 | Coffee break |
10:45 – 11:45 | Plenary talk: Boaz Barak, "AI safety via inference-time compute" |
11:45 – 13:15 | Lunch break |
13:15 – 14:15 | Session 2 |
14:15 – 14:45 | Coffee break |
14:45 – 15:45 | Session 3 |
15:45 – 16:45 | Poster Session |
16:45 onwards | Aperitivo (reception) |
Tuesday, 25 February | |
---|---|
9:00 – 10:15 | Session 4 |
10:15 – 10:45 | Coffee break |
10:45 – 11:45 | Plenary talk: Massimiliano Pontil, "Linear Operators Learning for Dynamical Systems" |
11:45 – 13:15 | Lunch break |
12:45 – 13:15 | Business Meeting |
13:15 – 14:30 | Session 5 |
14:30 – 15:00 | Coffee break |
15:00 – 16:00 | Session 6 |
16:00 – 17:00 | Poster Session |
Wednesday, 26 February | |
---|---|
9:00 – 10:15 | Session 7 |
10:15 – 10:45 | Coffee break |
10:45 – 11:45 | Plenary talk: Nikita Zhivotovskiy, "From Estimation to Prediction: What Assumptions Do We Need?" |
11:45 – 13:15 | Lunch break |
13:15 – 14:15 | Session 8 |
14:15 – 14:45 | Coffee break |
14:45 – 15:15 | Session 9 |
15:15 – 16:00 | Interview with TBD |
16:00 – 17:00 | Poster Session |
Thursday, 27 February | |
---|---|
9:00 – 10:15 | Session 10 |
10:15 – 10:45 | Coffee break |
10:45 – 11:45 | Plenary talk: Claire Vernade, "RL beyond expectations: Planning for utility functions" |
11:45 – 13:15 | Lunch break |
13:15 – 14:30 | Session 11 |
14:30 – 15:00 | Coffee break |
15:00 – 16:00 | Session 12 |
16:00 – 17:00 | Poster Session |
Monday, 24 February
Session 1
Efficient Optimal PAC Learning
Do PAC-Learners Learn the Marginal Distribution?
Is Transductive Learning Equivalent to PAC Learning?
Sample Compression Scheme Reductions
Session 2
Quantile Multi-Armed Bandits with 1-bit Feedback
Logarithmic Regret for Unconstrained Submodular Maximization Stochastic Bandit
Clustering with bandit feedback: breaking down the computation/information gap
A Complete Characterization of Learnability for Stochastic Noisy Bandits
Session 3
Boosting, Voting Classifiers and Randomized Sample Compression Schemes
Understanding Aggregations of Proper Learners in Multiclass Classification
Minimax Adaptive Boosting for Online Nonparametric Regression
Sharp bounds on aggregate expert error
Tuesday, 25 February
Session 4
Cost-Free Fairness in Online Correlation Clustering
Optimal Rates for O(1)-Smooth DP-SCO with a Single Epoch and Large Batches
Differentially Private Multi-Sampling from Distributions
Agnostic Private Density Estimation for GMMs via List Global Stability
Computationally efficient reductions between some statistical models
Session 5
On the Hardness of Learning One Hidden Layer Neural Networks
On Generalization Bounds for Neural Networks with Low Rank Layers
Sample Complexity of Recovering Low Rank Tensors from Symmetric Rank-One Measurements
High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm
Fast Convergence of $\Phi$-Divergence Along the Unadjusted Langevin Algorithm and Proximal Sampler
Session 6
When and why randomised exploration works (in linear bandits)
For Universal Multiclass Online Learning, Bandit Feedback and Full Supervision are Equivalent
Nearly-tight Approximation Guarantees for the Improving Multi-Armed Bandits Problem
Non-stochastic Bandits With Evolving Observations
Wednesday, 26 February
Session 7
Online Learning of Quantum States with Logarithmic Loss via VB-FTRL
A Unified Theory of Supervised Online Learnability
Full Swap Regret and Discretized Calibration
Data Dependent Regret Bounds for Online Portfolio Selection with Predicted Returns
Center-Based Approximation of a Drifting Distribution
Session 8
Efficient PAC Learning of Halfspaces with Constant Malicious Noise Rate
Noisy Computing of the Threshold Function
How rotation invariant algorithms are fooled by noise on sparse targets
A Model for Combinatorial Dictionary Learning and Inference
Session 9
Strategyproof Learning with Advice
An Online Feasible Point Method for Benign Generalized Nash Equilibrium Problems
Thursday, 27 February
Session 10
A PAC-Bayesian Link Between Generalisation and Flat Minima
The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization
Generalization bounds for mixing processes via delayed online-to-PAC conversions
Enhanced $H$-Consistency Bounds
Generalisation under gradient descent via deterministic PAC-Bayes
Session 11
Reliable Active Apprenticeship Learning
The Plugin Approach for Average-Reward and Discounted MDPs: Optimal Sample Complexity Analysis
Optimal and learned algorithms for the online list update problem with Zipfian accesses
Self-Directed Node Classification on Graphs
Error dynamics of mini-batch gradient descent with random reshuffling for least squares regression
Session 12
Effective Littlestone dimension
Proper Learnability and the Role of Unlabeled Data
A Characterization of List Regression
Refining the Sample Complexity of Comparative Learning