Conference Schedule

ALT 2026 will be held over four days (Monday to Thursday), with ShaiFest on the following day (Friday). Each accepted paper will be presented as a 10-minute talk, followed by 2 minutes for questions. There will also be a poster session each day.

The talks in each session are listed below the day-by-day schedule.

Monday, February 23
9:00 – 9:30    Registration and Welcome Remarks
9:30 – 10:30   Session 1
10:30 – 11:00  Coffee Break
11:00 – 12:00  Plenary Talk: Surbhi Goel
12:00 – 2:00   Lunch at Fields with Poster Session
2:00 – 3:00    Session 2
3:00 – 3:30    Coffee Break
3:30 – 4:45    Session 3
Tuesday, February 24
9:00 – 10:30   Session 4
10:30 – 11:00  Coffee Break
11:00 – 12:00  Plenary Talk: Aaron Roth
12:00 – 2:00   Lunch at Fields with Poster Session
2:00 – 3:00    Session 5
3:00 – 3:30    Coffee Break
3:30 – 4:45    Session 6
[Fields closes for the day]
5:30 – End     Conference Dinner at Old Mill Toronto
Wednesday, February 25
9:00 – 10:30   Session 7
10:30 – 11:00  Coffee Break
11:00 – 12:00  Plenary Talk: Ohad Shamir
12:00 – 2:00   Lunch at Fields with Poster Session
2:00 – 3:00    Session 8
3:00 – 3:15    Coffee Break
3:15 – 4:15    Session 9
4:15 – 4:45    Business Meeting
Thursday, February 26
9:00 – 10:30   Session 10
10:30 – 11:00  Coffee Break
11:00 – 12:00  Plenary Talk: Vitaly Feldman
12:00 – 2:00   Lunch at Fields with Poster Session
2:00 – 3:00    Session 11
3:00 – 3:30    Coffee Break
3:30 – 4:45    Session 12
This is the end of the main ALT 2026 conference. ShaiFest will follow on Friday, February 27; see the ShaiFest page for more information and a schedule.

Talk Schedule

Session 1: Monday 9:30 – 10:30

  • Regularized Robustly Reliable Learners
    Avrim Blum, Donya Saless
  • Learning with Monotone Adversarial Corruptions
    Kasper Green Larsen, Chirag Pabbaraju, Abhishek Shetty
  • Group-realizable multi-group learning by minimizing empirical risk
    Navid Ardeshir, Samuel Deng, Daniel Hsu, Jingwen Liu
  • Improved Replicable Boosting with Majority-of-Majorities
    Kasper Green Larsen, Markus Engelund Mathiasen, Clement Svendsen
  • Sample-Near-Optimal Agnostic Boosting in Fixed-Parameter Tractable Time
    Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice

Session 2: Monday 2:00 – 3:00

  • Sink equilibria and the attractors of learning in games
    Oliver Biggar, Christos H. Papadimitriou
  • Last-iterate Convergence for Symmetric, General-sum, 2 × 2 Games Under The Exponential Weights Dynamic
    Guanghui Wang, Krishna Acharya, Lokranjan Lakshmikanthan, Vidya Muthukumar, Juba Ziani
  • Strategy-robust Online Learning in Contextual Pricing
    Joon Suk Huh, Kirthevasan Kandasamy
  • A Novel Data-Dependent Learning Paradigm for Large Hypothesis Classes
    Alireza F. Pour, Shai Ben-David
  • Learning from Synthetic Data: Limitations of ERM
    Kareem Amin, Alex Bie, Weiwei Kong, Umar Syed, Sergei Vassilvitskii

Session 3: Monday 3:30 – 4:45

  • Closeness testing from distributed measurements
    Clement Louis Canonne, Aditya Vikram Singh
  • Nearly Minimax Discrete Distribution Estimation in Kullback-Leibler Divergence with High Probability
    Dirk van der Hoeven, Julia Olkhovskaya, Tim van Erven
  • On Purely Private Covariance Estimation
    Tommaso d’Orsi, Gleb Novikov
  • Differentially Private Bilevel Optimization
    Guy Kornowski
  • Privately Learning Decision Lists and a Differentially Private Winnow
    Mark Bun, William Fang
  • Improved Regret in Stochastic Decision-Theoretic Online Learning under Differential Privacy
    Ruihan Wu, Yu-Xiang Wang

Session 4: Tuesday 9:00 – 10:30

  • Phase Transition of Regret for Logistic Regression with Large Weights
    Michael Drmota, Philippe Jacquet, Changlong Wu, Wojciech Szpankowski
  • Optimal L2 Regularization in High-dimensional Continual Linear Regression
    Gilad Karpel, Edward Moroshko, Ran Levinstein, Ron Meir, Daniel Soudry, Itay Evron
  • Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
    Yuping Zheng, Andrew Lamperski
  • Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method
    Jiaming Liang
  • Accelerated Mirror Descent for Non-Euclidean Star-convex Functions
    Clement Lezane, Sophie Langer, Wouter M. Koolen
  • DS-Compatible Log-Linear Reliability with KL-Prox EM: Monotone Ascent, Identifiability, and Generalization
    Shiva Koreddi, Sravani Sowrupilli
  • Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications
    Zijian Liu

Session 5: Tuesday 2:00 – 3:00

  • Sample Complexity Bounds for Linear Constrained MDPs with a Generative Model
    Xingtu Liu, Lin F. Yang, Sharan Vaswani
  • Complexity of Vector-valued Prediction: From Linear Models to Stochastic Convex Optimization
    Matan Schliserman, Tomer Koren
  • Smoothed Online Optimization for Target Tracking: Robust and Learning-Augmented Algorithms
    Ali Zeynali, Mahsa Sahebdel, Qingsong Liu, Ramesh K. Sitaraman, Mohammad Hajiesmaili
  • Sparse Nonparametric Contextual Bandits
    Hamish Flynn, Julia Olkhovskaya, Paul Rognon-Vael
  • Ranking Items from Discrete Ratings: The Cost of Unknown User Thresholds
    Oscar Villemaud, Suryanarayana Sankagiri, Matthias Grossglauser

Session 6: Tuesday 3:30 – 4:45

  • On the Hardness of Learning Regular Expressions
    Idan Attias, Lev Reyzin, Nathan Srebro, Gal Vardi
  • Large Average Subtensor Problem: Ground-State, Algorithms, and Algorithmic Barriers
    Abhishek Hegade K. R., Eren C. Kizildag
  • The Planted Number Partitioning Problem
    Eren C. Kizildag
  • Uniform Convergence Beyond Glivenko-Cantelli
    Tanmay Devale, Pramith Devulapalli, Steve Hanneke
  • Optimal Bounds for Tyler’s M-Estimator for Elliptical Distributions
    Akshay Ramachandran, Lap Lau
  • Talagrand Meets Talagrand: Upper and Lower Bounds on Expected Soft Maxima of Gaussian Processes with Finite Index Sets
    Yifeng Chu, Maxim Raginsky

Session 7: Wednesday 9:00 – 10:30

  • Distribution-Dependent Rates for Multi-Distribution Learning
    Rafael Hanashiro, Patrick Jaillet
  • From Continual Learning to SGD and Back: Better Rates for Continual Linear Models
    Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro
  • Beyond Discrepancy: A Closer Look at the Theory of Distribution Shift
    Robi Bhattacharjee, Nicholas Rittler, Kamalika Chaudhuri
  • Efficient and Provable Algorithms for Covariate Shift
    Deeksha Adil, Jaroslaw Blasiok
  • Multi-distribution Learning: From Worst-Case Optimality to Lexicographic Min-Max Optimality
    Guanghui Wang, Umar Syed, Robert E. Schapire, Jacob Abernethy
  • PAC-Bayesian Analysis of the Surrogate Relation between Joint Embedding and Supervised Downstream Losses
    Theresa Wasserer, Maximilian Fleissner, Debarghya Ghoshdastidar
  • Bridging Lifelong and Multi-Task Representation Learning via Algorithm and Complexity Measure
    Zhi Wang, Chicheng Zhang, Ramya Korlakai Vinayak

Session 8: Wednesday 2:00 – 3:00

  • Recycling History: Efficient Recommendations from Contextual Dueling Bandits
    Suryanarayana Sankagiri, Jalal Etesami, Pouria Fatemi, Matthias Grossglauser
  • Eventually LIL Regret: Almost Sure ln ln T Regret for a sub-Gaussian Mixture on Unbounded Data
    Shubhada Agrawal, Aaditya Ramdas
  • Robust Online Learning
    Sajad Ashkezari
  • Universal Dynamic Regret and Constraint Violation Bounds for Constrained Online Convex Optimization
    Subhamon Supantha, Abhishek Sinha
  • Efficient Opportunistic Approachability
    Teodor Vanislavov Marinov, Mehryar Mohri, Princewill Okoroafor, Jon Schneider, Julian Zimmert

Session 9: Wednesday 3:15 – 4:15

  • On the Role of Transformer Feed-Forward Layers in Nonlinear In-Context Learning
    Haoyuan Sun, Ali Jadbabaie, Navid Azizan
  • Shallow Neural Networks Learn Low-Degree Spherical Polynomials with Learnable Channel Attention
    Yingzhen Yang
  • Online Markov Decision Processes with Terminal Law Constraints
    Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane
  • Online and Offline Learning of Orderly Hypergraphs Using Queries
    Shaun Fallat, Kamyar Khodamoradi, David G. Kirkpatrick, Valerii Maliuk, Seyed Ahmad Mojallal, Sandra Zilles
  • Enjoying Non-linearity in Multinomial Logistic Bandits: A Minimax-Optimal Algorithm
    Pierre Boudart, Pierre Gaillard, Alessandro Rudi

Session 10: Thursday 9:00 – 10:30

  • Graph Inference with Effective Resistance Queries
    Evelyn Warton, Huck Bennett, Mitchell Black, Amir Nayyeri
  • Compressibility Barriers to Neighborhood-Preserving Data Visualization
    Szymon Snoeck, Noah Bergam, Nakul Verma
  • Predictive inference for time series: why is split conformal effective despite temporal dependence?
    Rina Foygel Barber, Ashwin Pananjady
  • Universality of conformal prediction under the assumption of randomness
    Vladimir Vovk
  • A Martingale Kernel Two-Sample Test
    Anirban Chatterjee, Aaditya Ramdas
  • Vector-valued self-normalized concentration inequalities beyond sub-Gaussianity
    Diego Martinez-Taboada, Tomás González, Aaditya Ramdas
  • No Scale Sensitive Dimension for Distribution Learning
    Tosca Lechner, Shai Ben-David

Session 11: Thursday 2:00 – 3:00

  • Reusing Samples in Variance Reduction
    Yujia Jin, Ishani Karmarkar, Aaron Sidford, Jiayi Wang
  • Convex optimization with p-norm oracles
    Deeksha Adil, Brian Bullins, Arun Jambulapati, Aaron Sidford
  • How to Set β1, β2 in Adam: An Online Learning Perspective
    Quan M. Nguyen
  • Suspicious Alignment of SGD: A Fine-Grained Step Size Condition Analysis
    Shenyang Deng, Boyao Liao, Zhuoli Ouyang, Tianyu Pang, Minhak Song, Yaoqing Yang
  • Designing Algorithms for Entropic Optimal Transport from an Optimisation Perspective
    Vishwak Srinivasan, Qijia Jiang

Session 12: Thursday 3:30 – 4:45

  • Pareto-optimal Non-uniform Language Generation
    Moses Charikar, Chirag Pabbaraju
  • On Characterizations for Language Generation: Interplay of Hallucinations, Breadth, and Stability
    Alkis Kalavasis, Anay Mehrotra, Grigoris Velegkas
  • Online Covering with Multiple Experts
    Kim Thang Nguyen
  • Discriminative Feature Feedback with General Teacher Classes
    Omri Bar Oz, Tosca Lechner, Sivan Sabato
  • Relative Information Gain and Gaussian Process Regression
    Hamish Flynn
  • Reward Selection with Noisy Observations
    Kamyar Azizzadenesheli, Trung Dang, Aranyak Mehta, Alexandros Psomas, Qian Zhang