Sébastien Bubeck – Publications

Lecture Notes, Surveys, PhD Thesis

  • M. Racz and S. Bubeck, Basic models and questions in statistical network analysis. Statistics Surveys 11, 1-47, 2017. [arxiv]

  • S. Bubeck, Convex Optimization: Algorithms and Complexity. In Foundations and Trends in Machine Learning, Vol. 8, No. 3-4, pp. 231-357, 2015. [pdf]

  • S. Bubeck, The complexities of optimization. Lecture notes, 2013. Available on my blog.

  • S. Bubeck and N. Cesa-Bianchi, Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. In Foundations and Trends in Machine Learning, Vol. 5, No. 1, pp. 1-122, 2012. [pdf]

  • S. Bubeck, Introduction to Online Optimization. Lecture Notes, 2011. [draft]

  • S. Bubeck, Bandits Games and Clustering Foundations. PhD dissertation, Université Lille 1, 2010 (Jacques Neveu prize 2010, runner-up for the French AI prize 2011, runner-up for the Gilles Kahn prize 2010). [pdf] [slides of the defense] [slides of my job talk] [slides for the Gilles Kahn prize]

Preprints

  • K. Ahn, S. Bubeck, S. Chewi, Y.T. Lee, F. Suarez, Y. Zhang, Learning threshold neurons via the “edge of stability”. [arxiv]

  • A. Kumar, R. Shen, S. Bubeck, S. Gunasekar, How to Fine-Tune Vision Models with SGD. [arxiv]

  • Y. Zhang, A. Backurs, S. Bubeck, R. Eldan, S. Gunasekar, T. Wagner, Unveiling Transformers with LEGO: a synthetic reasoning task. [arxiv]

Journal Papers

  • S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, Metrical task systems on trees via mirror descent and unfair gluing. In SICOMP, 2021. [arxiv]

  • S. Bubeck, Y.T. Lee, Y. Li, M. Sellke, Competitively Chasing Convex Bodies. In SICOMP. [arxiv]

  • S. Bubeck, R. Eldan and Y. T. Lee, Kernel-based methods for bandit convex optimization. In JACM, 2021. [arxiv]

  • K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal Convergence Rates for Convex Distributed Optimization in Networks. In Journal of Machine Learning Research, 2019.

  • S. Bubeck and R. Eldan, Exploratory distributions for convex functions. In Mathematical Statistics and Learning, 2018. [arxiv]

  • S. Bubeck, J. Lehec, and R. Eldan, Sampling from a log-concave distribution with Projected Langevin Monte Carlo. In Discrete & Computational Geometry, 2018. [arxiv]

  • S. Bubeck and R. Eldan, The entropic barrier: exponential families, log-concave geometry and self-concordance. In Mathematics of Operations Research, 2018. [arxiv]

  • E. Arias-Castro, S. Bubeck, G. Lugosi and N. Verzelen, Detecting Markov Random Fields Hidden in White Noise. In Bernoulli, 2017. [arxiv]

  • S. Bubeck and S. Ganguly, Entropic CLT and phase transition in high-dimensional Wishart matrices. In International Mathematics Research Notices, 2015. [arxiv]

  • S. Bubeck, J. Ding, R. Eldan and M. Racz, Testing for high-dimensional geometry in random graphs. In Random Structures and Algorithms, 2015. [draft]

  • L. Addario-Berry, S. Bhamidi, S. Bubeck, L. Devroye, G. Lugosi, and R. Imbuzeiro Oliveira, Exceptional rotations of random graphs: a VC theory. In Journal of Machine Learning Research (JMLR), 2015. [arxiv]

  • S. Bubeck, R. Eldan, E. Mossel and M. Racz, From trees to seeds: on the inference of the seed from large trees in the uniform attachment model. In Bernoulli, 2014. [draft]

  • S. Bubeck, L. Devroye and G. Lugosi, Finding Adam in random growing trees. In Random Structures and Algorithms, 2015. [draft]

  • S. Bubeck, E. Mossel and M. Racz, On the influence of the seed graph in the preferential attachment model. IEEE Transactions on Network Science and Engineering 1, 30-39, 2015. [arxiv]

  • E. Arias-Castro, S. Bubeck and G. Lugosi, Detecting Positive Correlations in a Multivariate Sample. Bernoulli 21, 209-241, 2015. [draft]

  • S. Bubeck and N. Linial, On the local profiles of trees. Journal of Graph Theory, 2015. [draft]

  • J.Y. Audibert, S. Bubeck and G. Lugosi, Regret in Online Combinatorial Optimization. Mathematics of Operations Research, 39, 31-45, 2014. [draft]

  • S. Bubeck, N. Cesa-Bianchi and G. Lugosi, Bandits with heavy tail. IEEE Transactions on Information Theory 59, 7711-7717, 2013. [pdf]

  • S. Bubeck, D. Ernst and A. Garivier, Optimal discovery with probabilistic expert advice: finite time analysis and macroscopic optimality. Journal of Machine Learning Research (JMLR), 14, 601-623, 2013. [pdf]

  • E. Arias-Castro, S. Bubeck and G. Lugosi, Detection of correlations. Annals of Statistics 40, 412-435, 2012. [pdf]

  • S. Bubeck, R. Munos, G. Stoltz and C. Szepesvari, X-Armed Bandits. Journal of Machine Learning Research (JMLR) 12, 1587-1627, 2011. [pdf] [slides] [Video]

  • J.Y. Audibert and S. Bubeck, Regret Bounds and Minimax Policies under Partial Monitoring. Journal of Machine Learning Research (JMLR) 11, 2635-2686, 2010. [pdf]

  • S. Bubeck, R. Munos and G. Stoltz, Pure Exploration in Finitely-Armed and Continuously-Armed Bandits. Theoretical Computer Science 412, 1832-1852, 2011. [pdf]

  • S. Bubeck, M. Meila and U. von Luxburg, How the Initialization Affects the Stability of the k-means Algorithm. ESAIM: Probability and Statistics 16, 436-452, 2012. [pdf]

  • S. Bubeck and U. von Luxburg, Nearest Neighbor Clustering: A Baseline Method for Consistent Clustering with Arbitrary Objective Functions. Journal of Machine Learning Research (JMLR) 10, 657-698, 2009. [pdf]

Conference Papers

  • S. Bubeck, C. Coester, Y. Rabani, The Randomized k-Server Conjecture is False! STOC 2023 (Best Paper Award). [arxiv]

  • S. Chewi, S. Bubeck, A. Salim, On the complexity of finding stationary points of smooth functions in one dimension. ALT 2023 (Best Student Paper Award). [arxiv]

  • S. Bubeck, C. Coester, Y. Rabani, Shortest Paths without a Map, but with an Entropic Regularizer. FOCS 2022. [arxiv]

  • R. Shen, S. Bubeck, S. Gunasekar, Data Augmentation as Feature Manipulation: a story of desert cows and grass cows. ICML 2022. [arxiv]

  • P. Bartlett, S. Bubeck, and Y. Cherapanamjeri, Adversarial Examples in Multi-Layer Random ReLU Networks. NeurIPS 2021. [arxiv]

  • S. Bubeck and M. Sellke, A Universal Law of Robustness via Isoperimetry. NeurIPS 2021 (Best Paper Award). [arxiv]

  • S. Bubeck, Y. Cherapanamjeri, G. Gidel, R. Tachet des Combes, A single gradient step finds adversarial examples on random two-layers neural networks. NeurIPS 2021. [arxiv]

  • S. Bubeck, T. Budzinski, and M. Sellke, Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret With Neither Communication Nor Collisions. COLT 2021. [arxiv]

  • S. Bubeck, Y. Li, and D. Nagaraj, A law of robustness for two-layers neural networks. COLT 2021. [arxiv]

  • S. Bubeck, N. Buchbinder, C. Coester, and M. Sellke, Metrical Service Systems with Transformations. ITCS 2021. [arxiv]

  • S. Bubeck, Y. Rabani, and M. Sellke, Online Multiserver Convex Chasing and Optimization. SODA 2021. [arxiv]

  • S. Bubeck, R. Eldan, Y.T. Lee, and D. Mikulincer, Network size and weights size for memorization with two-layers neural networks. NeurIPS 2020. [arxiv]

  • S. Bubeck, S. Chen, and J. Li, Entanglement is Necessary for Optimal Quantum Property Testing. FOCS 2020. [arxiv]

  • S. Bubeck and T. Budzinski, Coordination without communication: optimal regret in two players multi-armed bandits. COLT 2020. [arxiv]

  • S. Bubeck and D. Mikulincer, How to trap a gradient flow. COLT 2020. [arxiv]

  • S. Bubeck, Y. Li, Y. Peres, M. Sellke, Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without. COLT 2020. [arxiv]

  • A. Kolobov, S. Bubeck, and J. Zimmert, Online Learning for Active Cache Synchronization. ICML 2020. [arxiv]

  • H. Hendrikx, L. Xiao, S. Bubeck, F. Bach, and L. Massoulie, Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization. ICML 2020. [arxiv]

  • S. Bubeck and Y. Rabani, Parametrized Metrical Task Systems. APPROX 2020. [arxiv]

  • S. Bubeck, B. Klartag, Y.T. Lee, Y. Li, M. Sellke, Chasing Nested Convex Bodies Nearly Optimally. SODA 2020. [arxiv]

  • S. Bubeck, Q. Jiang, Y.T. Lee, Y. Li, A. Sidford, Complexity of Highly Parallel Non-Smooth Convex Optimization. NeurIPS 2019. [arxiv]

  • H. Salman, G. Yang, J. Li, P. Zhang, H. Zhang, I. Razenshteyn, S. Bubeck, Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. NeurIPS 2019. [arxiv]

  • S. Bubeck and M. Sellke, First-Order Regret Analysis of Thompson Sampling. ALT 2019. [arxiv]

  • S. Bubeck, Y. Li, H. Luo, C.-Y. Wei, Improved Path-length Regret Bounds for Bandits. COLT 2019. [arxiv]

  • S. Bubeck, Q. Jiang, Y.T. Lee, Y. Li, A. Sidford, Near-optimal method for highly smooth convex optimization. COLT 2019. [arxiv]

  • S. Bubeck, Y.T. Lee, Y. Li, M. Sellke, Competitively Chasing Convex Bodies. Extended abstract in STOC 2019. [arxiv]

  • S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, Metrical task systems on trees via mirror descent and unfair gluing. Extended abstract in SODA 2019. [arxiv]

  • C.J. Argue, S. Bubeck, M. B. Cohen, A. Gupta, Y.T. Lee, Nearly linear competitive ratio for chasing nested convex bodies. SODA 2019. [arxiv]

  • S. Bubeck, Y.T. Lee, E. Price, I. Razenshteyn, Adversarial Examples from Cryptographic Pseudo-Random Generators. (merged with paper below) [arxiv]

  • S. Bubeck, E. Price, I. Razenshteyn, Adversarial examples from computational constraints. ICML 2019. [arxiv]

  • K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal Algorithms for Non-Smooth Distributed Optimization in Networks. NeurIPS 2018 (Best Paper Award). [arxiv]

  • Z. Allen-Zhu, S. Bubeck, and Y. Li, Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits. NeurIPS 2018. [arxiv]

  • S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, A. Madry, k-server via multiscale entropic regularization. STOC 2018. [arxiv]

  • S. Bubeck, M. B. Cohen, Y.T. Lee, Y. Li, An homotopy method for ℓ_p regression provably beyond self-concordance and in input-sparsity time. STOC 2018. [arxiv]

  • S. Bubeck, M. B. Cohen, Y. Li, Sparsity, variance and curvature in multi-armed bandits. ALT 2018 (Best Student Paper Award). [arxiv]

  • K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal algorithms for smooth and strongly convex distributed optimization in networks. ICML 2017. [arxiv]

  • S. Bubeck, N. R. Devanur, Z. Huang, R. Niazadeh, Online Auctions and Multi-scale Online Learning. EC 2017. [arxiv]

  • S. Bubeck, R. Eldan and Y. T. Lee, Kernel-based methods for bandit convex optimization. Extended abstract in STOC 2017. [arxiv]

  • O. Angel, S. Bubeck, Y. Peres and F. Wei, Local max-cut in smoothed polynomial time. STOC 2017. [arxiv]

  • S. Bubeck and R. Eldan, Multi-scale exploration of convex functions and bandit convex optimization. COLT 2016 (Best Paper Award). [arxiv]

  • S. Bubeck and Y. T. Lee, Black-box optimization with a politician. ICML 2016. [arxiv]

  • S. Bubeck, J. Lehec, and R. Eldan, Sampling from a log-concave distribution with Projected Langevin Monte Carlo. Extended abstract in NIPS 2015. [draft]

  • S. Bubeck, O. Dekel, T. Koren and Y. Peres, Bandit Convex Optimization: √T Regret in One Dimension. COLT 2015. [arxiv]

  • S. Bubeck and R. Eldan, The entropic barrier: a simple and optimal universal self-concordant barrier. COLT 2015. [arxiv]

  • C.Y. Liu and S. Bubeck, Most Correlated Arms Identification. COLT 2014. [draft]

  • K. Jamieson, M. Malloy, S. Bubeck and R. Nowak, lil’ UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits. COLT 2014. [draft]

  • S. Bubeck and C.Y. Liu, Prior-free and prior-dependent regret bounds for Thompson Sampling. NIPS 2013. [pdf] [Supp. material]

  • S. Bubeck, V. Perchet and P. Rigollet, Bounded regret in stochastic multi-armed bandits. COLT 2013. [pdf] [Video] (Note: The proof of Theorem 8 is not correct. We do not know if the theorem holds true.)

  • S. Bubeck, T. Wang and N. Viswanathan, Multiple Identifications in Multi-Armed Bandits. ICML 2013. [pdf]

  • S. Bubeck, N. Cesa-Bianchi and S. M. Kakade, Towards Minimax Policies for Online Linear Optimization with Bandit Feedback. COLT 2012. [pdf] [slides] [Video]

  • S. Bubeck and A. Slivkins, The best of both worlds: stochastic and adversarial bandits. COLT 2012. [pdf]

  • V. Gabillon, M. Ghavamzadeh, A. Lazaric and S. Bubeck, Multi-Bandit Best Arm Identification. NIPS 2011. [pdf] [Tech. report]

  • S. Bubeck, G. Stoltz and J. Y. Yu, Lipschitz Bandits without the Lipschitz Constant. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT), 2011. [pdf]

  • J.Y. Audibert, S. Bubeck and G. Lugosi, Minimax Policies for Combinatorial Prediction Games. COLT 2011. [pdf] [slides] [Video]

  • J.Y. Audibert, S. Bubeck and R. Munos, Best Arm Identification in Multi-Armed Bandits. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010. [pdf] [slides]

  • S. Bubeck and R. Munos, Open-Loop Optimistic Planning. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010. [pdf] [slides]

  • S. Bubeck, R. Munos and G. Stoltz, Pure Exploration in Multi-Armed Bandit Problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory (ALT), 2009. [pdf] [slides]

  • J.Y. Audibert and S. Bubeck, Minimax Policies for Adversarial and Stochastic Bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009 (Best Student Paper Award). [pdf] [slides]

  • S. Bubeck, R. Munos, G. Stoltz and C. Szepesvari, Online Optimization in X-Armed Bandits. In Advances in Neural Information Processing Systems (NIPS) 22, 2009. [pdf] [Supp. material] [poster]

  • U. von Luxburg, S. Bubeck, S. Jegelka and M. Kaufmann, Consistent Minimization of Clustering Objective Functions. In Advances in Neural Information Processing Systems (NIPS) 21, 2008. [pdf] [Supp. material] [poster]

Misc.

  • C. White, M. Khodak, R. Tu, S. Shah, S. Bubeck, D. Dey, A Deeper Look at Zero-Cost Proxies for Lightweight NAS. ICLR 2022 Blogpost track. [blog post]

  • D. Dey, S. Shah, and S. Bubeck, FEAR: A Simple Lightweight Method to Rank Architectures. In submission, 2021. [arxiv]

  • S. Bubeck, K. Edwards, H. Mania and C. Supko, On paths, stars and wyes in trees. Draft, 2016. [arxiv]

  • S. Bubeck and E. Gwynne, Asymptotic behavior of the Eden model with positively homogeneous edge weights. Draft, 2015. [arxiv]

  • S. Bubeck, Y. T. Lee, and M. Singh, A geometric alternative to Nesterov's accelerated gradient descent. Draft, 2015. [draft]

  • K. Jamieson, M. Malloy, S. Bubeck and R. Nowak, On Finding the Largest Mean Among Many. Asilomar, 2013. [draft]

  • S. Bubeck, D. Ernst and A. Garivier, Optimal discovery with probabilistic expert advice. 51st IEEE Conference on Decision and Control (CDC), 2012. [pdf] [Extended version]

  • J.Y. Audibert, S. Bubeck and R. Munos, Bandit View on Noisy Optimization. In Optimization for Machine Learning, MIT press, 2011. [pdf]