Sébastien Bubeck – Publications
Lecture Notes, Survey, PhD Thesis
M. Racz and S. Bubeck, Basic models and questions in statistical network analysis. Statist. Surv. 11 (2017), 1–47. [arxiv]
S. Bubeck, Convex Optimization: Algorithms and Complexity. In Foundations and Trends in Machine Learning, Vol. 8: No. 3–4, pp. 231–357, 2015. [pdf]
S. Bubeck, The complexities of optimization. Lecture notes, 2013. Available on my blog.
S. Bubeck and N. Cesa-Bianchi, Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. In Foundations and Trends in Machine Learning, Vol. 5: No. 1, pp. 1–122, 2012. [pdf]
S. Bubeck, Introduction to Online Optimization. Lecture Notes, 2011. [draft]
S. Bubeck, Bandits Games and Clustering Foundations. PhD dissertation, Université Lille 1, 2010 (Jacques Neveu prize 2010, runner-up for the French AI prize 2011, runner-up for the Gilles Kahn prize 2010). [pdf] [slides of the defense] [slides of my job talk] [slides for the Gilles Kahn prize]
“Preprints” (to remain as preprints for the time being)
Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone, 2024 [arxiv]
Positional Description Matters for Transformers Arithmetic, 2023 [arxiv]
Textbooks Are All You Need II: phi-1.5 technical report, 2023 [arxiv]
Textbooks Are All You Need, 2023 [arxiv]
Sparks of Artificial General Intelligence: Early experiments with GPT-4, 2023 [arxiv]
Unveiling Transformers with LEGO: a synthetic reasoning task, 2022 [arxiv]
Journal Papers
P. Lee, S. Bubeck, J. Petro, Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. New England Journal of Medicine 388 (13), 1233–1239, 2023.
S. Bubeck and M. Sellke, A Universal Law of Robustness via Isoperimetry. In JACM, 2023. [arxiv]
S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, Metrical task systems on trees via mirror descent and unfair gluing. In SICOMP, 2021. [arxiv]
S. Bubeck, Y.T. Lee, Y. Li, M. Sellke, Competitively Chasing Convex Bodies. In SICOMP. [arxiv]
S. Bubeck, R. Eldan and Y. T. Lee, Kernel-based methods for bandit convex optimization. In JACM, 2021. [arxiv]
K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal Convergence Rates for Convex Distributed Optimization in Networks. In Journal of Machine Learning Research, 2019.
S. Bubeck and R. Eldan, Exploratory distributions for convex functions. In Mathematical Statistics and Learning, 2018. [arxiv]
S. Bubeck, J. Lehec, and R. Eldan, Sampling from a log-concave distribution with Projected Langevin Monte Carlo. In Discrete & Computational Geometry, 2018. [arxiv]
S. Bubeck and R. Eldan, The entropic barrier: exponential families, log-concave geometry and self-concordance. In Mathematics of Operations Research, 2018. [arxiv]
E. Arias-Castro, S. Bubeck, G. Lugosi and N. Verzelen, Detecting Markov Random Fields Hidden in White Noise. In Bernoulli, 2017. [arxiv]
S. Bubeck and S. Ganguly, Entropic CLT and phase transition in high-dimensional Wishart matrices. In International Mathematics Research Notices, 2015. [arxiv]
S. Bubeck, J. Ding, R. Eldan and M. Racz, Testing for high-dimensional geometry in random graphs. In Random Structures and Algorithms, 2015. [draft]
L. Addario-Berry, S. Bhamidi, S. Bubeck, L. Devroye, G. Lugosi, and R. Imbuzeiro Oliveira, Exceptional rotations of random graphs: a VC theory. In Journal of Machine Learning Research (JMLR), 2015. [arxiv]
S. Bubeck, R. Eldan, E. Mossel and M. Racz, From trees to seeds: on the inference of the seed from large trees in the uniform attachment model. In Bernoulli, 2014. [draft]
S. Bubeck, L. Devroye and G. Lugosi, Finding Adam in random growing trees. In Random Structures and Algorithms, 2015. [draft]
S. Bubeck, E. Mossel and M. Racz, On the influence of the seed graph in the preferential attachment model. IEEE Transactions on Network Science and Engineering 1, 30–39, 2015. [arxiv]
E. Arias-Castro, S. Bubeck and G. Lugosi, Detecting Positive Correlations in a Multivariate Sample. Bernoulli 21, 209–241, 2015. [draft]
S. Bubeck and N. Linial, On the local profiles of trees. Journal of Graph Theory, 2015. [draft]
J.Y. Audibert, S. Bubeck and G. Lugosi, Regret in Online Combinatorial Optimization. Mathematics of Operations Research, 39, 31–45, 2014. [draft]
S. Bubeck, N. Cesa-Bianchi and G. Lugosi, Bandits with heavy tail. IEEE Transactions on Information Theory 59, 7711–7717, 2013. [pdf]
S. Bubeck, D. Ernst and A. Garivier, Optimal discovery with probabilistic expert advice: finite time analysis and macroscopic optimality. Journal of Machine Learning Research (JMLR), 14, 601–623, 2013. [pdf]
E. Arias-Castro, S. Bubeck and G. Lugosi, Detection of correlations. Annals of Statistics 40, 412–435, 2012. [pdf]
S. Bubeck, R. Munos, G. Stoltz and C. Szepesvari, X-Armed Bandits. Journal of Machine Learning Research (JMLR) 12, 1587–1627, 2011. [pdf] [slides] [Video]
J.Y. Audibert and S. Bubeck, Regret Bounds and Minimax Policies under Partial Monitoring. Journal of Machine Learning Research (JMLR) 11, 2635–2686, 2010. [pdf]
S. Bubeck, R. Munos and G. Stoltz, Pure Exploration in Finitely-Armed and Continuously-Armed Bandits. Theoretical Computer Science 412, 1832–1852, 2011. [pdf]
S. Bubeck, M. Meila and U. von Luxburg, How the Initialization Affects the Stability of the k-means Algorithm. ESAIM: Probability and Statistics 16, 436–452, 2012. [pdf]
S. Bubeck and U. von Luxburg, Nearest Neighbor Clustering: A Baseline Method for Consistent Clustering with Arbitrary Objective Functions. Journal of Machine Learning Research (JMLR) 10, 657–698, 2009. [pdf]
Conference Papers
K. Ahn, S. Bubeck, S. Chewi, Y.T. Lee, F. Suarez, Y. Zhang, Learning threshold neurons via the “edge of stability”. NeurIPS 2023 [arxiv]
A. Kumar, R. Shen, S. Bubeck, S. Gunasekar, How to Fine-Tune Vision Models with SGD. ICLR 2024 [arxiv]
S. Bubeck, C. Coester, Y. Rabani, The Randomized k-Server Conjecture is False! STOC 2023 (Best Paper Award). [arxiv]
S. Chewi, S. Bubeck, A. Salim, On the complexity of finding stationary points of smooth functions in one dimension. ALT 2023 (Best Student Paper Award). [arxiv]
S. Bubeck, C. Coester, Y. Rabani, Shortest Paths without a Map, but with an Entropic Regularizer. FOCS 2022. [arxiv]
R. Shen, S. Bubeck, S. Gunasekar, Data Augmentation as Feature Manipulation: a story of desert cows and grass cows. ICML 2022 [arxiv]
P. Bartlett, S. Bubeck, and Y. Cherapanamjeri, Adversarial Examples in Multi-Layer Random ReLU Networks. NeurIPS 2021. [arxiv]
S. Bubeck and M. Sellke, A Universal Law of Robustness via Isoperimetry. NeurIPS 2021 (Best Paper Award). [arxiv]
S. Bubeck, Y. Cherapanamjeri, G. Gidel, R. Tachet des Combes, A single gradient step finds adversarial examples on random two-layers neural networks. NeurIPS 2021. [arxiv]
S. Bubeck, T. Budzinski, and M. Sellke, Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret With Neither Communication Nor Collisions. COLT 2021. [arxiv]
S. Bubeck, Y. Li, and D. Nagaraj, A law of robustness for two-layers neural networks. COLT 2021. [arxiv]
S. Bubeck, N. Buchbinder, C. Coester, and M. Sellke, Metrical Service Systems with Transformations. ITCS 2021. [arxiv]
S. Bubeck, Y. Rabani, and M. Sellke, Online Multiserver Convex Chasing and Optimization. SODA 2021. [arxiv]
S. Bubeck, R. Eldan, Y.T. Lee, and D. Mikulincer, Network size and weights size for memorization with two-layers neural networks. NeurIPS 2020. [arxiv]
S. Bubeck, S. Chen, and J. Li, Entanglement is Necessary for Optimal Quantum Property Testing. FOCS 2020. [arxiv]
S. Bubeck and T. Budzinski, Coordination without communication: optimal regret in two players multi-armed bandits. COLT 2020. [arxiv]
S. Bubeck and D. Mikulincer, How to trap a gradient flow. COLT 2020. [arxiv]
S. Bubeck, Y. Li, Y. Peres, M. Sellke, Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without. COLT 2020. [arxiv]
A. Kolobov, S. Bubeck, and J. Zimmert, Online Learning for Active Cache Synchronization. ICML 2020. [arxiv]
H. Hendrikx, L. Xiao, S. Bubeck, F. Bach, and L. Massoulie, Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization. ICML 2020. [arxiv]
S. Bubeck and Y. Rabani, Parametrized Metrical Task Systems. Approx 2020. [arxiv]
S. Bubeck, B. Klartag, Y.T. Lee, Y. Li, M. Sellke, Chasing Nested Convex Bodies Nearly Optimally. SODA 2020. [arxiv]
S. Bubeck, Q. Jiang, Y.T. Lee, Y. Li, A. Sidford, Complexity of Highly Parallel Non-Smooth Convex Optimization. NeurIPS 2019. [arxiv]
H. Salman, G. Yang, J. Li, P. Zhang, H. Zhang, I. Razenshteyn, S. Bubeck, Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. NeurIPS 2019. [arxiv]
S. Bubeck and M. Sellke, First-Order Regret Analysis of Thompson Sampling. ALT 2019. [arxiv]
S. Bubeck, Y. Li, H. Luo, C.Y. Wei, Improved Path-length Regret Bounds for Bandits. COLT 2019. [arxiv]
S. Bubeck, Q. Jiang, Y.T. Lee, Y. Li, A. Sidford, Near-optimal method for highly smooth convex optimization. COLT 2019. [arxiv]
S. Bubeck, Y.T. Lee, Y. Li, M. Sellke, Competitively Chasing Convex Bodies. Extended abstract in STOC 2019. [arxiv]
S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, Metrical task systems on trees via mirror descent and unfair gluing. Extended abstract in SODA 2019. [arxiv]
C.J. Argue, S. Bubeck, M. B. Cohen, A. Gupta, Y.T. Lee, Nearly linear competitive ratio for chasing nested convex bodies. SODA 2019. [arxiv]
S. Bubeck, Y.T. Lee, E. Price, I. Razenshteyn, Adversarial Examples from Cryptographic Pseudo-Random Generators. (merged with the paper below) [arxiv]
S. Bubeck, E. Price, I. Razenshteyn, Adversarial examples from computational constraints. ICML 2019 [arxiv]
K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal Algorithms for Non-Smooth Distributed Optimization in Networks. NeurIPS 2018 (Best Paper Award) [arxiv]
Z. Allen-Zhu, S. Bubeck, and Y. Li, Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits. NeurIPS 2018. [arxiv]
S. Bubeck, M. B. Cohen, J. R. Lee, Y.T. Lee, A. Madry, k-server via multiscale entropic regularization. STOC 2018. [arxiv]
S. Bubeck, M. B. Cohen, Y.T. Lee, Y. Li, An homotopy method for ℓ_p regression provably beyond self-concordance and in input-sparsity time. STOC 2018. [arxiv]
S. Bubeck, M. B. Cohen, Y. Li, Sparsity, variance and curvature in multi-armed bandits. ALT 2018 (Best Student Paper Award). [arxiv]
K. Scaman, F. Bach, S. Bubeck, Y. T. Lee, L. Massoulié, Optimal algorithms for smooth and strongly convex distributed optimization in networks. ICML 2017 [arxiv]
S. Bubeck, N. R. Devanur, Z. Huang, R. Niazadeh, Online Auctions and Multiscale Online Learning. EC 2017 [arxiv]
S. Bubeck, R. Eldan and Y. T. Lee, Kernel-based methods for bandit convex optimization. Extended abstract in STOC 2017. [arxiv]
O. Angel, S. Bubeck, Y. Peres and F. Wei, Local max-cut in smoothed polynomial time. STOC 2017. [arxiv]
S. Bubeck and R. Eldan, Multiscale exploration of convex functions and bandit convex optimization. COLT 2016 (Best Paper Award). [arxiv]
S. Bubeck and Y. T. Lee, Black-box optimization with a politician. ICML 2016. [arxiv]
S. Bubeck, J. Lehec, and R. Eldan, Sampling from a log-concave distribution with Projected Langevin Monte Carlo. Extended abstract in NIPS 2015. [draft]
S. Bubeck, O. Dekel, T. Koren and Y. Peres, Bandit Convex Optimization: √T Regret in One Dimension. COLT 2015. [arxiv]
S. Bubeck and R. Eldan, The entropic barrier: a simple and optimal universal self-concordant barrier. COLT 2015. [arxiv]
C.Y. Liu and S. Bubeck, Most Correlated Arms Identification. COLT 2014. [draft]
K. Jamieson, M. Malloy, S. Bubeck and R. Nowak, lil’ UCB: An Optimal Exploration Algorithm for MultiArmed Bandits. COLT 2014. [draft]
S. Bubeck and C.Y. Liu, Prior-free and prior-dependent regret bounds for Thompson Sampling. NIPS 2013. [pdf] [Supp. material]
S. Bubeck, V. Perchet and P. Rigollet, Bounded regret in stochastic multi-armed bandits. COLT 2013. [pdf] [Video]
(Note: The proof of Theorem 8 is not correct. We do not know if the theorem holds true.)
S. Bubeck, T. Wang and N. Viswanathan, Multiple Identifications in Multi-Armed Bandits. ICML 2013. [pdf]
S. Bubeck, N. Cesa-Bianchi and S. M. Kakade, Towards Minimax Policies for Online Linear Optimization with Bandit Feedback. COLT 2012. [pdf] [slides] [Video]
S. Bubeck and A. Slivkins, The best of both worlds: stochastic and adversarial bandits. COLT 2012. [pdf]
V. Gabillon, M. Ghavamzadeh, A. Lazaric and S. Bubeck, Multi-Bandit Best Arm Identification. NIPS 2011. [pdf] [Tech. report]
S. Bubeck, G. Stoltz and J. Y. Yu, Lipschitz Bandits without the Lipschitz Constant. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT), 2011. [pdf]
J.Y. Audibert, S. Bubeck and G. Lugosi, Minimax Policies for Combinatorial Prediction Games. COLT 2011. [pdf] [slides] [Video]
J.Y. Audibert, S. Bubeck and R. Munos, Best Arm Identification in MultiArmed Bandits. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010. [pdf] [slides]
S. Bubeck and R. Munos, OpenLoop Optimistic Planning. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010. [pdf] [slides]
S. Bubeck, R. Munos and G. Stoltz, Pure Exploration in MultiArmed Bandit Problems. In Proceedings of the 20th International Conference on Algorithmic Learning Theory (ALT), 2009. [pdf] [slides]
J.Y. Audibert and S. Bubeck, Minimax Policies for Adversarial and Stochastic Bandits. In Proceedings of the 22nd Annual Conference on Learning Theory (COLT), 2009 (Best Student Paper Award). [pdf] [slides]
S. Bubeck, R. Munos, G. Stoltz and C. Szepesvari, Online Optimization in X-Armed Bandits. In Advances in Neural Information Processing Systems (NIPS) 22, 2009. [pdf] [Supp. material] [poster]
U. von Luxburg, S. Bubeck, S. Jegelka and M. Kaufmann, Consistent Minimization of Clustering Objective Functions. In Advances in Neural Information Processing Systems (NIPS) 21, 2008. [pdf] [Supp. material] [poster]
Misc.
C. White, M. Khodak, R. Tu, S. Shah, S. Bubeck, D. Dey, A Deeper Look at Zero-Cost Proxies for Lightweight NAS. ICLR 2022 Blogpost track. [blog post]
D. Dey, S. Shah, and S. Bubeck, FEAR: A Simple Lightweight Method to Rank Architectures. Draft, 2021. [arxiv]
S. Bubeck, K. Edwards, H. Mania and C. Supko, On paths, stars and wyes in trees. Draft, 2016. [arxiv]
S. Bubeck and E. Gwynne, Asymptotic behavior of the Eden model with positively homogeneous edge weights. Draft, 2015. [arxiv]
S. Bubeck, Y. T. Lee, and M. Singh, A geometric alternative to Nesterov's accelerated gradient descent. Draft, 2015. [draft]
K. Jamieson, M. Malloy, S. Bubeck and R. Nowak, On Finding the Largest Mean Among Many. Asilomar, 2013. [draft]
S. Bubeck, D. Ernst and A. Garivier, Optimal discovery with probabilistic expert advice. 51st IEEE Conference on Decision and Control (CDC), 2012. [pdf] [Extended version]
J.Y. Audibert, S. Bubeck and R. Munos, Bandit View on Noisy Optimization. In Optimization for Machine Learning, MIT press, 2011. [pdf]
