Sébastien Bubeck – Old News

My best work has been on online decision making, with solutions to several long-standing problems (minimax rates for multi-armed bandits and linear bandits at COLT 2009/COLT 2012/ALT 2018, best of both worlds for multi-armed bandits at COLT 2011, bandit convex optimization at COLT 2016/STOC 2017, progress on k-server and metrical task systems at STOC 2017/SODA 2018/STOC 2023, chasing convex bodies at STOC 2019, multiplayer multi-armed bandits at COLT 2020/COLT 2021, layered graph traversal in 2022).

I have also worked on convex optimization (entropic barrier at COLT 2015, geometric view on acceleration in 2015, optimal distributed rates at ICML 2017/NIPS 2018/NeurIPS 2019/ICML 2020) and on network analysis (influence of the seed in preferential attachment graphs, and dimension estimation in random geometric graphs, work done in 2013/2014 that appeared in Random Structures and Algorithms). Some other fun side projects included Langevin diffusion (NIPS 2015), the entropic CLT (International Mathematics Research Notices 2016), smoothed analysis of local search (STOC 2017), and finding critical points of non-convex functions in low dimensions (COLT 2020, ALT 2023).

In 2019-2020-2021 I spent a lot of time thinking about adversarial examples in ML (computational complexity of adversarial examples at ICML 2019, more efficient randomized smoothing at NeurIPS 2019, construction of compact and smooth two-layer neural networks at NeurIPS 2020, law of robustness at COLT 2021 and NeurIPS 2021).

In 2022 I started thinking more seriously about feature learning in neural networks.

Shortest path without a map, but with an entropic regularizer

In shortest path without a map, a searcher tries to find a shortest path between a source and a target while the environment is revealed sequentially. Specifically, in layered graph traversal, the underlying graph is revealed layer by layer. The optimal randomized competitive ratio for this problem had remained open since the original paper by Papadimitriou and Yannakakis in 1989. With Christian Coester and Yuval Rabani, we resolved the problem using mirror descent, with the twist that the mirror map evolves as the geometry of the space is revealed, see the paper here.
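The core primitive here, mirror descent with an entropic regularizer on the probability simplex, reduces to the classic multiplicative-weights update. A minimal sketch of that static special case (this is just the textbook primitive, not the paper's evolving-mirror-map algorithm):

```python
import numpy as np

def entropic_mirror_descent(loss_vectors, eta=0.1):
    """Mirror descent on the simplex with the entropic regularizer.

    With mirror map sum_i x_i log x_i, the update becomes the
    multiplicative-weights rule: x <- x * exp(-eta * loss), renormalized.
    """
    d = len(loss_vectors[0])
    x = np.full(d, 1.0 / d)              # start at the uniform distribution
    iterates = []
    for loss in loss_vectors:
        x = x * np.exp(-eta * np.asarray(loss, dtype=float))
        x /= x.sum()                      # Bregman projection back onto the simplex
        iterates.append(x.copy())
    return iterates

# Toy run: arm 0 consistently suffers less loss, so mass shifts toward it.
losses = [[0.1, 0.9, 0.9] for _ in range(50)]
final = entropic_mirror_descent(losses, eta=0.5)[-1]
```

The entropic regularizer is what makes the iterates stay strictly inside the simplex, which is exactly the property exploited when the mirror map has to adapt to newly revealed geometry.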

A law of robustness for neural networks

We proved with Mark Sellke that very large overparametrization is necessary for memorization of high-dimensional data with a Lipschitz neural network, see the paper here and a more popular account of the result in Quanta Magazine here.

This work is a follow-up to a conjecture we had with Yuanzhi Li and Dheeraj Nagaraj for a more specific scenario (two-layer neural network and Gaussian data). The conjecture remains open in the sense that the result with Mark assumes weights of non-exponential size, while for two-layer nets this assumption might be superfluous.
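Informally, the law of robustness says that any p-parameter model fitting n generic d-dimensional data points below the noise level must have a large Lipschitz constant; suppressing constants and logarithmic factors, the statement is roughly

```latex
\mathrm{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}}
```

so achieving an O(1) Lipschitz constant requires p on the order of nd parameters, i.e. overparametrization by a factor of the dimension d beyond the n parameters that mere interpolation demands.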

Multiplayer multi-armed bandit

The non-stochastic version of cooperative multiplayer multi-armed bandits (with collisions) turns out to be a surprisingly challenging problem. In this first paper we obtained an optimal algorithm for the model with announced collisions. The model where collisions are not announced remains wide open (from both the upper and lower bound perspectives). In the stochastic case we proved that in fact one can get optimal regret without ANY collisions (see here for the multi-armed version).
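To make the collision model concrete, here is a toy sketch of the reward rule (an illustration of the setting, not any paper's algorithm): when two players pull the same arm in a round, both receive zero reward, and in the announced-collisions model each player additionally observes whether they collided.

```python
def play_round(pulls, arm_rewards):
    """One round of multiplayer multi-armed bandits with collisions.

    pulls[j] is the arm chosen by player j; colliding players get 0.
    The returned `collided` bits model the 'announced collisions' feedback;
    in the unannounced model, players would only see their own reward.
    """
    pulls = list(pulls)
    rewards, collided = [], []
    for arm in pulls:
        hit = pulls.count(arm) > 1        # another player chose the same arm
        collided.append(hit)
        rewards.append(0.0 if hit else arm_rewards[arm])
    return rewards, collided

# Two players, three arms: both pick arm 2 and collide, so both get 0.
rewards, collided = play_round([2, 2], arm_rewards=[0.3, 0.5, 0.9])
```

The difficulty in the unannounced model is visible already here: a zero reward is indistinguishable from a genuinely zero-paying arm, so players cannot directly tell collisions apart from bad arms.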

Chasing convex bodies competitively

With an extremely fun team of co-authors (Yin Tat Lee, Yuanzhi Li, Mark Sellke) we finally managed to obtain a competitive algorithm for chasing convex bodies (after a couple of years of fruitless attempts), see also this YouTube video. We also obtained a rather complete picture of the nested version of the problem. The latter approach was then used to obtain the optimal competitive ratio in this paper by Mark Sellke (see also this paper with very similar results).

A regularization approach to k-server and metrical task systems via multiscale entropy

With a fantastic team of co-authors (Michael Cohen, James R. Lee, Yin Tat Lee, Aleksander Madry) we improved the state-of-the-art competitive ratio for k-server and metrical task systems by using the mirror descent algorithm. To learn more about it I recommend first taking a look at this [YouTube video], then these 3 blog posts ([part 1], [part 2], [part 3]), and finally [the k-server paper] itself. The [MTS paper] is finally online too, and it is a great starting point for getting into this line of work.

Polynomial-time algorithm for bandit convex optimization

From July 2014 to July 2016, with various co-authors at MSR, we dedicated a lot of energy to bandit convex optimization. The end product is a new efficient algorithm. To learn more about it I recommend first taking a look at this [YouTube video], then these 3 blog posts ([part 1], [part 2], [part 3]), and finally [the paper] itself.
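For the flavor of the setting, here is the classical one-point gradient estimator (the Flaxman–Kalai–McMahan approach, not the kernel-based method of the paper above): from a single function evaluation per round, it builds an unbiased estimate of the gradient of a smoothed version of the loss.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """Single-query gradient estimator from classical bandit convex optimization.

    Samples u uniformly on the unit sphere and returns (d/delta) * f(x + delta*u) * u,
    an unbiased estimate of the gradient of a delta-smoothed version of f at x.
    """
    d = len(x)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)            # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

# Toy check on f(x) = ||x||^2: averaged estimates should approach grad f(x) = 2x.
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
est = np.mean([one_point_gradient_estimate(lambda z: z @ z, x, 0.1, rng)
               for _ in range(20000)], axis=0)
```

The catch, and what made the problem hard, is that this estimator's variance scales poorly with the dimension, which is why getting a polynomial-time algorithm with the optimal regret dependence required fundamentally different ideas.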

Research Interests

  • machine learning

  • theoretical computer science

  • convex optimization

  • multi-armed bandits

  • online algorithms (in particular metrical task systems and k-server)

  • statistical network analysis, random graphs and random matrices

  • applications of information theory to learning, optimization, and probability