Bâtiment Alan Turing, École Polytechnique, Palaiseau.
I am an assistant professor in Statistics at École Polytechnique, Palaiseau, in the applied mathematics department.
My main research interests are statistics, optimization, stochastic approximation, Federated Learning, high-dimensional learning, non-parametric statistics, and scalable kernel methods.
Before that, I was a postdoctoral researcher at EPFL (École Polytechnique Fédérale de Lausanne), in the MLO team led by Martin Jaggi. Earlier, I was a Ph.D. student in the Sierra team, which is part of the DI/ENS (Computer Science Department of École Normale Supérieure), supervised by Francis Bach. I graduated from École Normale Supérieure de Paris (Ulm) in 2014 and hold a master's degree in Mathematics, Probability and Statistics from Université Paris-Sud, Orsay.
From March to August 2016, I was a visiting researcher at the University of California, Berkeley, under the supervision of Martin Wainwright.
11/2021: Two new preprints are available, respectively on compression based on Gaussian random codebooks with applications to FL, and on utility-privacy trade-offs in heterogeneous FL frameworks. See below for links and details!
Two papers have been accepted at NeurIPS 2021: Federated Expectation Maximization with heterogeneity mitigation and variance reduction, and Preserved central model for faster bidirectional compression in distributed settings. See below for links and details!
29/09/2021: FLOW - Federated Learning One World Seminar. Presentation on Preserved central model for faster bidirectional compression in distributed settings. Slides.
09/2021: Join us on September 16 at the Federated Learning Workshop, a full-day hybrid event taking place both online and in Paris. A great panel of speakers from academia and industry will forecast the most promising directions for future research on federated learning and the development of new benchmarks and application challenges.
06/2021: A couple of new papers are available on arXiv, notably on "super acceleration" (faster than momentum acceleration, under assumptions on the Hessian matrix) and on Quantized Langevin Dynamics.
04/2021: Maxence Noble is starting his research internship! Maxence will be working on utility and privacy tradeoffs in Federated Learning, and is co-supervised by Aurélien Bellet.
04/2021: Alexis Ayme is starting his research internship! Alexis will be working on learning with missing data, and is co-supervised by Claire Boyer and Erwan Scornet.
12/2020: Margaux Zaffran is starting her PhD! Margaux will be working on electricity price prediction with EDF. She is co-supervised by Julie Josse (Inria), Yannig Goude, and Olivier Féron.
10/2020: Baptiste Goujaud is starting his PhD! Baptiste will be working on first-order optimization, supervised by Éric Moulines and me. Baptiste already worked on optimization during his time at MILA, in particular on the tuning and convergence of first-order optimization algorithms.
10/2020: Our team is still looking for postdocs and research engineers, with very competitive conditions! If you have a PhD in statistics, optimization, or machine learning and are interested in joining a great team in Paris, send me an email.
09/2020: Our paper "Debiasing Stochastic Gradient Descent to handle missing values" has been accepted at NeurIPS 2020. This is joint work with Aude Sportisse, Claire Boyer, and Julie Josse. See the arXiv version.
06/2020: Our paper "On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent" was accepted at ICML 2020. This is joint work with Scott Pesme and Nicolas Flammarion at EPFL. See the arXiv version.
04/2020: The applied mathematics department at École Polytechnique has open positions for tenure-track professors, one in Statistics and one in Statistics and Energy. These positions offer competitive conditions.
26/01/2020: The third edition of "Advances in Machine Learning, theory meets practice" at the Applied Machine Learning Days (AMLD) in Lausanne, which we co-organized with Sebastian Stich, was a nice occasion to bring theoreticians and practitioners together. Thanks to the speakers for their great talks! See the workshop page for slides and details.
01/2020: Our paper on using optimal transport for NLP has been accepted at AISTATS 2020!
12/2019: Our paper on unsupervised time series representation has successfully passed the NeurIPS reproducibility challenge! About 70 papers published at NeurIPS 2019 were selected by independent researchers, who attempted to reproduce the results, assessed whether the description of the framework was complete, and provided feedback. You can find the discussion of our paper here and the full 12-page report on our work here. F. Liljefors, M. M. Sorkhei, and S. Broomé were able to reimplement our methods from the description given in the paper and to obtain the same results as in our original paper. We would like to thank them for their work!
12/2019: I will be presenting two papers at NeurIPS 2019 in Vancouver: the first on distributed optimization, more specifically Local SGD, with K. K. Patel, and the second on a new method to generate representations of time series in an unsupervised fashion, with J.-Y. Franceschi and M. Jaggi. Links to the papers below!
12/2019: Constantin Philippenko is starting his PhD! Constantin will be working on Federated Learning, especially on problems arising from privacy concerns. He will be supervised by Éric Moulines and me, and will also be working with Richard Vidal and Laetitia Kameni from the research team at Accenture. Welcome to my first PhD student! :)
Publications and Preprints
Reviews and Committees
I was a member of the jury for Belhal Karimi's PhD defense on September 19, 2019.
I was a referee for Luigi Carratino's PhD dissertation, which was defended in early spring 2020.
I am part of the scientific committee for the seminar le Palaisien.
September 2021, FLOW - Federated Learning One World Seminar. Presentation on Preserved central model for faster bidirectional compression in distributed settings. Slides.
March 2021, Bi-directional compression for Federated Learning: Artemis & MCM. Seminar at Télécom Paris. Slides here.
March 2020, On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent, Parisian Seminar of Optimization.
January 2020, On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent, Inria Paris.
October 2019, Large Scale Learning and Optimization, Tbilisi, Georgia. 6-hour lecture at ASML, slides here!
April 2019, Journées Calcul et Apprentissage, Lyon
December 2018, Communication trade-offs for synchronized distributed SGD with large step size, CMStatistics 2018, Pisa, Italy. [slides]
January 2018, Optimization tutorial, Journées YSP organized by the SFdS, Institut Henri Poincaré. [slides]
December 2017, Stochastic algorithms in machine learning. Tutorial at the "journée algorithmes stochastiques" (Stochastic Algorithms Day), Paris Dauphine. [slides]
November 2017, Stochastic approximation and Markov chains. Invited talk, Télécom ParisTech, Paris. [slides]
February 2017, Scalable methods for Statistics, a short presentation, Cambridge, UK. [slides]
March 2016, Non-parametric stochastic approximation, UC Berkeley.
October 2015, Tradeoffs of learning in Hilbert spaces, ENSAI.
June 2015, Non-parametric stochastic approximation, Machine Learning Summer School.
I defended my thesis on Thursday, September 28, 2017, at 2:30 pm, at Inria. Slides.