Welcome to the website of the Statistical Machine Learning group at ETH Zurich!
We are a group of curious minds from different parts of the world who study exciting questions at the intersection of statistics and machine learning. At a high level, we like to develop theoretical understanding that informs methodological advancements, and vice versa.
Most projects in our group revolve around the (robust and fair) generalization of overparameterized models in high dimensions (linear models or neural networks). Please have a look at our recent papers to get a better sense of our research interests. For example, we currently study the effects of inductive bias on interpolating models for standard and robust generalization, investigate semi-supervised learning and memorization in the context of worst-group accuracy, and develop interpretable models that simultaneously learn high-level concepts.
| Jan 21, 2023 | Our papers on adversarial training hurting robust accuracy and on strong inductive biases preventing harmless interpolation were accepted to ICLR '23! |
| Dec 11, 2022 | Here are the slides for my talk at the NeurIPS workshop on empirical falsification, "I Can't Believe It's Not Better," in New Orleans, Dec. 2022, on the failure of adversarial training vs. standard training for robust generalization and the failure of uncertainty-based sampling vs. uniform sampling. |
| Oct 25, 2022 | Here are the slides for my tutorial-style talk at the mathematics of machine learning workshop at BCAM Bilbao and the online 1W-MINDS seminar, about our work on the new bias-variance trade-off for interpolators induced by the strength of the inductive bias. |
| May 16, 2022 | The papers on the fairness-privacy trade-off (oral presentation) and on semi-supervised novelty detection were accepted to UAI '22, and the paper on close-to-optimal rates for noisy interpolators was accepted to ICML '22. |