Konstantin Donhauser

PhD student at the ETH AI Center
(jointly advised by Fanny Yang and Afonso Bandeira)

I am an ETH AI Center Doctoral Fellow. My research interests are in high-dimensional statistics and, more generally, in the intersection of mathematics and machine learning. I am part of the groups led by Fanny Yang and Afonso Bandeira.


Papers

  1. Hidden yet quantifiable: A lower bound for confounding strength using randomized trials
    Piersilvio De Bartolomeis*, Javier Abad*, Konstantin Donhauser, and Fanny Yang
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
  2. Certified private data release for sparse Lipschitz functions
    Konstantin Donhauser, Johan Lokna, Amartya Sanyal, March Boedihardjo, Robert Hoenig, and Fanny Yang
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
  3. Tight bounds for maximum l1-margin classifiers
    Stefan Stojanovic, Konstantin Donhauser, and Fanny Yang
    Algorithmic Learning Theory (ALT), 2024
  4. Strong inductive biases provably prevent harmless interpolation
    Michael Aerni*, Marco Milanta*, Konstantin Donhauser, and Fanny Yang
    International Conference on Learning Representations (ICLR), 2023
  5. Fast rates for noisy interpolation require rethinking the effects of inductive bias
    Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, and Fanny Yang
    International Conference on Machine Learning (ICML), 2022
  6. Tight bounds for minimum l1-norm interpolation of noisy data
    Guillaume Wang*, Konstantin Donhauser*, and Fanny Yang
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
  7. How rotational invariance of common kernels prevents generalization in high dimensions
    Konstantin Donhauser, Mingqi Wu, and Fanny Yang
    International Conference on Machine Learning (ICML), 2021
  8. Interpolation can hurt robust generalization even when there is no noise
    Konstantin Donhauser*, Alexandru Tifrea*, Michael Aerni, Reinhard Heckel, and Fanny Yang
    Neural Information Processing Systems (NeurIPS), 2021

Preprints

  1. Privacy-preserving data release leveraging optimal transport and particle gradient descent
    Konstantin Donhauser*, Javier Abad*, Neha Hulkund, and Fanny Yang
    arXiv preprint, 2024

Blog posts

Blog posts will hopefully appear here soon.

Short C.V.

04/2021 - present  PhD student, ETH Zurich
10/2018 - 03/2021  Research Intern, SML Group, ETH Zurich
01/2018 - 06/2020  M.Sc. Electrical Engineering, ETH Zurich
10/2017 - 06/2020  B.Sc. Mathematics, ETH Zurich
10/2014 - 04/2018  B.Sc. Electrical Engineering, ETH Zurich


Contact information

You can find me on LinkedIn, Twitter, and Google Scholar, or simply write me an email at konstantin.donhauser [at] ai.ethz.ch.