How robust accuracy suffers from certified training with convex relaxations. NeurIPS Workshop on Empirical Falsification (Long Talk), 2022
Margin-based sampling in high dimensions: When being active is less efficient than staying passive. International Conference on Machine Learning (ICML), 2023
Why adversarial training can hurt robust accuracy. International Conference on Learning Representations (ICLR), 2023
Currently, my main interest lies in theoretical perspectives and methods for trustworthy machine learning, ranging from classical common image corruptions to adversarial robustness and compositions thereof.
firstname.lastname@example.org · CAB E62.1 · ETH Zürich