papers

publications by category in reverse chronological order. [*] denotes equal contribution.

preprints

1. Privacy-preserving data release leveraging optimal transport and particle gradient descent
   Konstantin Donhauser*, Javier Abad*, Neha Hulkund, and Fanny Yang
   arXiv preprint, 2024


recent conference publications

2024

1. Detecting critical treatment effect bias in small subgroups
   Piersilvio De Bartolomeis, Javier Abad, Konstantin Donhauser, and Fanny Yang
   Conference on Uncertainty in Artificial Intelligence (UAI), 2024
2. Hidden yet quantifiable: A lower bound for confounding strength using randomized trials
   Piersilvio De Bartolomeis*, Javier Abad*, Konstantin Donhauser, and Fanny Yang
   International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
3. Certified private data release for sparse Lipschitz functions
   Konstantin Donhauser, Johan Lokna, Amartya Sanyal, March Boedihardjo, Robert Hoenig, and Fanny Yang
   International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
4. PILLAR: How to make semi-private learning more effective
   Francesco Pinto, Yaxi Hu, Fanny Yang, and Amartya Sanyal
   IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2024
5. Tight bounds for maximum l1-margin classifiers
   Stefan Stojanovic, Konstantin Donhauser, and Fanny Yang
   Algorithmic Learning Theory (ALT), 2024

2023

1. Can semi-supervised learning use all the data effectively? A lower bound perspective
   Alexandru Ţifrea*, Gizem Yüce*, Amartya Sanyal, and Fanny Yang
   Neural Information Processing Systems (NeurIPS), Spotlight, 2023
2. Margin-based sampling in high dimensions: When being active is less efficient than staying passive
   Alexandru Tifrea*, Jacob Clarysse*, and Fanny Yang
   International Conference on Machine Learning (ICML), 2023
3. Strong inductive biases provably prevent harmless interpolation
   Michael Aerni*, Marco Milanta*, Konstantin Donhauser, and Fanny Yang
   International Conference on Learning Representations (ICLR), 2023
4. Why adversarial training can hurt robust accuracy
   Jacob Clarysse, Julia Hörrmann, and Fanny Yang
   International Conference on Learning Representations (ICLR), 2023

2022

1. How unfair is private learning?
   Amartya Sanyal*, Yaxi Hu*, and Fanny Yang
   Conference on Uncertainty in Artificial Intelligence (UAI), Oral, 2022
2. Semi-supervised novelty detection using ensembles with regularized disagreement
   Alexandru Țifrea, Eric Stavarache, and Fanny Yang
   Conference on Uncertainty in Artificial Intelligence (UAI), 2022
3. Fast rates for noisy interpolation require rethinking the effects of inductive bias
   Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, and Fanny Yang
   International Conference on Machine Learning (ICML), 2022
4. Tight bounds for minimum l1-norm interpolation of noisy data
   Guillaume Wang*, Konstantin Donhauser*, and Fanny Yang
   International Conference on Artificial Intelligence and Statistics (AISTATS), 2022

2021

1. Self-supervised Reinforcement Learning with Independently Controllable Subgoals
   Andrii Zadaianchuk, Georg Martius, and Fanny Yang
   Conference on Robot Learning (CoRL), 2021
2. How rotational invariance of common kernels prevents generalization in high dimensions
   Konstantin Donhauser, Mingqi Wu, and Fanny Yang
   International Conference on Machine Learning (ICML), 2021
3. Interpolation can hurt robust generalization even when there is no noise
   Konstantin Donhauser*, Alexandru Tifrea*, Michael Aerni, Reinhard Heckel, and Fanny Yang
   Neural Information Processing Systems (NeurIPS), 2021

2020

1. Understanding and Mitigating the Tradeoff between Robustness and Accuracy
   Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang
   International Conference on Machine Learning (ICML), 2020

2019

1. Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
   Fanny Yang, Zuowen Wang, and Christina Heinze-Deml
   Neural Information Processing Systems (NeurIPS), 2019

workshop papers

1. How robust accuracy suffers from certified training with convex relaxations
   Piersilvio De Bartolomeis, Jacob Clarysse, Amartya Sanyal, and Fanny Yang
   NeurIPS Workshop on empirical falsification (Long Talk), 2022
2. Provable concept learning for interpretable predictions using variational inference
   Armeen Taeb, Nicolo Ruggeri, Carina Schnuck, and Fanny Yang
   ICML Workshop AI4Science, 2022

More publications can be found on the respective individual pages.