no code implementations • 19 Mar 2024 • Konrad Mundinger, Max Zimmer, Sebastian Pokutta
We introduce Neural Parameter Regression (NPR), a novel framework specifically developed for learning solution operators of Partial Differential Equations (PDEs).
1 code implementation • 19 Feb 2024 • Christophe Roux, Max Zimmer, Sebastian Pokutta
In this work, we study the performance of such approaches in the Byzantine setting, where a subset of the clients acts adversarially, aiming to disrupt the learning process.
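In the Byzantine setting, plain averaging of client updates can be dominated by a single malicious client. A standard robust alternative is coordinate-wise median aggregation; the sketch below is illustrative and not necessarily the aggregation rule studied in the paper.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median in each coordinate.

    Unlike the mean, the median is unaffected by a minority of
    arbitrarily corrupted (Byzantine) updates. Illustrative sketch;
    function name and interface are assumptions, not from the paper.
    """
    return np.median(np.stack(updates), axis=0)

# Two honest clients and one Byzantine client sending a huge update:
honest_a = np.array([1.0, 1.0])
honest_b = np.array([1.1, 0.9])
byzantine = np.array([1000.0, -1000.0])
agg = coordinate_wise_median([honest_a, honest_b, byzantine])
# The aggregate stays close to the honest updates.
```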
no code implementations • 23 Dec 2023 • Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta
Neural Networks can be efficiently compressed through pruning, significantly reducing storage and computational demands while maintaining predictive performance.
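Unstructured magnitude pruning, the most common form of the compression described above, zeroes out the fraction of weights with smallest absolute value. A minimal NumPy sketch (function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of entries with smallest magnitude.

    Illustrative sketch of unstructured magnitude pruning; in practice
    this is applied per-layer or globally across a network's tensors.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([0.1, -0.2, 0.3, -0.4])
pruned = magnitude_prune(w, 0.5)  # removes the two smallest-magnitude weights
```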
1 code implementation • 29 Jun 2023 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Model soups (Wortsman et al., 2022) enhance generalization and out-of-distribution (OOD) performance by averaging the parameters of multiple models into a single one, without increasing inference time.
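The uniform model soup of Wortsman et al. (2022) is simply an element-wise average of the parameters of several fine-tuned models, so inference cost stays that of a single model. A minimal sketch with flat parameter vectors (real implementations average per-tensor across a state dict):

```python
import numpy as np

def uniform_soup(param_list):
    """Average the parameter vectors of multiple models into one.

    Sketch of a uniform model soup; assumes all models share the same
    architecture so their parameters align element-wise.
    """
    return np.mean(np.stack(param_list), axis=0)

model_a = np.array([1.0, 3.0, 0.0])
model_b = np.array([3.0, 1.0, 2.0])
soup = uniform_soup([model_a, model_b])
```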
1 code implementation • 1 Jun 2022 • Stephan Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, Sebastian Pokutta
We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks.
1 code implementation • 24 May 2022 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Many existing Neural Network pruning approaches rely either on retraining or on inducing a strong bias during training in order to converge to a sparse solution.
1 code implementation • 1 Nov 2021 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta
Many Neural Network Pruning approaches consist of several iterative training and pruning steps, seemingly losing a significant amount of their performance after pruning and then recovering it in the subsequent retraining phase.
1 code implementation • 14 Oct 2020 • Sebastian Pokutta, Christoph Spiegel, Max Zimmer
In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants.
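Frank-Wolfe methods keep the parameters inside a convex feasible region without ever projecting: each step calls a linear minimization oracle (LMO) over the region and moves toward the returned vertex. The sketch below uses an L1 ball as the feasible region; the function names and the choice of region are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle for the L1 ball of given radius:
    argmin over ||v||_1 <= radius of <grad, v> is a signed vertex
    on the coordinate where the gradient is largest in magnitude."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def frank_wolfe_step(params, grad, radius, step_size):
    """One (stochastic) Frank-Wolfe update: a convex combination of the
    current iterate and the LMO vertex, so feasibility is preserved
    automatically and no projection is needed."""
    v = lmo_l1_ball(grad, radius)
    return (1.0 - step_size) * params + step_size * v

params = np.array([0.5, -0.5])         # inside the unit L1 ball
grad = np.array([1.0, 0.0])            # (stochastic) gradient estimate
new_params = frank_wolfe_step(params, grad, radius=1.0, step_size=0.5)
```

Because each iterate is a convex combination of feasible points, the constraint holds at every step by construction, which is what makes Frank-Wolfe attractive for training under parameter constraints.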