Search Results for author: Max Zimmer

Found 8 papers, 6 papers with code

Neural Parameter Regression for Explicit Representations of PDE Solution Operators

no code implementations • 19 Mar 2024 • Konrad Mundinger, Max Zimmer, Sebastian Pokutta

We introduce Neural Parameter Regression (NPR), a novel framework specifically developed for learning solution operators in Partial Differential Equations (PDEs).

Computational Efficiency • Operator learning • +1
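To make the idea concrete, here is a heavily hedged sketch of one plausible reading: a hypernetwork regresses the parameters of a small network that explicitly represents the PDE solution. All sizes, names, and architectural choices below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ParameterRegressor(nn.Module):
    """Illustrative hypernetwork: maps a PDE parameter vector to the weights
    of a small MLP u(x) that explicitly represents the solution.
    All dimensions are arbitrary assumptions for this sketch."""
    def __init__(self, pde_param_dim: int, hidden: int = 64):
        super().__init__()
        self.hidden = hidden
        # Flat parameter count of a 1 -> hidden -> 1 MLP: W1, b1, W2, b2.
        n_params = hidden + hidden + hidden + 1
        self.net = nn.Sequential(
            nn.Linear(pde_param_dim, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, pde_params: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Regress a flat weight vector, then evaluate the implied MLP at x.
        flat = self.net(pde_params)
        h = self.hidden
        w1, b1 = flat[:h].view(h, 1), flat[h:2 * h]
        w2, b2 = flat[2 * h:3 * h].view(1, h), flat[3 * h:]
        return torch.tanh(x @ w1.T + b1) @ w2.T + b2

# Query the regressed solution of one PDE instance on a grid of 50 points.
npr = ParameterRegressor(pde_param_dim=3)
u = npr(torch.randn(3), torch.linspace(0.0, 1.0, 50).unsqueeze(1))  # shape (50, 1)
```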

On the Byzantine-Resilience of Distillation-Based Federated Learning

1 code implementation • 19 Feb 2024 • Christophe Roux, Max Zimmer, Sebastian Pokutta

In this work, we study the performance of such approaches in the Byzantine setting, where a subset of the clients acts adversarially, aiming to disrupt the learning process.

Federated Learning • Knowledge Distillation
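As a hedged illustration of why this setting is delicate, the sketch below aggregates client predictions on a shared public set and compares a plain mean against a coordinate-wise median (a standard robust aggregator, not necessarily the defense analyzed in the paper).

```python
import torch

def aggregate_client_logits(client_logits: torch.Tensor, robust: bool = True) -> torch.Tensor:
    """client_logits: shape (n_clients, n_public_examples, n_classes), each
    client's predictions on a shared public dataset. A coordinate-wise median
    tolerates a minority of Byzantine clients; a plain mean does not."""
    if robust:
        return client_logits.median(dim=0).values
    return client_logits.mean(dim=0)

# Nine honest clients roughly agree; one Byzantine client sends huge logits.
honest = torch.randn(9, 100, 10)
byzantine = 1e6 * torch.ones(1, 100, 10)
logits = torch.cat([honest, byzantine], dim=0)
print(aggregate_client_logits(logits, robust=False).abs().max())  # blown up by the attacker
print(aggregate_client_logits(logits, robust=True).abs().max())   # close to honest consensus
```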

PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs

no code implementations • 23 Dec 2023 • Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta

Neural Networks can be efficiently compressed through pruning, significantly reducing storage and computational demands while maintaining predictive performance.
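For context on that claim, here is a minimal sketch of magnitude pruning and its storage effect, with a sparse COO tensor as an illustrative stand-in for a real compressed format; the matrix size and sparsity level are arbitrary.

```python
import torch

w = torch.randn(1024, 1024)  # a dense weight matrix

# Magnitude pruning: zero out the 95% smallest-magnitude entries.
threshold = w.abs().flatten().kthvalue(int(0.95 * w.numel())).values
w_pruned = torch.where(w.abs() > threshold, w, torch.zeros_like(w))

# Store only the surviving entries (COO carries int64 index overhead).
sparse = w_pruned.to_sparse()
dense_bytes = w.numel() * w.element_size()
sparse_bytes = (sparse.values().numel() * sparse.values().element_size()
                + sparse.indices().numel() * sparse.indices().element_size())
print(f"{dense_bytes / 1e6:.1f} MB dense -> {sparse_bytes / 1e6:.1f} MB sparse")
```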

Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging

1 code implementation • 29 Jun 2023 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Model soups (Wortsman et al., 2022) enhance generalization and out-of-distribution (OOD) performance by averaging the parameters of multiple models into a single one, without increasing inference time.
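A minimal sketch of the underlying averaging step (the "uniform soup" variant), assuming all models share one architecture; the helper name is ours, and whether the average actually improves on its ingredients depends on the models.

```python
import copy
import torch
import torch.nn as nn

def uniform_soup(models: list[nn.Module]) -> nn.Module:
    """Average the parameters of several same-architecture models into a
    single model; inference cost stays that of one model. For simplicity,
    buffers (e.g., BatchNorm statistics) are averaged too."""
    soup = copy.deepcopy(models[0])
    state = soup.state_dict()
    with torch.no_grad():
        for key in state:
            state[key] = torch.stack(
                [m.state_dict()[key].float() for m in models]
            ).mean(dim=0)
    soup.load_state_dict(state)
    return soup

# Usage: average three fine-tuned copies of the same architecture.
models = [nn.Linear(16, 4) for _ in range(3)]
merged = uniform_soup(models)
```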

Interpretability Guarantees with Merlin-Arthur Classifiers

1 code implementation • 1 Jun 2022 • Stephan Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, Sebastian Pokutta

We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks.

Feature Correlation
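A toy sketch of the interactive setup as described: a prover ("Merlin") reveals a small feature subset as a certificate, and a verifier ("Arthur") must classify from that subset alone. The greedy prover and tiny linear verifier below are illustrative stand-ins, not the paper's agents or its guarantees.

```python
import torch
import torch.nn as nn

k = 5                      # certificate size: features Merlin may reveal
arthur = nn.Linear(20, 2)  # verifier: classifies from the masked input only

def merlin(x: torch.Tensor, label: int) -> torch.Tensor:
    """Greedy prover: pick the k features that most raise Arthur's
    confidence in the claimed label (an illustrative strategy)."""
    mask = torch.zeros_like(x)
    for _ in range(k):
        best, best_idx = -float("inf"), None
        for i in (mask == 0).nonzero(as_tuple=True)[0]:
            trial = mask.clone()
            trial[i] = 1.0
            score = arthur(x * trial).softmax(-1)[label]
            if score > best:
                best, best_idx = score, i.item()
        mask[best_idx] = 1.0
    return mask

x = torch.randn(20)
certificate = merlin(x, label=1)
print(arthur(x * certificate).softmax(-1))  # Arthur's verdict on the certificate
```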

Compression-aware Training of Neural Networks using Frank-Wolfe

1 code implementation • 24 May 2022 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Many existing Neural Network pruning approaches rely on either retraining or inducing a strong bias in order to converge to a sparse solution throughout training.

Network Pruning
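A hedged sketch of the ingredient that makes Frank-Wolfe compression-aware: a linear minimization oracle (LMO) over a sparsity-inducing constraint set. The K-sparse polytope below is one such set used in this line of work; the constants and step size are illustrative.

```python
import torch

def lmo_k_sparse(grad: torch.Tensor, k: int, radius: float) -> torch.Tensor:
    """LMO over the K-sparse polytope (vertices have exactly k entries of
    magnitude radius): the vertex minimizing <grad, v> puts -radius * sign(grad)
    on the k largest-magnitude gradient coordinates and zero elsewhere.
    Assumes a flat parameter vector."""
    v = torch.zeros_like(grad)
    idx = grad.abs().topk(k).indices
    v[idx] = -radius * grad[idx].sign()
    return v

# Frank-Wolfe step w <- (1 - g) * w + g * v: iterates stay in the polytope
# and are pulled toward k-sparse vertices throughout training.
w = torch.zeros(1000)  # start inside the polytope
g = torch.randn(1000)  # stand-in for a stochastic gradient of the loss
w = 0.95 * w + 0.05 * lmo_k_sparse(g, k=50, radius=1.0)
```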

How I Learned to Stop Worrying and Love Retraining

1 code implementation • 1 Nov 2021 • Max Zimmer, Christoph Spiegel, Sebastian Pokutta

Many Neural Network Pruning approaches consist of several iterative training and pruning steps, seemingly losing a significant amount of their performance after pruning and then recovering it in the subsequent retraining phase.

Network Pruning
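A schematic of the train-prune-retrain cycle the paper studies, in the shape of iterative magnitude pruning on toy data; the schedule, pruning amounts, and data are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(256, 32), torch.randint(0, 2, (256,))

def retrain(steps: int) -> None:
    """Placeholder (re)training phase on toy data."""
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()

retrain(100)  # initial dense training
for cycle in range(3):
    # Prune 50% of the remaining weights in every Linear layer...
    for m in model.modules():
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=0.5)
    # ...performance typically dips here and recovers during retraining.
    retrain(50)
```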

Deep Neural Network Training with Frank-Wolfe

1 code implementation • 14 Oct 2020 • Sebastian Pokutta, Christoph Spiegel, Max Zimmer

In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants.
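A minimal sketch of such a constrained training step, assuming an L-infinity-ball feasible region with its closed-form LMO; the fixed step size stands in for the schedules in which stochastic variants differ.

```python
import torch
import torch.nn as nn

radius, lr = 1.0, 0.05
model = nn.Linear(10, 2)
X, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

for step in range(100):
    model.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    with torch.no_grad():
        for p in model.parameters():
            # LMO over the L-inf ball: v = -radius * sign(grad) minimizes <grad, v>.
            v = -radius * p.grad.sign()
            # Frank-Wolfe update: a convex combination keeps p inside the ball.
            p.mul_(1 - lr).add_(lr * v)
```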
