1 code implementation • 7 Feb 2024 • Sebastian Neumayer, Viktor Stein, Gabriele Steidl, Nicolaj Rux
In this paper, we use the so-called kernel mean embedding to show that the corresponding regularization can be rewritten as the Moreau envelope of some function in the reproducing kernel Hilbert space associated with the kernel $K$.
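For reference, the Moreau envelope mentioned here is the standard infimal-convolution smoothing of a function; a generic sketch over the RKHS $\mathcal{H}_K$, with a smoothing parameter $\lambda > 0$ whose normalization may differ from the paper's, reads:

```latex
f_\lambda(x) \;=\; \min_{y \in \mathcal{H}_K} \Big( f(y) \;+\; \tfrac{1}{2\lambda}\,\|x - y\|_{\mathcal{H}_K}^2 \Big)
```

The envelope $f_\lambda$ is a smoothed lower approximation of $f$ with the same minimizers, which is what makes such a rewriting useful for analyzing the regularization.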
2 code implementations • 21 Aug 2023 • Alexis Goujon, Sebastian Neumayer, Michael Unser
We propose to learn non-convex regularizers with a prescribed upper bound on their weak-convexity modulus.
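The weak-convexity modulus admits a standard definition (stated here for a generic $\rho \ge 0$; the paper's normalization may differ): a regularizer $R$ is $\rho$-weakly convex if adding a quadratic of curvature $\rho$ restores convexity,

```latex
x \;\mapsto\; R(x) + \frac{\rho}{2}\,\|x\|_2^2 \quad \text{is convex.}
```

Prescribing an upper bound on $\rho$ therefore limits how non-convex the learned regularizer can become.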
no code implementations • 31 Mar 2023 • Sebastian Neumayer, Lénaïc Chizat, Michael Unser
In supervised learning, the regularization path is sometimes used as a convenient theoretical proxy for the optimization path of gradient descent initialized from zero.
2 code implementations • 22 Nov 2022 • Alexis Goujon, Sebastian Neumayer, Pakshal Bohra, Stanislas Ducotterd, Michael Unser
The emergence of deep-learning-based methods to solve image-reconstruction problems has enabled a significant increase in reconstruction quality.
1 code implementation • 28 Oct 2022 • Stanislas Ducotterd, Alexis Goujon, Pakshal Bohra, Dimitris Perdios, Sebastian Neumayer, Michael Unser
Lipschitz-constrained neural networks have several advantages over unconstrained ones and can be applied to a variety of problems, making them a topic of attention in the deep learning community.
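One standard way to build a Lipschitz-constrained layer (a generic numpy sketch of the common spectral-rescaling technique, not necessarily the construction used in this work) is to rescale the weight matrix so that its spectral norm does not exceed a target bound `L`:

```python
import numpy as np

def lipschitz_linear(W, x, L=1.0):
    # Rescale W so its spectral norm (largest singular value) is at most L.
    # This makes x -> W_hat @ x an L-Lipschitz map. Standard technique;
    # hypothetical helper name, not from the paper.
    sigma = np.linalg.norm(W, ord=2)   # spectral norm of W
    W_hat = W * (L / max(sigma, L))    # shrink only if sigma exceeds L
    return W_hat @ x
```

Since compositions of L-Lipschitz linear maps and 1-Lipschitz activations remain Lipschitz, stacking such layers bounds the Lipschitz constant of the whole network.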
no code implementations • 14 Jun 2022 • Pol del Aguila Pla, Sebastian Neumayer, Michael Unser
Robustness and stability of image-reconstruction algorithms have recently come under scrutiny.
no code implementations • 13 Apr 2022 • Sebastian Neumayer, Alexis Goujon, Pakshal Bohra, Michael Unser
Lipschitz-constrained neural networks have many applications in machine learning.
1 code implementation • 4 Nov 2020 • Johannes Hertrich, Sebastian Neumayer, Gabriele Steidl
In this paper, we introduce convolutional proximal neural networks (cPNNs), which are by construction averaged operators.
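For context, an operator $T$ is $\alpha$-averaged when it can be written as $T = (1-\alpha)\,\mathrm{Id} + \alpha R$ with $R$ nonexpansive (1-Lipschitz) and $\alpha \in (0,1)$. A minimal numpy sketch of this generic construction (the cPNN architecture in the paper is more specific):

```python
import numpy as np

def averaged(R, alpha):
    # T = (1 - alpha) * Id + alpha * R. If R is nonexpansive and
    # 0 < alpha < 1, then T is alpha-averaged, and fixed-point iterations
    # of T converge to a fixed point of R.
    return lambda x: (1.0 - alpha) * x + alpha * R(x)

# Illustrative nonexpansive map: a linear map rescaled to spectral norm 1.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A / np.linalg.norm(A, ord=2)
T = averaged(lambda x: A @ x, alpha=0.5)
```

Averagedness is the property that makes such networks safe to use inside iterative reconstruction schemes: plugging an averaged operator into a fixed-point iteration preserves convergence guarantees.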
1 code implementation • 7 Sep 2020 • Paul Hagemann, Sebastian Neumayer
In this paper, we analyze the properties of invertible neural networks, which provide a way of solving inverse problems.
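A standard building block of invertible networks is the additive coupling layer, sketched here in numpy (the architectures analyzed in the paper may differ): half the input passes through unchanged, and the other half is shifted by a function of the first half, which makes the layer invertible for any choice of that function.

```python
import numpy as np

def coupling_forward(x, t):
    # Keep the first half of x; shift the second half by t(first half).
    # Invertible for ANY map t, since the inverse subtracts the same shift.
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    return np.concatenate([x1, x2 + t(x1)])

def coupling_inverse(y, t):
    d = len(y) // 2
    y1, y2 = y[:d], y[d:]
    return np.concatenate([y1, y2 - t(y1)])
```

Because the inverse is exact and cheap to evaluate, such layers let one map back and forth between data and latent space, which is what makes invertible networks attractive for inverse problems.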