no code implementations • 19 Feb 2024 • Alexander Auras, Kanchana Vaishnavi Gandikota, Hannah Droege, Michael Moeller
This paper attempts to provide an overview of current approaches for solving inverse problems in imaging using variational methods and machine learning.
1 code implementation • 18 Feb 2024 • Kanchana Vaishnavi Gandikota, Paramanand Chandramouli, Hannah Droege, Michael Moeller
Both classical approaches and deep networks are affected by such attacks, which change the visual appearance of localized lesions even for extremely small perturbations.
no code implementations • 27 Nov 2023 • Marius Bock, Michael Moeller, Kristof Van Laerhoven
Our results show that state-of-the-art TAL models are able to outperform popular inertial models on 4 out of 6 wearable activity recognition benchmark datasets, with improvements of up to 25% in F1-score.
no code implementations • ICCV 2023 • Maolin Gao, Paul Roetzer, Marvin Eisenberger, Zorah Lähner, Michael Moeller, Daniel Cremers, Florian Bernard
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for highly non-rigid shapes.
no code implementations • 18 Jul 2023 • Jovita Lukasik, Michael Moeller, Margret Keuper
We are interested in the single prediction task for robustness and the joint multi-objective of clean and robust accuracy.
no code implementations • 28 Apr 2023 • Hendrik Sommerhoff, Shashank Agnihotri, Mohamed Saleh, Michael Moeller, Margret Keuper, Andreas Kolb
The success of deep learning is frequently described as the ability to train all parameters of a network on a specific application in an end-to-end fashion.
1 code implementation • 11 Apr 2023 • Marius Bock, Hilde Kuehne, Kristof Van Laerhoven, Michael Moeller
Though research has shown the complementarity of camera- and inertial-based data, datasets which offer both egocentric video and inertial-based sensor data remain scarce.
no code implementations • CVPR 2023 • Harshil Bhatia, Edith Tretschk, Zorah Lähner, Marcel Seelbach Benkner, Michael Moeller, Christian Theobalt, Vladislav Golyanik
Jointly matching multiple, non-rigidly deformed 3D shapes is a challenging, NP-hard problem.
1 code implementation • 14 Dec 2022 • Samira Kabri, Alexander Auras, Danilo Riccio, Hartmut Bauermeister, Martin Benning, Michael Moeller, Martin Burger
The reconstruction of images from their corresponding noisy Radon transform is a typical example of an ill-posed linear inverse problem arising in computerized tomography (CT).
no code implementations • 13 Oct 2022 • Marcel Seelbach Benkner, Maximilian Krahn, Edith Tretschk, Zorah Lähner, Michael Moeller, Vladislav Golyanik
As a result, the solution encodings can be chosen flexibly and compactly.
no code implementations • 5 Oct 2022 • Kanchana Vaishnavi Gandikota, Paramanand Chandramouli, Michael Moeller
Recent approaches employ deep learning-based solutions for the recovery of a sharp image from its blurry observation.
no code implementations • 24 Sep 2022 • Kanchana Vaishnavi Gandikota, Jonas Geiping, Zorah Lähner, Adam Czapliński, Michael Moeller
Many applications require robustness, or ideally invariance, of neural networks to certain transformations of input data.
1 code implementation • 15 Mar 2022 • Lukas Koestler, Daniel Grittner, Michael Moeller, Daniel Cremers, Zorah Lähner
Neural fields have gained significant attention in the computer vision community due to their excellent performance in novel view synthesis, geometry reconstruction, and generative modeling.
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Jonas Geiping, Jovita Lukasik, Margret Keuper, Michael Moeller
Differentiable architecture search (DARTS) is a widely researched tool for neural architecture search, due to its promising results for image classification.
1 code implementation • 13 Oct 2021 • Marius Bock, Alexander Hoelzemann, Michael Moeller, Kristof Van Laerhoven
Activity recognition systems that are capable of estimating human activities from wearable inertial sensors have come a long way in the past decades.
1 code implementation • ICLR 2022 • Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks.
no code implementations • 12 Aug 2021 • Jonas Geiping, Jovita Lukasik, Margret Keuper, Michael Moeller
In this work, we investigate DAS in a systematic case study of inverse problems, which allows us to analyze these potential benefits in a controlled manner.
1 code implementation • 2 Aug 2021 • Marius Bock, Alexander Hoelzemann, Michael Moeller, Kristof Van Laerhoven
Recent studies in Human Activity Recognition (HAR) have shown that Deep Learning methods are able to outperform classical Machine Learning algorithms.
no code implementations • 13 Jul 2021 • Hartmut Bauermeister, Emanuel Laude, Thomas Möllenhoff, Michael Moeller, Daniel Cremers
In contrast to existing discretizations which suffer from a grid bias, we show that a piecewise polynomial discretization better preserves the continuous nature of our problem.
no code implementations • 8 Jul 2021 • Marcel Seelbach Benkner, Vladislav Golyanik, Christian Theobalt, Michael Moeller
In this work, we address such problems with emerging quantum computing technology and propose several reformulations of QAPs as unconstrained problems suitable for efficient execution on quantum hardware.
no code implementations • 18 Jun 2021 • Kanchana Vaishnavi Gandikota, Jonas Geiping, Zorah Lähner, Adam Czapliński, Michael Moeller
Many applications require the robustness, or ideally the invariance, of a neural network to certain transformations of input data.
no code implementations • ICCV 2021 • Marcel Seelbach Benkner, Zorah Lähner, Vladislav Golyanik, Christof Wunderlich, Christian Theobalt, Michael Moeller
Finding shape correspondences can be formulated as an NP-hard quadratic assignment problem (QAP) that becomes infeasible for shapes with high sampling density.
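To make the combinatorial nature of the problem concrete, here is a minimal brute-force QAP sketch for matching two tiny point sets (illustrative only, not the paper's quantum-annealing approach; shapes, points, and the distortion measure are chosen for the example):

```python
import numpy as np
from itertools import permutations

# Tiny brute-force QAP for correspondence between two 3-point "shapes":
# find the permutation p minimizing the distortion of pairwise distances.
# Exhaustive search over n! permutations becomes infeasible quickly,
# which is what motivates relaxations and specialized solvers.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Y = X[[2, 0, 1]] + 5.0             # same shape, relabeled and translated

def dist(P):                        # pairwise distance matrix
    return np.linalg.norm(P[:, None] - P[None, :], axis=-1)

DX, DY = dist(X), dist(Y)
best = min(permutations(range(3)),
           key=lambda p: np.abs(DX - DY[np.ix_(p, p)]).sum())
print(best)                         # (1, 2, 0): recovers the relabeling
```

Since all pairwise distances in the toy shape are distinct, the zero-distortion permutation is unique.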
1 code implementation • 26 Feb 2021 • Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein
Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.
1 code implementation • NeurIPS 2020 • Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
The idea of federated learning is to collaboratively train a neural network on a server.
no code implementations • 23 Oct 2020 • Hartmut Bauermeister, Martin Burger, Michael Moeller
One of the main challenges in linear inverse problems is that a majority of such problems are ill-posed in the sense that the solution does not depend on the data continuously.
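The discontinuous dependence on the data can be seen in a two-dimensional toy example (a sketch with an assumed diagonal forward operator, not the paper's setting): a tiny singular value turns tiny measurement noise into a huge error in the naive solution, while Tikhonov regularization stabilizes it.

```python
import numpy as np

# Toy illustration of ill-posedness: an operator A with a tiny singular
# value makes the naive solution x = A^{-1} y unstable under small
# perturbations of the data y.
A = np.diag([1.0, 1e-6])           # forward operator, tiny singular value
x_true = np.array([1.0, 1.0])
y = A @ x_true

delta = np.array([0.0, 1e-4])      # tiny measurement noise
x_naive = np.linalg.solve(A, y + delta)   # noise amplified by factor 1e6

alpha = 1e-4                       # Tikhonov regularization weight
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ (y + delta))

print(np.linalg.norm(x_naive - x_true))   # 100.0: large error
print(np.linalg.norm(x_reg - x_true))     # ~1: bounded error
```

The regularized solution trades a small bias for stability, which is the basic mechanism behind all variational approaches to ill-posed problems.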
2 code implementations • ICLR 2021 • Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
no code implementations • 1 Jul 2020 • Christina Runkel, Stefan Dorenkamp, Hartmut Bauermeister, Michael Moeller
We demonstrate that purely learning on softmax inputs in combination with scarce training data leads to overfitting, as the network memorizes the inputs.
1 code implementation • 30 Jun 2020 • Guruprasad Hegde, Avinash Nittur Ramesh, Kanchana Vaishnavi Gandikota, Roman Obermaisser, Michael Moeller
Deep Learning systems have proven to be extremely successful for image recognition tasks for which significant amounts of training data are available, e.g., on the famous ImageNet dataset.
no code implementations • 13 May 2020 • Paramanand Chandramouli, Kanchana Vaishnavi Gandikota, Andreas Goerlitz, Andreas Kolb, Michael Moeller
We develop a generative model conditioned on the central view of the light field and incorporate this as a prior in an energy minimization framework to address diverse light field reconstruction tasks.
no code implementations • 23 Apr 2020 • Jonas Geiping, Fjedor Gaede, Hartmut Bauermeister, Michael Moeller
We discuss this methodology in detail and show examples in multi-label segmentation by minimal partitions and stereo estimation, where we demonstrate that the proposed graph discretization can reduce runtime as well as memory consumption of convex relaxations of matching problems by up to a factor of 10.
6 code implementations • 31 Mar 2020 • Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
The idea of federated learning is to collaboratively train a neural network on a server.
1 code implementation • ICLR 2020 • Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Hendrik Sommerhoff, Andreas Kolb, Michael Moeller
In this paper we consider the combination of both approaches by projecting the outputs of a plug-and-play denoising network onto the cone of descent directions to a given energy.
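The projection idea can be sketched in a few lines (a simplified version in our own notation, not the paper's code; the energy and the denoiser direction are hypothetical): any uphill component of the proposed update along the energy gradient is removed, so the result is guaranteed to be a descent direction.

```python
import numpy as np

# Project a denoiser-proposed update direction d onto the half-space
# { v : <v, g> <= -eps } of descent directions for an energy with
# gradient g, so that taking the step cannot increase the energy.
def project_to_descent(d, g, eps=0.0):
    s = d @ g
    if s > -eps:
        d = d - (s + eps) / (g @ g) * g
    return d

y = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
g = x - y                          # gradient of E(x) = 0.5 ||x - y||^2
d = np.array([1.0, -0.5])          # hypothetical denoiser step direction
d_proj = project_to_descent(d, g)
print(d_proj @ g)                  # 0.0: no longer points uphill
```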
1 code implementation • ICCV 2019 • Jonas Geiping, Michael Moeller
Energy minimization methods are a classical tool in a multitude of computer vision applications.
no code implementations • ICCV 2019 • Michael Moeller, Thomas Möllenhoff, Daniel Cremers
The last decade has seen tremendous success in solving various computer vision problems with the help of deep learning techniques.
1 code implementation • 24 Jul 2018 • Rania Briq, Michael Moeller, Juergen Gall
Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images.
no code implementations • 21 Jun 2018 • Michael Moeller, Otmar Loffeld, Juergen Gall, Felix Krahmer
The idea of compressed sensing is to exploit representations in suitable (overcomplete) dictionaries that allow one to recover signals far beyond the Nyquist rate, provided that they admit a sparse representation in the respective dictionary.
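A minimal sparse-recovery sketch (the classical ISTA baseline, not this paper's contribution; dimensions and parameters are chosen for illustration) shows how a sparse signal can be recovered from far fewer measurements than unknowns:

```python
import numpy as np

# ISTA (iterative soft-thresholding) for the LASSO problem
#   min_x 0.5 ||A x - y||^2 + lam ||x||_1,
# recovering a k-sparse signal from m < n random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3               # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([1.0, -2.0, 1.5])
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.linalg.norm(x - x_true))  # small reconstruction error
```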
1 code implementation • ECCV 2018 • Peter Ochs, Tim Meinhardt, Laura Leal-Taixe, Michael Moeller
A lifting layer increases the dimensionality of the input, naturally yields a linear spline when combined with a fully connected layer, and therefore closes the gap between low and high dimensional approximation problems.
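The 1-D case can be sketched directly (our own minimal rendering of the idea, with hypothetical breakpoints and weights): lifting maps a scalar to barycentric coordinates with respect to fixed breakpoints, so a subsequent linear layer realizes an arbitrary linear spline.

```python
import numpy as np

# 1-D lifting: map scalar x to barycentric coordinates w.r.t. fixed
# breakpoints t; theta @ lift(x) is then a continuous piecewise-linear
# function whose values at the breakpoints are exactly theta.
t = np.linspace(-1.0, 1.0, 5)              # breakpoints

def lift(x):
    l = np.zeros(len(t))
    i = np.clip(np.searchsorted(t, x) - 1, 0, len(t) - 2)
    w = (x - t[i]) / (t[i + 1] - t[i])
    l[i], l[i + 1] = 1 - w, w
    return l

theta = np.array([1.0, 0.0, 2.0, -1.0, 0.5])   # "learned" spline values
print(theta @ lift(-1.0))   # 1.0 (value at first breakpoint)
print(theta @ lift(0.0))    # 2.0 (value at middle breakpoint)
print(theta @ lift(0.25))   # 0.5 (halfway between 2.0 and -1.0)
```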
no code implementations • 20 Feb 2018 • Jonas Geiping, Michael Moeller
A popular class of algorithms for solving such problems are majorization-minimization techniques which iteratively approximate the composite nonconvex function by a majorizing function that is easy to minimize.
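A one-dimensional sketch of majorization-minimization (illustrative only; the function and majorization constant are assumed for the example): each step minimizes a quadratic upper bound on the nonconvex objective, which guarantees monotone descent.

```python
# MM on the nonconvex f(x) = (x^2 - 1)^2: minimize the quadratic majorizer
#   q(x) = f(xk) + f'(xk)(x - xk) + (L/2)(x - xk)^2,   L >= sup |f''|,
# whose minimizer is simply a gradient step with step size 1/L.
f = lambda x: (x ** 2 - 1) ** 2
df = lambda x: 4 * x * (x ** 2 - 1)

L = 50.0                           # valid bound on f'' over the iterates
x = 2.0
vals = [f(x)]
for _ in range(100):
    x = x - df(x) / L              # argmin of the quadratic majorizer
    vals.append(f(x))

assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))  # monotone descent
print(x)                           # ~1.0, a minimizer of f
```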
no code implementations • CVPR 2018 • Florian Bernard, Christian Theobalt, Michael Moeller
In this work we study convex relaxations of quadratic optimisation problems over permutation matrices.
1 code implementation • ICLR 2018 • Thomas Frerix, Thomas Möllenhoff, Michael Moeller, Daniel Cremers
Specifically, we show that backpropagation of a prediction error is equivalent to sequential gradient descent steps on a quadratic penalty energy, which comprises the network activations as variables of the optimization.
1 code implementation • ICCV 2017 • Tim Meinhardt, Michael Moeller, Caner Hazirbas, Daniel Cremers
While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks.
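The plug-and-play idea behind such hybrids can be sketched as follows (a 1-D toy, where a simple moving-average filter stands in as a placeholder for the learned neural denoiser; the signal and parameters are assumptions):

```python
import numpy as np

# Plug-and-play sketch: in proximal gradient for
#   min_x 0.5 ||A x - y||^2 + R(x),
# replace the proximal operator of R with an off-the-shelf denoiser.
def box_denoise(x, w=5):           # placeholder for a learned denoiser
    return np.convolve(x, np.ones(w) / w, mode="same")

n = 200
t = np.linspace(0, 1, n)
x_true = (t > 0.3).astype(float) - (t > 0.7)    # piecewise-constant signal
A = np.eye(n)                                    # identity operator: denoising
y = x_true + 0.2 * np.random.default_rng(1).normal(size=n)

x, step = y.copy(), 0.5
for _ in range(20):
    x = x - step * A.T @ (A @ x - y)             # data-fidelity gradient step
    x = box_denoise(x)                           # "prox" replaced by denoiser
print(np.linalg.norm(x - x_true) < np.linalg.norm(y - x_true))
```

Swapping the box filter for a trained network is exactly the step that lets such schemes combine variational data terms with learned priors.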
1 code implementation • 23 Nov 2016 • Jonas Geiping, Hendrik Dirks, Daniel Cremers, Michael Moeller
The idea of video super resolution is to use different viewpoints of a single scene to enhance the overall resolution and quality.
1 code implementation • 7 Apr 2016 • Emanuel Laude, Thomas Möllenhoff, Michael Moeller, Jan Lellmann, Daniel Cremers
Convex relaxations of nonconvex multilabel problems have been demonstrated to produce superior (provably optimal or near-optimal) solutions to a variety of classical computer vision problems.
2 code implementations • CVPR 2016 • Thomas Möllenhoff, Emanuel Laude, Michael Moeller, Jan Lellmann, Daniel Cremers
We propose a novel spatially continuous framework for convex relaxations based on functional lifting.
no code implementations • ICCV 2015 • Michael Moeller, Julia Diebold, Guy Gilboa, Daniel Cremers
This paper presents the idea of learning optimal filters for color image reconstruction based on a novel concept of nonlinear spectral image decompositions recently proposed by Guy Gilboa.
no code implementations • 5 Oct 2015 • Guy Gilboa, Michael Moeller, Martin Burger
We present in this paper the motivation and theory of nonlinear spectral representations, based on convex regularizing functionals.
no code implementations • 6 Aug 2015 • Joan Duran, Michael Moeller, Catalina Sbert, Daniel Cremers
Even after over two decades, the total variation (TV) remains one of the most popular regularizations for image processing problems and has sparked a tremendous amount of research, particularly to move from scalar to vector-valued functions.
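A minimal scalar-valued TV denoising sketch (gradient descent on a smoothed TV term; signal, weights, and smoothing are assumptions for the example, and the paper's vector-valued extensions go beyond this):

```python
import numpy as np

# Smoothed 1-D total variation denoising:
#   min_x 0.5 ||x - y||^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps),
# solved by plain gradient descent; eps smooths the kink of |.| at 0.
rng = np.random.default_rng(2)
n = 100
x_true = np.repeat([0.0, 1.0, -1.0, 0.5], n // 4)
y = x_true + 0.1 * rng.normal(size=n)

lam, eps, step = 0.2, 1e-2, 0.1
x = y.copy()
for _ in range(500):
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)          # derivative of smoothed |d|
    grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
    x = x - step * ((x - y) + lam * grad_tv)

print(np.linalg.norm(x - x_true) < np.linalg.norm(y - x_true))
```

TV keeps the jumps of the piecewise-constant signal sharp while suppressing the noise, which is the property that made it so popular.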
no code implementations • 18 Jun 2015 • Emanuele Rodolà, Michael Moeller, Daniel Cremers
Since their introduction in the shape analysis community, functional maps have met with considerable success due to their ability to compactly represent dense correspondences between deformable shapes, with applications ranging from shape matching and image segmentation, to exploration of large shape collections.
1 code implementation • 1 Aug 2014 • Michael Moeller, Martin Benning, Carola Schönlieb, Daniel Cremers
This paper deals with the problem of reconstructing a depth map from a sequence of differently focused images, also known as depth from focus or shape from focus.
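The data term of depth from focus can be sketched with a synthetic focal stack (an illustration only, with an assumed texture and a crude binomial blur as a stand-in for defocus; the paper adds variational regularization on top of such a per-pixel measure):

```python
import numpy as np

# Depth from focus: per pixel, pick the focal-stack slice maximizing a
# local sharpness measure (here: 3x3-aggregated squared Laplacian).
def laplacian2d(img):
    return (-4 * img
            + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1))

def local_energy(img):             # sharpness aggregated over a 3x3 window
    l2 = laplacian2d(img) ** 2
    return sum(np.roll(np.roll(l2, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1))

def blur(img, passes=2):           # separable binomial blur as toy defocus
    out = img.copy()
    for _ in range(passes):
        out = 0.25 * (2 * out + np.roll(out, 1, 0) + np.roll(out, -1, 0))
        out = 0.25 * (2 * out + np.roll(out, 1, 1) + np.roll(out, -1, 1))
    return out

rng = np.random.default_rng(3)
texture = rng.normal(size=(32, 32))
depth_true = np.zeros((32, 32), dtype=int)
depth_true[:, 16:] = 2             # right half in focus at slice 2

# 3-slice focal stack: each pixel is sharp only at its true depth.
stack = np.stack([np.where(depth_true == d, texture, blur(texture))
                  for d in range(3)])

depth_est = np.argmax(np.stack([local_energy(s) for s in stack]), axis=0)
print((depth_est == depth_true).mean())   # mostly correct, errors near seams
```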
no code implementations • 7 Jul 2014 • Thomas Möllenhoff, Evgeny Strekalovskiy, Michael Moeller, Daniel Cremers
This paper deals with the analysis of a recent reformulation of the primal-dual hybrid gradient method [Zhu and Chan 2008, Pock, Cremers, Bischof and Chambolle 2009, Esser, Zhang and Chan 2010, Chambolle and Pock 2011], which allows it to be applied to nonconvex regularizers, as first proposed for truncated quadratic penalization in [Strekalovskiy and Cremers 2014].
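For reference, the convex baseline iteration being reformulated looks as follows on 1-D TV denoising (a sketch of standard PDHG, not the paper's nonconvex extension; signal and step sizes are assumptions):

```python
import numpy as np

# Primal-dual hybrid gradient (Chambolle-Pock) for
#   min_x 0.5 ||x - y||^2 + lam ||D x||_1,
# D = finite differences. Step sizes satisfy sigma * tau * ||D||^2 < 1.
rng = np.random.default_rng(4)
n = 100
x_true = np.repeat([0.0, 1.0], n // 2)
y = x_true + 0.1 * rng.normal(size=n)

D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference operator
lam, tau, sigma = 0.3, 0.25, 0.9           # 0.25 * 0.9 * 4 = 0.9 < 1
x, xbar, p = y.copy(), y.copy(), np.zeros(n - 1)

for _ in range(300):
    p = np.clip(p + sigma * (D @ xbar), -lam, lam)    # dual prox: projection
    x_old = x
    x = (x - tau * (D.T @ p) + tau * y) / (1 + tau)   # primal prox of data term
    xbar = 2 * x - x_old                               # over-relaxation
print(np.linalg.norm(x - x_true) < np.linalg.norm(y - x_true))
```

The nonconvex reformulation analyzed in the paper keeps this alternating structure but replaces the convex regularizer handled by the dual step.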