no code implementations • ICML 2020 • Rotem Mulayoff, Tomer Michaeli
In this paper, we characterize the wide minima in linear neural networks trained with a quadratic loss.
no code implementations • 21 Feb 2024 • Amitay Bar, Rotem Mulayoff, Tomer Michaeli, Ronen Talmon
Langevin dynamics (LD) is widely used for sampling from distributions and for optimization.
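As a concrete illustration of the sampling use mentioned here, below is a minimal sketch of the unadjusted Langevin algorithm for a standard normal target; the potential, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch of the unadjusted Langevin algorithm (ULA) for sampling from a
# target density p(x) ~ exp(-U(x)); here U(x) = x^2/2, i.e. a standard normal.

def grad_U(x):
    # Gradient of the potential U(x) = x^2 / 2
    return x

def langevin_sample(n_steps=10000, step=0.01, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # Langevin update: gradient step on U plus injected Gaussian noise
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

s = langevin_sample()
print(s.mean(), s.std())  # close to 0 and 1 for the standard normal target
```

Dropping the noise term recovers plain gradient descent, which is the optimization use of LD the snippet refers to.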
no code implementations • 15 Feb 2024 • Hila Manor, Tomer Michaeli
Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain.
no code implementations • 15 Feb 2024 • Shahar Yadin, Noam Elata, Tomer Michaeli
These approaches achieve state-of-the-art results in image, video, and audio generation.
no code implementations • 23 Jan 2024 • Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, Yuanzhen Li, Michael Rubinstein, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel, Inbar Mosseri
We introduce Lumiere -- a text-to-video diffusion model designed for synthesizing videos that portray realistic, diverse and coherent motion -- a pivotal challenge in video synthesis.
Ranked #6 on Text-to-Video Generation on UCF-101
no code implementations • 12 Dec 2023 • Omer Yair, Elias Nehme, Tomer Michaeli
In ill-posed inverse problems, it is commonly desirable to obtain insight into the full spectrum of plausible solutions, rather than extracting only a single reconstruction.
no code implementations • 14 Nov 2023 • Guy Ohayon, Tomer Michaeli, Michael Elad
We study the behavior of deterministic methods for solving inverse problems in imaging.
no code implementations • 24 Oct 2023 • Noa Cohen, Hila Manor, Yuval Bahat, Tomer Michaeli
To accommodate this, many works generate a diverse set of outputs by attempting to randomly sample from the posterior distribution of natural images given the degraded input.
1 code implementation • 24 Sep 2023 • Hila Manor, Tomer Michaeli
Denoisers play a central role in many applications, from noise suppression in low-grade imaging sensors, to empowering score-based generative models.
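The link between denoisers and score-based generative models alluded to here is Tweedie's formula: for noisy observations x + sigma*n, the MMSE denoiser D satisfies grad log p_sigma(y) = (D(y) - y) / sigma^2. The sketch below verifies this for a 1-D Gaussian prior, where both sides have closed forms; the setup is illustrative and not from the paper.

```python
import numpy as np

# Tweedie's formula for a N(0,1) prior under additive Gaussian noise of
# std sigma: the score of the noisy marginal equals (D(y) - y) / sigma^2,
# where D is the MMSE denoiser.

sigma = 0.5
y = np.linspace(-3, 3, 61)  # grid of noisy observations

# MMSE denoiser for a N(0,1) prior under Gaussian noise of std sigma
def mmse_denoiser(y, sigma):
    return y / (1.0 + sigma**2)

# Exact score of the noisy marginal p_sigma = N(0, 1 + sigma^2)
score_exact = -y / (1.0 + sigma**2)

# Score recovered from the denoiser via Tweedie's formula
score_from_denoiser = (mmse_denoiser(y, sigma) - y) / sigma**2

print(np.max(np.abs(score_exact - score_from_denoiser)))  # ~0
```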
no code implementations • 30 Jun 2023 • Mor Shpigel Nacson, Rotem Mulayoff, Greg Ongie, Tomer Michaeli, Daniel Soudry
Finally, we prove that if a function is sufficiently smooth (in a Sobolev sense) then it can be approximated arbitrarily well using shallow ReLU networks that correspond to stable solutions of gradient descent.
no code implementations • 13 Jun 2023 • Rotem Mulayoff, Tomer Michaeli
Furthermore, we show that SGD's stability threshold is equivalent to that of a mixture process which takes, in each iteration, a full-batch gradient step with some probability.
1 code implementation • 30 May 2023 • Noam Elata, Bahjat Kawar, Tomer Michaeli, Michael Elad
Diffusion models are the current state-of-the-art in image generation, synthesizing high-quality images by breaking down the generation process into many fine-grained denoising steps.
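The fine-grained denoising steps mentioned here follow the standard DDPM reverse update (Ho et al., 2020), stated below for context rather than as this paper's contribution:

```latex
% Standard DDPM reverse (denoising) step, with noise-prediction network
% \epsilon_\theta and per-step noise scale \sigma_t
x_{t-1} = \frac{1}{\sqrt{\alpha_t}}
          \left( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,
                 \epsilon_\theta(x_t, t) \right) + \sigma_t z,
\qquad z \sim \mathcal{N}(0, I).
```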
1 code implementation • 22 May 2023 • Bahjat Kawar, Noam Elata, Tomer Michaeli, Michael Elad
Diffusion models have demonstrated impressive results in both data generation and downstream tasks such as inverse problems, text-based editing, classification, and more.
2 code implementations • 12 Apr 2023 • Inbar Huberman-Spiegelglas, Vladimir Kulikov, Tomer Michaeli
However, this native noise space does not possess a convenient structure, and is thus challenging to work with in editing tasks.
1 code implementation • 20 Mar 2023 • René Haas, Inbar Huberman-Spiegelglas, Rotem Mulayoff, Tomer Michaeli
Recently, a semantic latent space for DDMs, coined "$h$-space", was shown to facilitate semantic image editing in a way reminiscent of GANs.
1 code implementation • CVPR 2023 • Hagay Michaeli, Tomer Michaeli, Daniel Soudry
Although CNNs are believed to be invariant to translations, recent works have shown this is not the case, due to aliasing effects that stem from downsampling layers.
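A minimal sketch of the aliasing effect described here, and of the standard low-pass-filtering remedy (this demo is illustrative and is not the paper's method): plain strided subsampling changes drastically under a one-sample shift of the input, while blurring before subsampling largely suppresses that sensitivity.

```python
import numpy as np

# Demo: strided downsampling is not shift-invariant due to aliasing;
# a low-pass (blur) filter before subsampling mitigates the effect.

def subsample(x):
    return x[::2]

def blur(x):
    # Simple [1, 2, 1] / 4 binomial low-pass filter with edge padding
    xp = np.pad(x, 1, mode="edge")
    return (xp[:-2] + 2 * xp[1:-1] + xp[2:]) / 4.0

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
x_shift = np.roll(x, 1)  # input shifted by one sample

# Plain strided subsampling: outputs for x and its shift differ a lot
d_plain = np.abs(subsample(x) - subsample(x_shift)).mean()

# Blur-then-subsample: outputs change far less under the shift
d_blur = np.abs(subsample(blur(x)) - subsample(blur(x_shift))).mean()

print(d_plain, d_blur)
```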
1 code implementation • 18 Dec 2022 • Noa Alkobi, Tamar Rott Shaham, Tomer Michaeli
Image completion is widely used in photo restoration and editing applications, e.g., for object removal.
no code implementations • 3 Dec 2022 • Idan Kligvasser, Tamar Rott Shaham, Noa Alkobi, Tomer Michaeli
Training a generative model on a single image has drawn significant attention in recent years.
1 code implementation • 29 Nov 2022 • Vladimir Kulikov, Shahar Yadin, Matan Kleiner, Tomer Michaeli
Here, we introduce a framework for training a DDM on a single image.
no code implementations • 16 Nov 2022 • Guy Ohayon, Theo Adrai, Michael Elad, Tomer Michaeli
Stochastic restoration algorithms make it possible to explore the space of solutions that correspond to the degraded input.
no code implementations • 6 Feb 2022 • Nurit Spingarn Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaakov, Tomer Michaeli
Power consumption is a major obstacle in the deployment of deep neural networks (DNNs) on end devices.
no code implementations • NeurIPS 2021 • Idan Kligvasser, Tamar Shaham, Yuval Bahat, Tomer Michaeli
Features extracted from deep layers of classification networks are widely used as image descriptors.
no code implementations • NeurIPS 2021 • Rotem Mulayoff, Tomer Michaeli, Daniel Soudry
First, we extend the existing knowledge on minima stability to non-differentiable minima, which are common in ReLU nets.
no code implementations • 29 Sep 2021 • Nurit Spingarn, Elad Hoffer, Ron Banner, Hilla Ben Yaacov, Tomer Michaeli
Power consumption is a major obstacle in the deployment of deep neural networks (DNNs) on end devices.
no code implementations • NeurIPS 2021 • Dror Freirich, Tomer Michaeli, Ron Meir
In this paper, we derive a closed-form expression for this distortion-perception (DP) function for the mean squared error (MSE) distortion and the Wasserstein-2 perception index.
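The DP function referred to here can be stated as the following constrained problem (its general definition, given for context; the paper's contribution is a closed-form expression for it):

```latex
% Distortion-perception function with MSE distortion and Wasserstein-2
% perception index, for an estimator \hat{X} of X from measurements Y
D(P) = \min_{p_{\hat{X} \mid Y}} \; \mathbb{E}\,\|X - \hat{X}\|^2
       \quad \text{s.t.} \quad W_2\!\left(p_X, p_{\hat{X}}\right) \le P
```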
1 code implementation • NeurIPS 2021 • Gal Greshler, Tamar Rott Shaham, Tomer Michaeli
Models for audio generation are typically trained on hours of recordings.
no code implementations • 3 Mar 2021 • Idan Kligvasser, Tomer Michaeli
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training.
1 code implementation • ICLR 2021 • Nurit Spingarn-Eliezer, Ron Banner, Tomer Michaeli
However, all existing techniques rely on an optimization procedure to expose those directions, and offer no control over the degree of allowed interaction between different transformations.
no code implementations • ICLR 2021 • Omer Yair, Tomer Michaeli
In this paper, we present an alternative derivation of CD that does not require any approximation and sheds new light on the objective that is actually being optimized by the algorithm.
1 code implementation • CVPR 2021 • Tamar Rott Shaham, Michael Gharbi, Richard Zhang, Eli Shechtman, Tomer Michaeli
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
no code implementations • 29 Sep 2020 • Elias Nehme, Boris Ferdman, Lucien E. Weiss, Tal Naor, Daniel Freedman, Tomer Michaeli, Yoav Shechtman
A long-standing challenge in multiple-particle tracking is the accurate and precise 3D localization of individual particles in close proximity.
no code implementations • CVPR 2021 • Yuval Bahat, Tomer Michaeli
In spite of this fact, existing decompression algorithms typically produce only a single output, and do not allow the viewer to explore the set of images that map to the given compressed code.
no code implementations • 11 Feb 2020 • Rotem Mulayoff, Tomer Michaeli
In this paper, we characterize the flat minima in linear neural networks trained with a quadratic loss.
2 code implementations • CVPR 2020 • Yuval Bahat, Tomer Michaeli
Single image super resolution (SR) has seen major performance leaps in recent years.
1 code implementation • 21 Jun 2019 • Elias Nehme, Daniel Freedman, Racheli Gordon, Boris Ferdman, Lucien E. Weiss, Onit Alalouf, Reut Orange, Tomer Michaeli, Yoav Shechtman
Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g., fluorescent molecules) are determined at high precision from their images.
44 code implementations • ICCV 2019 • Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image.
no code implementations • ICLR 2019 • Adar Elad, Doron Haviv, Yochai Blau, Tomer Michaeli
The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization).
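The trade-off described in this snippet is usually written as the information bottleneck Lagrangian, reproduced here for context:

```latex
% Information bottleneck objective: compress the input X into a
% representation T while preserving information about the label Y
% (\beta controls the compression/prediction trade-off)
\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)
```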
no code implementations • 23 Jan 2019 • Yochai Blau, Tomer Michaeli
Lossy compression algorithms are typically designed and analyzed through the lens of Shannon's rate-distortion theory, where the goal is to achieve the lowest possible distortion (e.g., low MSE or high SSIM) at any given bit rate.
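The rate-distortion theory this snippet refers to centers on Shannon's rate-distortion function, stated below for context:

```latex
% Shannon's rate-distortion function: the fewest bits per symbol
% achievable at expected distortion at most D
R(D) = \min_{p(\hat{x} \mid x)\,:\;\mathbb{E}[d(X, \hat{X})] \le D}
       I(X; \hat{X})
```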
1 code implementation • 27 Nov 2018 • Idan Kligvasser, Tomer Michaeli
For example, on ImageNet, our DxNet outperforms a ReLU-based DenseNet having 30% more parameters and achieves state-of-the-art results for this budget of parameters.
8 code implementations • 20 Sep 2018 • Yochai Blau, Roey Mechrez, Radu Timofte, Tomer Michaeli, Lihi Zelnik-Manor
This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018.
Ranked #33 on Video Quality Assessment on MSU SR-QA Dataset
no code implementations • ICML 2018 • Andrey Zhitnikov, Rotem Mulayoff, Tomer Michaeli
In many areas of neuroscience and biological data analysis, it is desired to reveal common patterns among a group of subjects.
no code implementations • CVPR 2018 • Tal Tlusty, Tomer Michaeli, Tali Dekel, Lihi Zelnik-Manor
We present an algorithm for modifying small non-local variations between repeating structures and patterns in multiple images of the same scene.
no code implementations • CVPR 2018 • Noam Yair, Tomer Michaeli
A prominent property of natural images is that groups of similar patches within them tend to lie on low-dimensional subspaces.
no code implementations • CVPR 2018 • Tamar Rott Shaham, Tomer Michaeli
Lossy compression algorithms aim to compactly encode images in a way that enables restoring them with minimal error.
3 code implementations • 29 Jan 2018 • Elias Nehme, Lucien E. Weiss, Tomer Michaeli, Yoav Shechtman
We present an ultra-fast, precise, parameter-free method, which we term Deep-STORM, for obtaining super-resolution images from stochastically-blinking emitters, such as fluorescent molecules used for localization microscopy.
no code implementations • ICLR 2018 • Baruch Epstein, Ron Meir, Tomer Michaeli
Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion.
1 code implementation • CVPR 2018 • Idan Kligvasser, Tamar Rott Shaham, Tomer Michaeli
However, state-of-the-art results are typically achieved by very deep networks, which can reach tens of layers with tens of millions of parameters.
1 code implementation • CVPR 2018 • Yochai Blau, Tomer Michaeli
Image restoration algorithms are typically evaluated by some distortion measure (e.g., PSNR, SSIM, IFC, VIF) or by human opinion scores that quantify perceived perceptual quality.
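The tension between these two evaluation criteria is commonly formalized as the perception-distortion function, given here for context with a generic distortion measure $\Delta$ and a statistical distance $d$ between distributions:

```latex
% Perception-distortion trade-off: the best attainable distortion under a
% constraint on the distance between the output and natural-image
% distributions
D(P) = \min_{p_{\hat{X} \mid Y}} \; \mathbb{E}\!\left[\Delta(X, \hat{X})\right]
       \quad \text{s.t.} \quad d\!\left(p_X, p_{\hat{X}}\right) \le P
```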
no code implementations • 30 May 2017 • Baruch Epstein, Ron Meir, Tomer Michaeli
Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion.
no code implementations • 11 Dec 2016 • Yochai Blau, Tomer Michaeli
Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints.
no code implementations • 16 Nov 2015 • Tomer Michaeli, Weiran Wang, Karen Livescu
Several nonlinear extensions of the original linear CCA have been proposed, including kernel and deep neural network methods.
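For context, here is a minimal sketch of the original linear CCA that these kernel and deep variants extend, solved via the standard whitened-cross-covariance SVD; the toy data and regularization constant are illustrative choices, not from the paper.

```python
import numpy as np

# Classical linear CCA: find directions u, v maximizing corr(Xu, Yv),
# via an SVD of the whitened cross-covariance matrix.

def linear_cca(X, Y, reg=1e-6):
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whitening transforms from the Cholesky factors of each covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # SVD of the whitened cross-covariance; singular values are the
    # canonical correlations, in descending order
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s

rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))               # shared latent signal
X = np.hstack([z, rng.standard_normal((500, 2))])
Y = np.hstack([z, rng.standard_normal((500, 2))])
A, B, corrs = linear_cca(X, Y)
print(corrs[0])  # top canonical correlation is close to 1
```

Kernel CCA replaces the linear maps with feature-space projections, and deep CCA replaces them with neural networks trained to maximize the same correlation objective.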