no code implementations • 13 Feb 2024 • Suzanna Parkinson, Greg Ongie, Rebecca Willett, Ohad Shamir, Nathan Srebro
We also show that a similar statement in the reverse direction is not possible: any function learnable with polynomial sample complexity by a norm-controlled depth-2 ReLU network with infinite width is also learnable with polynomial sample complexity by a norm-controlled depth-3 ReLU network.
no code implementations • NeurIPS 2023 • Chen Zeno, Greg Ongie, Yaniv Blumenfeld, Nir Weinberger, Daniel Soudry
Neural network (NN) denoisers are an essential building block in many common tasks, ranging from image reconstruction to image generation.
no code implementations • 30 Jun 2023 • Mor Shpigel Nacson, Rotem Mulayoff, Greg Ongie, Tomer Michaeli, Daniel Soudry
Finally, we prove that if a function is sufficiently smooth (in a Sobolev sense) then it can be approximated arbitrarily well using shallow ReLU networks that correspond to stable solutions of gradient descent.
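"Stability" here is the standard linear (Hessian-based) notion, worth making explicit: gradient descent $\theta_{t+1} = \theta_t - \eta \nabla L(\theta_t)$ with step size $\eta$ remains near a twice-differentiable minimum $\theta^*$ only if

$$\lambda_{\max}\big(\nabla^2 L(\theta^*)\big) \le \frac{2}{\eta},$$

so the approximation result says this sharpness cap does not cost expressivity: sufficiently Sobolev-smooth targets still admit arbitrarily accurate approximants among such flat minima.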
no code implementations • 24 May 2023 • Suzanna Parkinson, Greg Ongie, Rebecca Willett
This paper explores the implicit bias of overparameterized neural networks of depth greater than two layers.
no code implementations • 2 Feb 2022 • Greg Ongie, Rebecca Willett
This paper explores the implicit bias of overparameterized neural networks of depth greater than two layers.
no code implementations • ICLR 2020 • Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro
In this paper, we characterize the norm required to realize a function $f:\mathbb{R}^d\rightarrow\mathbb{R}$ as a single hidden-layer ReLU network with an unbounded number of units (infinite width), but where the Euclidean norm of the weights is bounded, including precisely characterizing which functions can be realized with finite norm.
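Concretely, the infinite-width objects in question are integrals against a signed measure over neurons; a sketch of the setup (sign conventions may differ from the paper's):

$$f(x) = \int_{\mathbb{S}^{d-1}\times\mathbb{R}} [\langle w, x\rangle - b]_+ \, d\alpha(w,b) + \langle v, x\rangle + c,$$

and since a ReLU unit $a[\langle w,x\rangle-b]_+$ can be rescaled without changing the function it computes, controlling $\tfrac{1}{2}\sum_i(a_i^2+\|w_i\|^2)$ is equivalent (by AM-GM) to controlling $\sum_i |a_i|\,\|w_i\|$, which in the infinite-width limit becomes the total-variation norm $\|\alpha\|_{TV}$ of the measure.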
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Greg Ongie, Davis Gilton, Rebecca Willett
Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging from training data, and that the learned solvers can outperform more traditional regularized least squares solutions.
2 code implementations • 13 Jan 2019 • Davis Gilton, Greg Ongie, Rebecca Willett
We present an end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, which we call a Neumann network.
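The driving identity is the Neumann series $(A^\top A)^{-1} = \eta \sum_{j\ge 0} (I - \eta A^\top A)^j$, valid when the spectral radius of $I - \eta A^\top A$ is below one. Below is a minimal NumPy sketch of the truncated, purely linear series; `neumann_estimate` is an illustrative name, and the actual Neumann network augments each term with a learned component:

```python
import numpy as np

def neumann_estimate(A, y, eta, B):
    """Truncated Neumann series for the least-squares solution
    x = (A^T A)^{-1} A^T y, accumulated term by term."""
    z = eta * A.T @ y                    # j = 0 term of the series
    x = z.copy()
    for _ in range(B):
        z = z - eta * (A.T @ (A @ z))    # apply (I - eta * A^T A)
        x = x + z                        # add the next series term
    return x

# Sanity check against the pseudoinverse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
y = A @ rng.standard_normal(20)
eta = 1.0 / np.linalg.norm(A, 2) ** 2    # guarantees convergence
x_hat = neumann_estimate(A, y, eta, B=500)
print(np.linalg.norm(x_hat - np.linalg.pinv(A) @ y))  # ~ 0
```

Truncating at $B$ terms fixes the depth of the unrolled architecture; learning a correction inside each term end-to-end is what turns this recursion into a Neumann network.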
no code implementations • 26 Apr 2018 • Greg Ongie, Daniel Pimentel-Alarcón, Laura Balzano, Rebecca Willett, Robert D. Nowak
This approach will succeed in many cases where traditional LRMC is guaranteed to fail because the data are low-rank in the tensorized representation but not in the original representation.
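The "tensorized representation" can be pictured with degree-2 monomial features: data that is full rank in its original coordinates becomes rank-deficient after the lift whenever it satisfies a polynomial equation. A toy NumPy sketch (illustrative, not the paper's algorithm):

```python
import numpy as np

def monomial_lift(X):
    """Map each column of a 2 x n matrix to its degree-<=2 monomials."""
    x, y = X
    return np.vstack([np.ones_like(x), x, y, x**2, x * y, y**2])

# Points on the unit circle are full rank in R^2 ...
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
X = np.vstack([np.cos(theta), np.sin(theta)])
print(np.linalg.matrix_rank(X))                    # 2 (full rank)

# ... but the relation x^2 + y^2 - 1 = 0 makes the lift rank-deficient.
L = monomial_lift(X)
print(np.linalg.matrix_rank(L), "of", L.shape[0])  # 5 of 6
```

Completion machinery applied in the lifted space can therefore exploit structure that LRMC in the original coordinates cannot see.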
1 code implementation • ICML 2017 • Greg Ongie, Rebecca Willett, Robert D. Nowak, Laura Balzano
We consider a generalization of low-rank matrix completion to the case where the data belongs to an algebraic variety, i.e., each data point is a solution to a system of polynomial equations.
3 code implementations • 23 Sep 2016 • Greg Ongie, Mathews Jacob
Fourier domain structured low-rank matrix priors are emerging as powerful alternatives to traditional image recovery methods such as total variation and wavelet regularization.
Numerical Analysis • Optimization and Control
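The prior rests on the fact that the Fourier samples of a signal with a sparse (annihilatable) derivative assemble into a rank-deficient structured matrix. Below is a 1-D toy sketch of that rank deficiency; the paper works with 2-D block-Toeplitz lifts, so the names and sizes here are illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz

# Fourier samples of K Diracs: f[k] = sum_j c_j * exp(-2i*pi*k*t_j).
K = 3
t = np.array([0.12, 0.45, 0.83])       # spike locations in [0, 1)
c = np.array([1.0, -2.0, 0.5])         # spike amplitudes
k = np.arange(-15, 16)                 # 31 uniform Fourier samples
f = (c * np.exp(-2j * np.pi * np.outer(k, t))).sum(axis=1)

# Toeplitz lift: rows are sliding windows of f. A length-(K+1)
# annihilating filter exists, so the lift has rank K, not full rank.
m = 8                                  # window length > K
T = toeplitz(f[m - 1:], f[m - 1::-1])
print(T.shape, np.linalg.matrix_rank(T))   # (24, 8), rank 3
```

Recovery then penalizes the rank (or a surrogate for it) of this lifted matrix in place of a TV or wavelet norm.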
no code implementations • 1 Oct 2015 • Greg Ongie, Mathews Jacob
In the first stage, we estimate a continuous domain representation of the edge set of the image.
no code implementations • 3 Feb 2015 • Greg Ongie, Mathews Jacob
We introduce a Prony-like method to recover a continuous domain 2-D piecewise smooth image from few of its Fourier samples.
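A 1-D analogue of the Prony step makes the mechanism concrete: an annihilating filter, read off the null space of a Toeplitz system, has roots at the unknown locations. A hedged NumPy sketch (the paper's 2-D piecewise-smooth setting is substantially more involved):

```python
import numpy as np

K = 3                                   # number of Diracs
t_true = np.array([0.10, 0.40, 0.75])   # locations in [0, 1)
c_true = np.array([1.0, 2.0, -1.5])     # amplitudes
k = np.arange(2 * K + 1)                # 2K+1 Fourier samples suffice
f = (c_true * np.exp(-2j * np.pi * np.outer(k, t_true))).sum(axis=1)

# A filter h of length K+1 annihilates f: sum_m h[m] * f[k-m] = 0.
# Stack those equations and take h from the null space.
A = np.array([[f[K + i - j] for j in range(K + 1)] for i in range(K + 1)])
h = np.linalg.svd(A)[2][-1].conj()

# The filter's roots sit at exp(-2i*pi*t_j), revealing the locations.
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))
print(t_est)                            # ~ [0.10, 0.40, 0.75]
```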
no code implementations • 8 Jan 2015 • Greg Ongie, Mathews Jacob
We propose a two-stage algorithm for the super-resolution of MR images from their low-frequency k-space samples.
no code implementations • 15 May 2014 • Yasir Q. Mohsin, Greg Ongie, Mathews Jacob
This approach is enabled by the reformulation of current non-local schemes as an alternating algorithm to minimize a global criterion.
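A skeletal version of that reformulation, with illustrative names and a toy quadratic penalty standing in for the paper's patch-smoothness regularizer: alternate between (1) re-estimating non-local similarity weights from the current image and (2) solving the weighted least-squares problem those weights induce.

```python
import numpy as np

def nonlocal_alternating(b, lam, n_iters=20, h=0.5):
    """Alternating minimization of the global criterion
        min_x ||x - b||^2 + (lam/2) * sum_ij w_ij * (x_i - x_j)^2,
    where w_ij = exp(-(x_i - x_j)^2 / h^2) is re-estimated from x."""
    x = b.copy()
    n = len(b)
    for _ in range(n_iters):
        # Step 1: update similarity weights from the current estimate.
        D = x[:, None] - x[None, :]
        W = np.exp(-(D ** 2) / h ** 2)
        # Step 2: fixed weights give (I + lam * L) x = b, L the Laplacian.
        L = np.diag(W.sum(axis=1)) - W
        x = np.linalg.solve(np.eye(n) + lam * L, b)
    return x

# Denoise a noisy step signal; the non-local weights preserve the jump.
rng = np.random.default_rng(0)
b = np.r_[np.zeros(20), np.ones(20)] + 0.3 * rng.standard_normal(40)
print(np.round(nonlocal_alternating(b, lam=1.0), 2))
```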