no code implementations • 2 Mar 2021 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a simple covering-number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many samples necessarily has a well-controlled outer norm.
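As a rough illustration of the quantity being controlled, here is a minimal NumPy sketch that evaluates a toy two-layer ReLU network and reports its training error alongside one common notion of outer norm, taken here to be the $\ell_1$ norm of the output-layer weights (an assumption; the abstract does not spell out the exact definition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network f(x) = sum_j a_j * relu(<w_j, x>).
p, m, n = 10, 50, 200           # input dimension, hidden width, sample size
W = rng.normal(size=(m, p))     # inner weights w_j
a = rng.normal(size=m) / m      # output weights a_j
X = rng.normal(size=(n, p))     # inputs
y = rng.normal(size=n)          # labels (toy data, for illustration only)

def predict(X):
    return np.maximum(X @ W.T, 0.0) @ a

train_error = np.mean((predict(X) - y) ** 2)

# "Outer norm": taken here as the l1 norm of the output-layer weights
# (an assumption; the paper's precise definition may differ).
outer_norm = np.abs(a).sum()
print(f"training error = {train_error:.4f}, outer norm = {outer_norm:.4f}")
```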
no code implementations • 23 Mar 2020 • Matt Emschwiller, David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Thus, a message implied by our results is that parametrizing wide neural networks by the number of hidden nodes is misleading, and a more fitting measure of parametrization complexity is the number of regression coefficients associated with tensorized data.
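To make the coefficient count concrete, here is a small sketch (an illustration, not the paper's construction) that builds the degree-$\le d$ tensorized (monomial) features of data in $\mathbb{R}^p$; the number of regression coefficients is then $\binom{p+d}{d}$, independent of any hidden-layer width:

```python
import numpy as np
from itertools import combinations_with_replacement

def tensorized_features(X, degree):
    """All monomials of degree <= `degree` in the coordinates of each row of X."""
    n, p = X.shape
    cols = [np.ones(n)]  # the constant (degree-0) feature
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(p), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Phi = tensorized_features(X, degree=3)
# The parametrization complexity in this view is the number of regression
# coefficients: Phi.shape[1] == C(p + d, d) = C(7, 3) = 35, with no
# dependence on any hidden-layer width.
print(Phi.shape)  # (100, 35)
```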
no code implementations • 3 Dec 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Next, we show that initializing below this barrier is in fact easily achieved when the weights are randomly generated under relatively weak assumptions.
no code implementations • 24 Oct 2019 • David Gamarnik, Eren C. Kızıldağ, Ilias Zadik
Using a novel combination of the PSLQ integer relation detection and LLL lattice basis reduction algorithms, we propose a polynomial-time algorithm that provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-support assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$, for a large class of distributions of the random entries of $X$, even from a single measurement $(n=1)$.
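The $n=1$ claim can be illustrated with the PSLQ step alone. Below is a minimal sketch using mpmath's pslq, under the simplifying assumption that $\beta^*$ is integer-valued (the paper's mixed-support setting is more general, and its full algorithm also invokes LLL): since $\sum_i \beta^*_i x_i - y = 0$, the vector $(\beta^*_1,\dots,\beta^*_p,-1)$ is an integer relation for $(x_1,\dots,x_p,y)$, which PSLQ can detect from a single measurement.

```python
from mpmath import mp, mpf, pslq
import random

mp.dps = 60  # PSLQ needs high working precision to certify a relation

random.seed(0)
p = 5
beta_true = [3, -1, 0, 2, 5]                    # unknown integer signal (assumption)
x = [mpf(random.random()) for _ in range(p)]    # one random measurement row
y = sum(b * xi for b, xi in zip(beta_true, x))  # single measurement: y = <x, beta*>

# (beta*_1, ..., beta*_p, -1) is an integer relation for (x_1, ..., x_p, y),
# since sum_i beta*_i x_i - y = 0; PSLQ searches for exactly such a relation.
rel = pslq(x + [y], maxcoeff=10**6, maxsteps=10**4)

# PSLQ returns a primitive relation, so its last entry is +/-1 here;
# rescale so the last coefficient is -1 and read off beta*.
s = rel[-1]                       # +/-1
beta_hat = [-c * s for c in rel[:-1]]
print(beta_hat)                   # [3, -1, 0, 2, 5]
```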