no code implementations • ICML 2020 • Akshay Kamath, Eric Price, Sushrut Karmalkar
We provide a lower bound matching the known upper bound for compressed sensing from $L$-Lipschitz generative models $G$.
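For context, below is a minimal numpy sketch of the upper-bound side this result matches: recovering a signal in the range of a toy ReLU generative network by gradient descent on the measurement residual, in the style of Bora et al. (2017). This is not the paper's contribution (the paper proves lower bounds), and the network, sizes, and step size are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model G: R^k -> R^n, a two-layer ReLU net with fixed
# random weights (any such net is L-Lipschitz for some finite L).
k, hidden, n, m = 5, 50, 100, 40
W1 = rng.normal(size=(hidden, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, hidden)) / np.sqrt(hidden)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

# Gaussian measurements of a signal in the range of G.
A = rng.normal(size=(m, n)) / np.sqrt(m)
z_true = rng.normal(size=k)
y = A @ G(z_true) + 0.01 * rng.normal(size=m)

# Recover by gradient descent on f(z) = 0.5 * ||A G(z) - y||^2.
# The objective is nonconvex; random restarts help in practice.
z = rng.normal(size=k)
for _ in range(5000):
    u = W1 @ z
    r = A @ (W2 @ np.maximum(u, 0.0)) - y
    z -= 1e-3 * (W1.T @ ((W2.T @ (A.T @ r)) * (u > 0)))

print("relative error:", np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```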
no code implementations • 15 Mar 2024 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
Concretely, for Gaussian robust $k$-sparse mean estimation on $\mathbb{R}^d$ with corruption rate $\epsilon>0$, our algorithm has sample complexity $(k^2/\epsilon^2)\mathrm{polylog}(d/\epsilon)$, runs in sample-polynomial time, and approximates the target mean within $\ell_2$-error $O(\epsilon)$.
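For intuition about the corruption model, here is a small numpy simulation with a deliberately naive estimator: coordinate-wise medians truncated to the $k$ largest coordinates. This baseline is not the paper's algorithm and carries no optimal-error guarantee; it only illustrates the setup, with illustrative constants throughout.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, eps, N = 200, 5, 0.1, 2000

mu = np.zeros(d)
mu[:k] = 1.0                       # k-sparse target mean

# eps-corrupted Gaussian sample: an adversary replaces an eps fraction.
X = rng.normal(size=(N, d)) + mu
X[: int(eps * N)] = 10.0           # crude, far-away outliers

# Coordinate-wise median, truncated to the k largest entries in magnitude.
med = np.median(X, axis=0)
est = np.zeros(d)
top = np.argsort(-np.abs(med))[:k]
est[top] = med[top]

print("l2 error:", np.linalg.norm(est - mu))
```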
no code implementations • 16 Feb 2024 • David Jin, Sushrut Karmalkar, Harry Zhang, Luca Carlone
We investigate a variation of the 3D registration problem, named multi-model 3D registration.
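Multi-model registration must explain a scene containing several rigid motions; the classical single-model building block, which such methods typically invoke once per hypothesized model, is the Kabsch/Procrustes fit sketched below. This is a standard textbook routine rather than the paper's method, and it assumes known, noiseless correspondences.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t with R @ p + t ~ q over point pairs."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # rule out reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))      # force a proper rotation
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true

R, t = kabsch(P, Q)
print("rotation err:", np.linalg.norm(R - R_true), "translation err:", np.linalg.norm(t - t_true))
```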
no code implementations • 20 Sep 2023 • Ilias Diakonikolas, Sushrut Karmalkar, Jongho Park, Christos Tzamos
Our goal is to accurately recover a parameter vector $w$ such that the function $g(w \cdot x)$ has arbitrarily small error when compared to the true values $g(w^* \cdot x)$, rather than the noisy measurements $y$.
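For context, here is a sketch of this recovery problem under benign (zero-mean, light-tailed) noise rather than the paper's oblivious noise, using the classical GLMtron-style iteration of Kakade et al. (2011) with a sigmoid link $g$. Everything here is an illustrative baseline, not this paper's algorithm.

```python
import numpy as np

def g(u):                       # a monotone, Lipschitz link; sigmoid as an example
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(3)
n, d = 5000, 20
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
y = g(X @ w_star) + 0.05 * rng.normal(size=n)   # benign noise, unlike the paper

# GLMtron-style fixed-point iteration: w has a fixed point at w* in expectation.
w = np.zeros(d)
for _ in range(500):
    w += 0.5 * (X.T @ (y - g(X @ w))) / n

print("clean error:", np.mean((g(X @ w) - g(X @ w_star)) ** 2))
```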
no code implementations • 10 Jun 2022 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
We study the problem of list-decodable sparse mean estimation.
no code implementations • 7 Jun 2022 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance.
1 code implementation • 23 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, Eric Price
This motivates the introduction of definitions that allow algorithms to be \emph{oblivious} to the relevant groupings.
1 code implementation • 21 Jun 2021 • Ajil Jalal, Sushrut Karmalkar, Alexandros G. Dimakis, Eric Price
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors).
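One natural recovery rule in this Bayesian setting is posterior sampling. Below is a toy instance specialized to a prior with finite support, so the posterior can be computed exactly; the noise level $\sigma$ is assumed known, and the finite support sidesteps the sampling machinery a realistic prior would require.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma = 50, 10, 0.1

# Toy prior: uniform over 32 fixed atoms (the support need not be sparse).
support = rng.normal(size=(32, n))
x_true = support[7]

A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + sigma * rng.normal(size=m)

# Posterior sampling: p(x | y) proportional to prior(x) * exp(-||Ax - y||^2 / (2 sigma^2)).
resid = support @ A.T - y                       # A x - y for every atom
logp = -np.sum(resid ** 2, axis=1) / (2 * sigma ** 2)
p = np.exp(logp - logp.max())
p /= p.sum()
x_hat = support[rng.choice(len(support), p=p)]

print("recovered the true signal:", np.allclose(x_hat, x_true))
```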
no code implementations • 22 Oct 2020 • Aravind Gollakota, Sushrut Karmalkar, Adam Klivans
Generalizing a beautiful work of Malach and Shalev-Shwartz (2022) that gave tight correlational SQ (CSQ) lower bounds for learning DNF formulas, we give new proofs that lower bounds on the threshold or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning, respectively.
no code implementations • ICML 2020 • Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam Klivans
Our lower bounds hold for broad classes of activations including ReLU and sigmoid.
no code implementations • 26 May 2020 • Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi
We consider the fundamental problem of ReLU regression, where the goal is to output the best-fitting ReLU with respect to the square loss, given access to draws from some unknown distribution.
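A minimal sketch of the baseline iteration for this problem: gradient descent on the square loss in the realizable case. The paper's subject is the harder noisy setting, so this illustrates only the objective, with illustrative dimensions and step size.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 4000, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.maximum(X @ w_star, 0.0)     # realizable labels; the paper allows noise

# Gradient descent from a small random start on E[(ReLU(w.x) - y)^2] / 2.
w = 0.01 * rng.normal(size=d)
for _ in range(300):
    s = X @ w
    grad = X.T @ ((np.maximum(s, 0.0) - y) * (s > 0)) / n   # (sub)gradient
    w -= 0.5 * grad

print("square loss:", np.mean((np.maximum(X @ w, 0.0) - y) ** 2))
```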
no code implementations • 13 May 2020 • Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar
The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Akshay Kamath, Sushrut Karmalkar, Eric Price
Second, we show that generative models generalize sparsity as a representation of structure.
3 code implementations • NeurIPS 2019 • Ilias Diakonikolas, Sushrut Karmalkar, Daniel Kane, Eric Price, Alistair Stewart
Specifically, we focus on the fundamental problems of robust sparse mean estimation and robust sparse PCA.
no code implementations • NeurIPS 2019 • Surbhi Goel, Sushrut Karmalkar, Adam Klivans
Let $\mathsf{opt} < 1$ be the population loss of the best-fitting ReLU.
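For concreteness, this benchmark is standardly written as below; the unit-ball constraint on $w$ is an assumption of this note rather than a detail taken from the paper.

$$\mathsf{opt} \;=\; \min_{\|w\|_2 \le 1} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[(\mathrm{ReLU}(w\cdot x)-y)^2\big], \qquad \mathrm{ReLU}(u)=\max(u,0).$$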
no code implementations • NeurIPS 2019 • Sushrut Karmalkar, Adam R. Klivans, Pravesh K. Kothari
To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary.
no code implementations • 21 Sep 2018 • Sushrut Karmalkar, Eric Price
We present a simple and effective algorithm for the problem of \emph{sparse robust linear regression}.
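The algorithm in question is based on $\ell_1$ regression. Below is a minimal sketch of the dense, overdetermined case, cast as a linear program and solved with scipy's linprog; the paper's sparse setting additionally involves the $\ell_1$ geometry of $w$ itself, which this sketch omits. All sizes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star
bad = rng.choice(n, size=n // 5, replace=False)
y[bad] += rng.normal(scale=50.0, size=bad.size)    # sparse adversarial noise

# L1 regression as an LP: minimize sum(t) subject to -t <= Xw - y <= t.
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * d + [(0, None)] * n)
w_hat = res.x[:d]
print("parameter error:", np.linalg.norm(w_hat - w_star))
```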
no code implementations • ICLR 2018 • Amit Deshpande, Navin Goyal, Sushrut Karmalkar
We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded.
no code implementations • 10 Aug 2017 • Daniel Kane, Sushrut Karmalkar, Eric Price
We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.
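Here is a rough simulation of this observation model, paired with a median-per-bucket fit, one simple way to suppress a $\rho$ fraction of outliers. This is a simplification and not the paper's algorithm (which achieves the information-theoretic limit); the uniform buckets and least-squares refit are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
deg, sigma, rho, n = 3, 0.1, 0.2, 4000
p_true = np.array([1.0, -2.0, 0.0, 0.5])          # highest-degree coefficient first

x = rng.uniform(-1, 1, size=n)
y = np.polyval(p_true, x) + sigma * rng.normal(size=n)
bad = rng.random(n) < rho
y[bad] = rng.uniform(-100, 100, size=bad.sum())   # arbitrary adversarial outliers

# Bucket the x-axis, take the median y per bucket to suppress outliers,
# then least-squares fit a degree-`deg` polynomial through the medians.
edges = np.linspace(-1, 1, 4 * (deg + 1) + 1)
centers, medians = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (x >= lo) & (x < hi)
    if mask.any():
        centers.append((lo + hi) / 2)
        medians.append(np.median(y[mask]))
p_hat = np.polyfit(centers, medians, deg)

grid = np.linspace(-1, 1, 200)
print("max error:", np.abs(np.polyval(p_hat, grid) - np.polyval(p_true, grid)).max())
```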