25 Mar 2024 • Ankit Pensia, Varun Jog, Po-Ling Loh
In this paper, we derive a formula that characterizes the sample complexity (up to multiplicative constants that are independent of $p$, $q$, and all error parameters) for: (i) all $0 \le \alpha, \beta \le 1/8$ in the prior-free setting; and (ii) all $\delta \le \alpha/4$ in the Bayesian setting.
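For context (this is the classical benchmark, not the formula derived in the paper): without constraints, the sample complexity of simple binary hypothesis testing at small constant error is of order $1/H^2(p, q)$, the inverse squared Hellinger distance. A minimal sketch of this proxy for discrete $p$ and $q$:

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance H^2(p, q) = 1 - sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.sum(np.sqrt(p * q))

# Classical fact: ~ 1 / H^2(p, q) samples suffice (and are needed) for
# unconstrained simple binary hypothesis testing at small constant error.
p, q = [0.5, 0.5], [0.6, 0.4]
print(f"H^2 = {hellinger_sq(p, q):.4f}, sample-complexity proxy ~ {1 / hellinger_sq(p, q):.0f}")
```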
9 Jan 2023 • Ankit Pensia, Amir R. Asadi, Varun Jog, Po-Ling Loh
For the sample complexity of simple hypothesis testing under pure LDP constraints, we establish instance-optimal bounds for distributions with binary support; minimax-optimal bounds for general distributions; and (approximately) instance-optimal, computationally efficient algorithms for general distributions.
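As a concrete instance of a pure-LDP mechanism for binary support, the classical randomized-response channel is sketched below; this is a standard mechanism, not necessarily the instance-optimal algorithm from the paper.

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """epsilon-LDP randomized response: report the true bit with
    probability e^eps / (1 + e^eps), otherwise flip it."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < keep else 1 - bit

rng = np.random.default_rng(0)
eps = 1.0
# Privatize n Bernoulli(0.7) samples; a tester only ever sees these reports.
reports = [randomized_response(b, eps, rng) for b in rng.binomial(1, 0.7, size=1000)]
```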
6 Jun 2022 • Ankit Pensia, Varun Jog, Po-Ling Loh
We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight.
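For illustration, a textbook one-bit protocol compresses each sample to the indicator of the Scheffé set $\{x : p(x) > q(x)\}$; this is a standard scheme, not the (near-)optimal quantizer analyzed in the paper. The toy distributions are assumptions for the example.

```python
import numpy as np

def one_bit_protocol(samples, p, q):
    """Each user sends 1 bit: whether their sample lands in the Scheffe
    set {x : p(x) > q(x)}. The server compares the empirical bit
    frequency to its expectation under p and under q."""
    scheffe = p > q                      # boolean mask over the support
    bits = scheffe[samples]              # 1 bit per sample
    mean_p, mean_q = p[scheffe].sum(), q[scheffe].sum()
    return "p" if abs(bits.mean() - mean_p) < abs(bits.mean() - mean_q) else "q"

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
rng = np.random.default_rng(1)
print(one_bit_protocol(rng.choice(3, size=500, p=p), p, q))
```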
NeurIPS 2021 • Muni Sreenivas Pydi, Varun Jog
Adversarial risk quantifies the performance of classifiers on adversarially perturbed data.
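Concretely, for a classifier $f$, perturbation budget $\varepsilon$, and $0$-$1$ loss, a standard definition (stated here for context) is:

```latex
\[
R_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(X, Y)}\Bigl[\, \sup_{\lVert x' - X \rVert \le \varepsilon} \mathbf{1}\{ f(x') \ne Y \} \,\Bigr].
\]
```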
27 Sep 2020 • Ankit Pensia, Varun Jog, Po-Ling Loh
We study the problem of linear regression where both covariates and responses are potentially (i) heavy-tailed and (ii) adversarially contaminated.
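As a classical point of comparison (not the estimator proposed in the paper), Huber regression caps the influence of outlying responses, though it does not by itself handle contaminated covariates. A minimal sketch using scikit-learn, with synthetic data as an assumption:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)  # heavy-tailed noise
y[:10] += 50.0                                                        # corrupted responses

# Huber loss is quadratic near zero and linear in the tails, which
# bounds the influence of outlying responses on the fit.
model = HuberRegressor(epsilon=1.35).fit(X, y)
print(model.coef_)
```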
16 Jun 2020 • Varun Jog
The $r$-parallel set of a measurable set $A \subseteq \mathbb R^d$ is the set of all points whose distance from $A$ is at most $r$.
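In symbols:

```latex
\[
A_r \;=\; \bigl\{\, x \in \mathbb{R}^d : \operatorname{dist}(x, A) \le r \,\bigr\},
\qquad
\operatorname{dist}(x, A) \;=\; \inf_{a \in A} \lVert x - a \rVert .
\]
```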
ICML 2020 • Muni Sreenivas Pydi, Varun Jog
We show that the optimal adversarial risk for binary classification with 0-1 loss is determined by an optimal transport cost between the probability distributions of the two classes.
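The sketch below computes a discrete transport cost of this flavor, with a thresholded $0$-$1$ cost $c(x, x') = \mathbf{1}\{|x - x'| > 2\varepsilon\}$; the support points, marginals, and exact form of the cost are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ot_binary_cost(p0, p1, points, eps):
    """Optimal transport between discrete distributions p0, p1 on `points`
    with 0-1 cost c(x, x') = 1{|x - x'| > 2*eps}, solved as a linear program."""
    n = len(points)
    cost = (np.abs(points[:, None] - points[None, :]) > 2 * eps).astype(float)
    # Marginal constraints: rows of the coupling sum to p0, columns to p1.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row-i marginal
        A_eq[n + i, i::n] = 1.0            # column-i marginal
    b_eq = np.concatenate([p0, p1])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

points = np.array([0.0, 1.0, 2.0, 3.0])
p0 = np.array([0.4, 0.4, 0.1, 0.1])
p1 = np.array([0.1, 0.1, 0.4, 0.4])
print(ot_binary_cost(p0, p1, points, eps=0.5))
```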
15 Oct 2019 • Ankit Pensia, Varun Jog, Po-Ling Loh
We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier that is more robust to small perturbations in the input space.
1 Aug 2019 • Zheng Liu, Jinnian Zhang, Varun Jog, Po-Ling Loh, Alan B McMillan
Materials and Methods: In this retrospective study, the accuracy of brain tumor segmentation was evaluated in subjects with low- and high-grade gliomas.
6 Jul 2019 • Ankit Pensia, Varun Jog, Po-Ling Loh
In the multivariate setting, we generalize our theory to mean estimation for mixtures of radially symmetric distributions, and derive minimax lower bounds on the expected error of any estimator that is agnostic to the scales of individual data points.
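For comparison, a classical robust estimator of multivariate location is the geometric median, computed below via the Weiszfeld iteration; it is a standard baseline, not the scale-agnostic estimator analyzed in the paper.

```python
import numpy as np

def geometric_median(X, n_iter=100, tol=1e-8):
    """Weiszfeld iteration: iteratively reweighted mean with weights
    inversely proportional to the distance from the current estimate."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), tol)  # avoid /0
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * np.array([1.0, 10.0])  # heterogeneous scales
print(geometric_median(X))
```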
13 Feb 2018 • Muni Sreenivas Pydi, Varun Jog, Po-Ling Loh
We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function.
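The unbiased-walk baseline referred to above is simple to state; a minimal sketch on a toy graph (the graph and step count are illustrative, and the paper's biased walks are not reproduced here):

```python
import random

def unbiased_random_walk(adj, start, steps, seed=0):
    """Baseline walker: at each step, move to a uniformly random neighbor."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(adj[path[-1]]))
    return path

# Toy graph: a 4-cycle given as an adjacency list.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(unbiased_random_walk(adj, start=0, steps=10))
```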
12 Jan 2018 • Ankit Pensia, Varun Jog, Po-Ling Loh
In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data.
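In the standard formulation, for a training set $S = (Z_1, \dots, Z_n) \sim \mu^n$ and an algorithm output $W = A(S)$, the expected generalization error is:

```latex
\[
\mathrm{gen}(\mu, A) \;=\; \mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr],
\qquad
L_\mu(w) = \mathbb{E}_{Z \sim \mu}[\ell(w, Z)],
\quad
L_S(w) = \frac{1}{n}\sum_{i=1}^n \ell(w, Z_i).
\]
```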
NeurIPS 2016 • Justin T. Khim, Varun Jog, Po-Ling Loh
We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results.
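For reference, a minimal Monte Carlo estimator of expected spread under the independent cascade model mentioned above; the toy graph and activation probabilities are assumptions for illustration.

```python
import random

def independent_cascade(adj_p, seeds, rng):
    """One cascade: each newly activated node u tries once to activate each
    out-neighbor v, succeeding independently with probability p_uv."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p_uv in adj_p.get(u, []):
            if v not in active and rng.random() < p_uv:
                active.add(v)
                frontier.append(v)
    return len(active)

def expected_spread(adj_p, seeds, n_runs=2000, seed=0):
    rng = random.Random(seed)
    return sum(independent_cascade(adj_p, seeds, rng) for _ in range(n_runs)) / n_runs

# Toy directed graph: node -> [(neighbor, activation probability), ...]
adj_p = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.3)], 2: [(3, 0.3)]}
print(expected_spread(adj_p, seeds=[0]))
```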
1 Nov 2016 • Justin Khim, Varun Jog, Po-Ling Loh
We consider the problem of influence maximization in fixed networks for contagion models in an adversarial setting.
10 Jan 2015 • Varun Jog, Po-Ling Loh
We establish bounds on the KL divergence between two multivariate Gaussian distributions in terms of the Hamming distance between the edge sets of the corresponding graphical models.
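For reference, the quantity being bounded has the closed form of the KL divergence between two $d$-dimensional Gaussians:

```latex
\[
D_{\mathrm{KL}}\bigl(\mathcal{N}(\mu_0, \Sigma_0) \,\|\, \mathcal{N}(\mu_1, \Sigma_1)\bigr)
= \frac{1}{2}\Bigl(\operatorname{tr}(\Sigma_1^{-1}\Sigma_0)
+ (\mu_1 - \mu_0)^{\top}\Sigma_1^{-1}(\mu_1 - \mu_0)
- d + \ln\frac{\det\Sigma_1}{\det\Sigma_0}\Bigr).
\]
```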