Search Results for author: Varun Jog

Found 15 papers, 0 papers with code

The Sample Complexity of Simple Binary Hypothesis Testing

no code implementations 25 Mar 2024 Ankit Pensia, Varun Jog, Po-Ling Loh

In this paper, we derive a formula that characterizes the sample complexity (up to multiplicative constants that are independent of $p$, $q$, and all error parameters) for: (i) all $0 \le \alpha, \beta \le 1/8$ in the prior-free setting; and (ii) all $\delta \le \alpha/4$ in the Bayesian setting.
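
As context for this scaling, here is a minimal numerical sketch of the classical rule of thumb that the sample complexity of simple binary hypothesis testing with constant error probabilities scales as $1/h^2(p, q)$, where $h^2$ is the squared Hellinger distance (the formula in the paper refines this across all error regimes; the distributions below are arbitrary illustrative choices):

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance h^2(p,q) = 1 - sum_i sqrt(p_i q_i)."""
    return 1.0 - np.sum(np.sqrt(p * q))

# Two illustrative distributions on a 3-element alphabet.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# Classical rule of thumb for constant error probabilities:
# the sample complexity scales like 1/h^2(p,q), up to universal constants.
print(f"h^2(p,q) = {hellinger_sq(p, q):.4f}")
print(f"suggested sample-size scale: {1.0 / hellinger_sq(p, q):.1f}")
```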

Simple Binary Hypothesis Testing under Local Differential Privacy and Communication Constraints

no code implementations 9 Jan 2023 Ankit Pensia, Amir R. Asadi, Varun Jog, Po-Ling Loh

For the sample complexity of simple hypothesis testing under pure LDP constraints, we establish instance-optimal bounds for distributions with binary support; minimax-optimal bounds for general distributions; and (approximately) instance-optimal, computationally efficient algorithms for general distributions.
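
As an illustration of the pure-LDP setting, here is a toy sketch using randomized response, the canonical $\varepsilon$-LDP mechanism for binary data (the mechanism, parameters, and decision rule below are illustrative, not the paper's instance-optimal algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bits, eps):
    """eps-LDP randomized response: keep each bit w.p. e^eps/(1+e^eps)."""
    keep = np.exp(eps) / (1.0 + np.exp(eps))
    flip = rng.random(bits.shape) > keep
    return np.where(flip, 1 - bits, bits)

# Binary-support test: under H0 bits are Bern(0.3), under H1 Bern(0.5).
n, eps = 5000, 1.0
sample = rng.binomial(1, 0.3, size=n)          # ground truth: H0
noisy = randomized_response(sample, eps)

# Debias the privatized mean and compare against the midpoint 0.4.
keep = np.exp(eps) / (1.0 + np.exp(eps))
est = (noisy.mean() - (1 - keep)) / (2 * keep - 1)
print("decide", "H0" if est < 0.4 else "H1", f"(debiased mean = {est:.3f})")
```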

Communication-constrained hypothesis testing: Optimality, robustness, and reverse data processing inequalities

no code implementations 6 Jun 2022 Ankit Pensia, Varun Jog, Po-Ling Loh

We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight.
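
A toy sketch of the communication-constrained setting, where each sample is compressed to a single bit before reaching the tester (the scalar quantizer and decision rule are illustrative assumptions, not the schemes analyzed in the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy communication constraint: each sample is reduced to one bit
# via a fixed threshold before being sent to the tester.
p_mean, q_mean, n = 0.0, 0.5, 400
threshold = (p_mean + q_mean) / 2          # illustrative scalar quantizer

x = rng.normal(q_mean, 1.0, size=n)        # ground truth: H1
bits = (x > threshold).astype(int)         # 1-bit messages

# The tester only sees the bits: compare the observed bit rate
# against its expectation under each hypothesis.
rate_p = 1 - norm.cdf(threshold - p_mean)
rate_q = 1 - norm.cdf(threshold - q_mean)
decide = "H1" if abs(bits.mean() - rate_q) < abs(bits.mean() - rate_p) else "H0"
print(decide, f"(bit rate = {bits.mean():.3f})")
```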

The Many Faces of Adversarial Risk

no code implementations NeurIPS 2021 Muni Sreenivas Pydi, Varun Jog

Adversarial risk quantifies the performance of classifiers on adversarially perturbed data.
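
For reference, the usual definition: for a classifier $f$, loss $\ell$, and perturbation budget $\varepsilon$ (notation here is illustrative, not necessarily the paper's),

```latex
% Standard adversarial risk under an epsilon-bounded input perturbation
% (notation illustrative).
R_{\mathrm{adv}}(f) = \mathbb{E}_{(X,Y)}\left[ \sup_{\|\delta\| \le \varepsilon} \ell\bigl(f(X+\delta),\, Y\bigr) \right]
```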

Adversarial Robustness

Robust regression with covariate filtering: Heavy tails and adversarial contamination

no code implementations 27 Sep 2020 Ankit Pensia, Varun Jog, Po-Ling Loh

We study the problem of linear regression where both covariates and responses are potentially (i) heavy-tailed and (ii) adversarially contaminated.
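
A toy sketch of the filter-then-regress idea under this contamination model (the norm-based filter and Huber fit below are illustrative stand-ins, not the paper's algorithm):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(2)

# Toy data: heavy-tailed noise plus a few adversarially corrupted rows.
n, d = 500, 3
X = rng.normal(size=(n, d))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_t(df=2, size=n)    # heavy-tailed responses
X[:10] += 50.0                                  # corrupted covariates
y[:10] -= 100.0                                 # corrupted responses

# Crude covariate filter (illustrative, not the paper's procedure):
# drop rows whose covariate norm is an outlier, then fit a Huber M-estimator.
norms = np.linalg.norm(X, axis=1)
keep = norms < np.quantile(norms, 0.95)
fit = HuberRegressor().fit(X[keep], y[keep])
print("estimated beta:", np.round(fit.coef_, 2))
```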

regression

Reverse Euclidean and Gaussian isoperimetric inequalities for parallel sets with applications

no code implementations 16 Jun 2020 Varun Jog

The $r$-parallel set of a measurable set $A \subseteq \mathbb R^d$ is the set of all points whose distance from $A$ is at most $r$.
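
A quick Monte Carlo illustration of the $r$-parallel set $A_r = \{x : d(x, A) \le r\}$ for $A$ the unit square in $\mathbb{R}^2$; for convex $A$ in the plane, Steiner's formula gives the exact area $|A| + r \cdot \mathrm{Per}(A) + \pi r^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

def dist_to_unit_square(pts):
    """Euclidean distance from points in R^2 to the unit square [0,1]^2."""
    gap = np.maximum(np.maximum(-pts, pts - 1.0), 0.0)
    return np.linalg.norm(gap, axis=1)

r, n = 0.25, 1_000_000
box = 1.0 + 2 * r                                  # bounding box side
pts = rng.uniform(-r, 1.0 + r, size=(n, 2))        # contains A_r
vol = box**2 * np.mean(dist_to_unit_square(pts) <= r)

print(f"Monte Carlo |A_r| ~ {vol:.4f}")
print(f"Steiner formula    {1 + 4*r + np.pi*r**2:.4f}")
```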

BIG-bench Machine Learning, Two-sample testing

Adversarial Risk via Optimal Transport and Optimal Couplings

no code implementations ICML 2020 Muni Sreenivas Pydi, Varun Jog

We show that the optimal adversarial risk for binary classification with 0-1 loss is determined by an optimal transport cost between the probability distributions of the two classes.
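
A toy numerical sketch of this connection: for equal-size empirical samples, optimal transport under a threshold ground cost reduces to an assignment problem (the specific cost $c(x, y) = \mathbf{1}\{|x - y| > 2\varepsilon\}$ and the data are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)

# Empirical samples from the two classes (illustrative 1-D Gaussians).
n, eps = 200, 0.5
x0 = rng.normal(-1.0, 1.0, size=n)
x1 = rng.normal(+1.0, 1.0, size=n)

# Threshold ground cost of the kind appearing in this line of work:
# moving mass is free if the points are within 2*eps, and costs 1 otherwise.
C = (np.abs(x0[:, None] - x1[None, :]) > 2 * eps).astype(float)

# Equal-weight empirical OT reduces to an assignment problem.
rows, cols = linear_sum_assignment(C)
ot_cost = C[rows, cols].mean()
print(f"OT cost under the threshold ground cost: {ot_cost:.3f}")
```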

Binary Classification

Extracting robust and accurate features via a robust information bottleneck

no code implementations 15 Oct 2019 Ankit Pensia, Varun Jog, Po-Ling Loh

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space.

Robustifying deep networks for image segmentation

no code implementations 1 Aug 2019 Zheng Liu, Jinnian Zhang, Varun Jog, Po-Ling Loh, Alan B McMillan

Materials and Methods: In this retrospective study, the accuracy of brain tumor segmentation was evaluated in subjects with low- and high-grade gliomas.

Brain Tumor Segmentation, Data Augmentation +3

Estimating location parameters in entangled single-sample distributions

no code implementations 6 Jul 2019 Ankit Pensia, Varun Jog, Po-Ling Loh

In the multivariate setting, we generalize our theory to mean estimation for mixtures of radially symmetric distributions, and derive minimax lower bounds on the expected error of any estimator that is agnostic to the scales of individual data points.
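
A toy illustration of the entangled single-sample setting, where every observation has its own unknown scale (the median here is a simple scale-agnostic baseline, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(6)

# Entangled single-sample setting: one observation per distribution,
# all centered at the same location mu but with wildly different scales.
mu, n = 3.0, 1001
sigmas = np.exp(rng.uniform(0.0, 8.0, size=n))   # unknown per-point scales
x = mu + sigmas * rng.normal(size=n)

# The mean is wrecked by the high-variance points; the median, which is
# agnostic to the individual scales, still concentrates near mu.
print(f"mean   = {x.mean():.2f}")
print(f"median = {np.median(x):.2f}")
```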

regression

Graph-Based Ascent Algorithms for Function Maximization

no code implementations 13 Feb 2018 Muni Sreenivas Pydi, Varun Jog, Po-Ling Loh

We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function.
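
A minimal sketch contrasting a graph-ascent walk with an unbiased random walk on a grid graph (the graph, objective, and step rule are illustrative, not the algorithms analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
side = 30

def neighbors(v):
    """4-neighbors of node v=(i,j) on a side x side grid graph."""
    i, j = v
    cand = [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
    return [(a, b) for a, b in cand if 0 <= a < side and 0 <= b < side]

def f(v):
    """Smooth graph function, peaked at the centre of the grid."""
    i, j = v
    return -((i - side//2)**2 + (j - side//2)**2)

def walk(biased, steps=200):
    v = (0, 0)
    for _ in range(steps):
        nbrs = neighbors(v)
        if biased:   # ascent: move to the best neighbor if it improves f
            best = max(nbrs, key=f)
            v = best if f(best) > f(v) else v
        else:        # unbiased random walk
            v = nbrs[rng.integers(len(nbrs))]
    return f(v)

print("ascent walk :", walk(biased=True))
print("random walk :", walk(biased=False))
```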

Generalization Error Bounds for Noisy, Iterative Algorithms

no code implementations 12 Jan 2018 Ankit Pensia, Varun Jog, Po-Ling Loh

In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data.
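
A canonical example covered by such bounds is stochastic gradient Langevin dynamics (SGLD); a minimal sketch on a one-dimensional least-squares problem (step size and noise scale are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)

# Training data for a 1-D least-squares problem.
x = rng.normal(size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)

# Stochastic gradient Langevin dynamics: SGD plus Gaussian noise each step.
w, lr, noise = 0.0, 0.05, 0.01
for t in range(500):
    i = rng.integers(len(x))                        # minibatch of size 1
    grad = 2 * (w * x[i] - y[i]) * x[i]             # per-sample gradient
    w += -lr * grad + np.sqrt(2 * lr) * noise * rng.normal()

print(f"SGLD iterate after 500 steps: w = {w:.3f} (target 2.0)")
```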

Learning Theory

Computing and maximizing influence in linear threshold and triggering models

no code implementations NeurIPS 2016 Justin T. Khim, Varun Jog, Po-Ling Loh

We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results.
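
A toy Monte Carlo sketch of the quantity being bounded: the expected number of activated nodes in an independent cascade (graph, probabilities, and seed set are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

# Small random directed graph with a uniform activation probability.
n, p_edge, p_act = 50, 0.1, 0.2
adj = rng.random((n, n)) < p_edge
np.fill_diagonal(adj, False)

def cascade(seeds):
    """One independent-cascade simulation; returns number of active nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.flatnonzero(adj[u]):
                if v not in active and rng.random() < p_act:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

# Monte Carlo estimate of the influence of a fixed seed set.
seeds = [0, 1, 2]
est = np.mean([cascade(seeds) for _ in range(2000)])
print(f"estimated influence of {seeds}: {est:.1f} nodes")
```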

Adversarial Influence Maximization

no code implementations 1 Nov 2016 Justin Khim, Varun Jog, Po-Ling Loh

We consider the problem of influence maximization in fixed networks for contagion models in an adversarial setting.

On model misspecification and KL separation for Gaussian graphical models

no code implementations 10 Jan 2015 Varun Jog, Po-Ling Loh

We establish bounds on the KL divergence between two multivariate Gaussian distributions in terms of the Hamming distance between the edge sets of the corresponding graphical models.
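
For reference, a direct implementation of the closed-form KL divergence between multivariate Gaussians, $\mathrm{KL}(\mathcal{N}(\mu_0, \Sigma_0) \,\|\, \mathcal{N}(\mu_1, \Sigma_1)) = \tfrac{1}{2}\bigl[\operatorname{tr}(\Sigma_1^{-1}\Sigma_0) + (\mu_1-\mu_0)^\top \Sigma_1^{-1}(\mu_1-\mu_0) - d + \ln\tfrac{\det\Sigma_1}{\det\Sigma_0}\bigr]$; the two covariance matrices below are an illustrative pair differing by one edge:

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL( N(mu0,S0) || N(mu1,S1) ) via the standard closed form."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Two illustrative Gaussian graphical models: same variances, one extra edge.
S0 = np.array([[1.0, 0.3], [0.3, 1.0]])   # edge present
S1 = np.eye(2)                             # edge absent
print(f"KL = {gaussian_kl(np.zeros(2), S0, np.zeros(2), S1):.4f}")
```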

Model Selection
