Search Results for author: N. V. Vinodchandran

Found 8 papers, 0 papers with code

Distribution Learning Meets Graph Structure Sampling

no code implementations 13 May 2024 Arnab Bhattacharyya, Sutanu Gayen, Philips George John, Sayantan Sen, N. V. Vinodchandran

This work establishes a novel link between the problem of PAC-learning high-dimensional graphical models and the task of (efficient) counting and sampling of graph structures, using an online learning framework.

PAC learning

Total Variation Distance Estimation Is as Easy as Probabilistic Inference

no code implementations 17 Sep 2023 Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran

In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models.
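The paper's contribution is the reduction for high-dimensional graphical models; for an explicitly given discrete distribution, total variation distance is directly computable. A minimal illustrative sketch of the quantity being approximated (not the paper's algorithm):

```python
def tv_distance(p, q):
    """Total variation distance between two discrete distributions,
    given as probability vectors over the same support:
    d_TV(P, Q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Example: a fair coin vs. a coin with heads probability 0.8.
print(tv_distance([0.5, 0.5], [0.8, 0.2]))  # 0.3
```

The hardness addressed by the paper arises when $P$ and $Q$ are given implicitly (e.g., as Bayes nets over many variables), so this sum has exponentially many terms.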

Efficient inference of interventional distributions

no code implementations 25 Jul 2021 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, Vedant Raval, N. V. Vinodchandran

For sets $\mathbf{X},\mathbf{Y}\subseteq \mathbf{V}$ and an assignment ${\bf x}$ to the variables in $\mathbf{X}$, let $P_{\bf x}(\mathbf{Y})$ denote the interventional distribution on $\mathbf{Y}$ with respect to the intervention that sets $\mathbf{X}$ to ${\bf x}$.
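In the simplest identifiable setting, an interventional distribution can be computed by truncated factorization (dropping the factor of the intervened variable). A toy sketch with hypothetical, made-up probability tables on the confounded network $Z \to X$, $Z \to Y$, $X \to Y$, contrasting $P_x(Y)$ with the observational conditional $P(Y \mid X = x)$:

```python
# Hypothetical binary network Z -> X, Z -> Y, X -> Y; Z confounds X and Y.
# All table values below are illustrative, not from the paper.
P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
P_Y_given_XZ = {(0, 0): {0: 0.9, 1: 0.1},
                (0, 1): {0: 0.6, 1: 0.4},
                (1, 0): {0: 0.5, 1: 0.5},
                (1, 1): {0: 0.1, 1: 0.9}}

def p_do_x(y, x):
    """Interventional P_x(Y=y) by truncated factorization:
    sum_z P(z) * P(y | x, z); the P(x | z) factor is dropped."""
    return sum(P_Z[z] * P_Y_given_XZ[(x, z)][y] for z in (0, 1))

def p_obs(y, x):
    """Observational conditional P(Y=y | X=x), for comparison."""
    num = sum(P_Z[z] * P_X_given_Z[z][x] * P_Y_given_XZ[(x, z)][y] for z in (0, 1))
    den = sum(P_Z[z] * P_X_given_Z[z][x] for z in (0, 1))
    return num / den

print(p_do_x(1, 1))  # 0.7
print(p_obs(1, 1))   # ~0.811 -- differs, because Z confounds X and Y
```

The gap between the two quantities is exactly why inference of interventional distributions is a distinct (and, as the paper shows, sometimes hard) computational problem.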

Testing Product Distributions: A Closer Look

no code implementations 29 Dec 2020 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, N. V. Vinodchandran

We study the problems of identity and closeness testing of $n$-dimensional product distributions.
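A standard fact underlying such testing problems is that total variation distance is subadditive over product distributions: $d_{\mathrm{TV}}(\otimes_i P_i, \otimes_i Q_i) \leq \sum_i d_{\mathrm{TV}}(P_i, Q_i)$. A minimal sketch for products of Bernoullis (an illustration of the setting, not the paper's tester):

```python
def product_tv_upper_bound(ps, qs):
    """Upper bound on d_TV between two product distributions over {0,1}^n,
    given their marginal means ps and qs.  For Bernoulli marginals,
    d_TV(Ber(p), Ber(q)) = |p - q|, and TV is subadditive over products."""
    return sum(abs(p - q) for p, q in zip(ps, qs))

# Two 3-dimensional product distributions differing in one coordinate.
print(product_tv_upper_bound([0.5, 0.5, 0.5], [0.5, 0.5, 0.7]))  # 0.2
```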

Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu

no code implementations 9 Nov 2020 Arnab Bhattacharyya, Sutanu Gayen, Eric Price, N. V. Vinodchandran

For a distribution $P$ on $\Sigma^n$ and a tree $T$ on $n$ nodes, we say $T$ is an $\varepsilon$-approximate tree for $P$ if there is a $T$-structured distribution $Q$ such that $D(P\;||\;Q)$ exceeds $\min_{Q'} D(P\;||\;Q')$, taken over all tree-structured distributions $Q'$, by at most $\varepsilon$.
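The classical Chow-Liu algorithm, whose sample complexity the paper analyzes, finds the best tree by computing pairwise empirical mutual informations and taking a maximum-weight spanning tree. A self-contained sketch (Prim's algorithm; a simplified illustration, not the paper's finite-sample analysis):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(samples, i, j):
    """Empirical mutual information I(X_i; X_j) from samples (list of tuples)."""
    n = len(samples)
    ci = Counter(s[i] for s in samples)
    cj = Counter(s[j] for s in samples)
    cij = Counter((s[i], s[j]) for s in samples)
    # I(X;Y) = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log(c * n / (ci[a] * cj[b]))
               for (a, b), c in cij.items())

def chow_liu_tree(samples, n_vars):
    """Chow-Liu skeleton: maximum-weight spanning tree on pairwise MI."""
    weight = {frozenset((i, j)): mutual_information(samples, i, j)
              for i, j in combinations(range(n_vars), 2)}
    in_tree = {0}
    edges = []
    while len(in_tree) < n_vars:  # Prim's algorithm, greedily growing from node 0
        best = max(((u, v) for u in in_tree
                    for v in range(n_vars) if v not in in_tree),
                   key=lambda e: weight[frozenset(e)])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# X0 and X1 are perfectly correlated; X2 is independent of both,
# so the learned tree must contain the edge {0, 1}.
samples = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
print(chow_liu_tree(samples, 3))
```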

Learning and Sampling of Atomic Interventions from Observations

no code implementations ICML 2020 Arnab Bhattacharyya, Sutanu Gayen, Saravanan Kandasamy, Ashwin Maran, N. V. Vinodchandran

Assuming that $G$ has bounded in-degree, bounded c-components ($k$), and that the observational distribution is identifiable and satisfies a certain strong positivity condition, we give an algorithm that takes $m=\tilde{O}(n\epsilon^{-2})$ samples from $P$ and $O(mn)$ time, and outputs with high probability a description of a distribution $\hat{P}$ such that $d_{\mathrm{TV}}(P_x, \hat{P}) \leq \epsilon$.

Interpretable Classification via Supervised Variational Autoencoders and Differentiable Decision Trees

no code implementations ICLR 2018 Eleanor Quint, Garrett Wirka, Jacob Williams, Stephen Scott, N. V. Vinodchandran

As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows.

Decoder, General Classification
