no code implementations • 20 Dec 2022 • Anand Jerry George, Lekshmi Ramesh, Aditya Vikram Singh, Himanshu Tyagi
We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP and has the following error guarantee: Denoting by $M_t$ the maximum number of samples contributed by a user, as long as $\tilde{\Omega}(1/\varepsilon)$ users have $M_t/2$ samples each, the error at time $t$ is $\tilde{O}(1/\sqrt{t}+\sqrt{M_t}/(t\varepsilon))$.
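For intuition about user-level (as opposed to item-level) sensitivity, here is a single-release toy sketch, not the paper's continual-release algorithm: each user first averages their own bounded samples, so replacing one user's entire contribution moves the aggregate by at most $1/n$, and Laplace noise is calibrated to that.

```python
import numpy as np

# Illustrative single-time-instant sketch (NOT the paper's algorithm):
# user-level epsilon-DP mean estimation for samples bounded in [0, 1].
def user_level_dp_mean(user_samples, eps, rng):
    # user_samples: list of 1-D arrays, one per user, values in [0, 1]
    user_means = np.array([np.mean(u) for u in user_samples])
    n = len(user_means)
    sensitivity = 1.0 / n  # replacing one user's data moves the mean by <= 1/n
    noise = rng.laplace(0.0, sensitivity / eps)
    return float(np.mean(user_means) + noise)
```

Averaging within each user first is what makes the sensitivity $1/n$ regardless of how many samples a single user contributes; the noise scale is then $1/(n\varepsilon)$.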
no code implementations • 14 Mar 2022 • Jayadev Acharya, Clément L. Canonne, Ziteng Sun, Himanshu Tyagi
Without sparsity assumptions, it has been established that interactivity cannot improve the minimax rates of estimation under these information constraints.
no code implementations • NeurIPS 2021 • Jayadev Acharya, Clement Canonne, YuHan Liu, Ziteng Sun, Himanshu Tyagi
We obtain tight minimax rates for the problem of distributed estimation of discrete distributions under communication constraints, where $n$ users observing $m$ samples each can broadcast only $\ell$ bits.
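A toy protocol for the $m=1$ case makes the setting concrete (an illustrative sketch, not the paper's rate-optimal scheme): user $j$ is assigned $\ell$ symbols round-robin and broadcasts one indicator bit per assigned symbol; averaging the bits received for each symbol gives an unbiased estimate of its probability.

```python
import numpy as np

# Toy ell-bit broadcast protocol for m = 1 sample per user (illustrative,
# not the paper's rate-optimal scheme).  User j broadcasts ell indicator
# bits, one for each symbol assigned to it round-robin.
def ell_bit_estimate(samples, k, ell):
    n = len(samples)
    est = np.zeros(k)
    cover = np.zeros(k)
    for j, x in enumerate(samples):
        assigned = [(j * ell + r) % k for r in range(ell)]  # user j's symbols
        for i in assigned:
            est[i] += (x == i)   # one broadcast bit per assigned symbol
            cover[i] += 1
    return est / np.maximum(cover, 1)  # unbiased: E[bit] = p_i
```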
1 code implementation • 11 Sep 2021 • Shubham K Jha, Prathamesh Mayekar, Himanshu Tyagi
We show that a simple scaled transmission analog coding scheme results in a slowdown in convergence rate by a factor of $\sqrt{d(1+1/\mathtt{SNR})}$.
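A minimal sketch of such a scheme, assuming the receiver knows $\|x\|$: scale the vector to a total power budget $dP$, pass it through additive Gaussian noise of per-coordinate variance $\sigma^2$, and rescale at the receiver. The resulting MSE is $\|x\|^2\sigma^2/P$, which grows with the dimension $d$ through $\|x\|^2$ and underlies the dimension-dependent slowdown.

```python
import numpy as np

# Scaled analog transmission of a d-dimensional vector over an AWGN
# channel (sketch; assumes the receiver knows ||x||).
def analog_send_receive(x, P, sigma2, rng):
    d = x.size
    scale = np.sqrt(d * P) / np.linalg.norm(x)   # meet total power budget d*P
    y = scale * x + rng.normal(0.0, np.sqrt(sigma2), size=d)
    return y / scale   # unbiased estimate; MSE = ||x||^2 * sigma2 / P
```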
no code implementations • 20 May 2021 • Lekshmi Ramesh, Chandra R. Murthy, Himanshu Tyagi
For a given budget of $m$ measurements per sample, the goal is to recover the $\ell$ underlying supports without knowledge of the group labels.
1 code implementation • 24 Nov 2020 • Prathamesh Mayekar, Shubham Jha, Ananda Theertha Suresh, Himanshu Tyagi
We propose \emph{Wyner-Ziv estimators}, which are efficient in both communication and computation, and are near-optimal when an upper bound on the distance between the side information and the data is known.
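A one-dimensional modulo quantizer in this spirit (an illustrative sketch, not the paper's exact construction): when the decoder's side information $y$ satisfies $|x - y| \le \Delta$, the encoder can send its quantization index only modulo $L$, and the decoder uses $y$ to disambiguate among the candidates, which are spaced $L\delta$ apart.

```python
import math

def wz_encode(x, delta, L):
    # quantize to a grid of step delta, send only the index mod L
    return round(x / delta) % L

def wz_decode(m, y, delta, L):
    # among grid points whose index is congruent to m (mod L),
    # return the one closest to the side information y
    q0 = round(y / delta)
    base = q0 - ((q0 - m) % L)   # largest index <= q0 with index = m (mod L)
    q = min(base, base + L, key=lambda q: abs(q * delta - y))
    return q * delta

def modulus_for(delta, Delta):
    # correct decoding is guaranteed when L*delta > 2*Delta + delta
    return math.ceil(2 * Delta / delta) + 2
```

The encoder spends only about $\log_2 L \approx \log_2(\Delta/\delta)$ bits instead of the bits needed to describe $x$ over its full range, while still guaranteeing error at most $\delta/2$.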
no code implementations • 21 Jul 2020 • Jayadev Acharya, Clément L. Canonne, Yu-Han Liu, Ziteng Sun, Himanshu Tyagi
We study the role of interactivity in distributed statistical inference under information constraints, e.g., communication constraints and local differential privacy.
1 code implementation • 24 Apr 2020 • Aditya Gopalan, Himanshu Tyagi
We use the simulation framework to compare the performance of three testing policies: Random Symptomatic Testing (RST), Contact Tracing (CT), and a new Location Based Testing policy (LBT).
no code implementations • 22 Aug 2019 • Prathamesh Mayekar, Himanshu Tyagi
Finally, we propose an adaptive gain quantizer which, when used with RATQ as the shape quantizer, outperforms uniform gain quantization and is, in fact, close to optimal.
no code implementations • 20 Jul 2019 • Jayadev Acharya, Clément L. Canonne, Yanjun Han, Ziteng Sun, Himanshu Tyagi
We study goodness-of-fit of discrete distributions in the distributed setting, where samples are divided between multiple users who can only release a limited amount of information about their samples due to various information constraints.
no code implementations • 20 May 2019 • Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi
We propose a general-purpose simulate-and-infer strategy that uses only private-coin communication protocols and is sample-optimal for distribution learning.
no code implementations • 30 Dec 2018 • Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi
Underlying our bounds is a characterization of the contraction in chi-square distances between the observed distributions of the samples when information constraints are placed.
no code implementations • 7 Aug 2018 • Jayadev Acharya, Clément L. Canonne, Cody Freitag, Himanshu Tyagi
We are concerned with two settings: first, when we insist on using an already deployed, general-purpose locally differentially private mechanism, such as the popular RAPPOR or the recently introduced Hadamard Response, for collecting data, and must build our tests from the data collected via this mechanism; and second, when no such restriction is imposed and we can design a bespoke mechanism specifically for testing.
no code implementations • 19 Apr 2018 • Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi
Nonetheless, we present a Las Vegas algorithm that simulates a single sample from the unknown distribution using $O(k/2^\ell)$ samples in expectation.
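One natural scheme in this spirit can be sketched as follows (a hedged sketch, not necessarily the paper's exact protocol): split the $k$ symbols into $B$ blocks of $2^\ell - 1$ symbols, so a user can report "which symbol of the block is my sample, or none" in $\ell$ bits. The referee picks a block uniformly, queries one fresh user, and accepts on a hit; each attempt accepts with probability exactly $1/B$, the accepted symbol has distribution exactly $p$, and the expected number of users consumed is $B = O(k/2^\ell)$.

```python
import math
import random

# Las Vegas simulation sketch (not necessarily the paper's protocol):
# the referee outputs one sample distributed exactly as p, consuming
# O(k / 2^ell) users in expectation, each sending at most ell bits.
def simulate_sample(p, ell, rng):
    k = len(p)
    block = 2 ** ell - 1            # block size reportable with ell bits
    B = math.ceil(k / block)
    used = 0
    while True:
        b = rng.randrange(B)                      # referee's uniform block choice
        x = rng.choices(range(k), weights=p)[0]   # a fresh user's private sample
        used += 1
        if b * block <= x < min((b + 1) * block, k):
            return x, used                        # hit: output the user's symbol
```

Per attempt, the output is symbol $x$ with probability $p(x)/B$, so conditioning on acceptance gives exactly $p$; the run length is geometric with mean $B$.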
no code implementations • 2 Aug 2014 • Jayadev Acharya, Alon Orlitsky, Ananda Theertha Suresh, Himanshu Tyagi
It was recently shown that estimating the Shannon entropy $H({\rm p})$ of a discrete $k$-symbol distribution ${\rm p}$ requires $\Theta(k/\log k)$ samples, a number that grows near-linearly in the support size.
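The near-linear sample requirement is visible empirically: at sample sizes $n \approx k$, the naive plug-in estimator is badly biased downward, and even the classical Miller-Madow correction only partially compensates. The sketch below shows both (illustrative only; these are not the estimators achieving the $\Theta(k/\log k)$ rate).

```python
import math
from collections import Counter

def plugin_entropy(samples):
    # empirical ("plug-in") Shannon entropy, in nats
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in Counter(samples).values())

def miller_madow(samples):
    # classical first-order bias correction: add (K_observed - 1) / (2n)
    n = len(samples)
    counts = Counter(samples)
    return plugin_entropy(samples) + (len(counts) - 1) / (2 * n)
```

For the uniform distribution on $k = 1000$ symbols ($H = \log 1000 \approx 6.91$ nats), $n = 1000$ samples typically yield a plug-in estimate around half a nat too low.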