1 code implementation • 21 Dec 2023 • Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli
To our knowledge, these are the first algorithms capable of effectively learning through heterogeneous action erasure channels.
no code implementations • 17 Nov 2022 • Mine Gokce Dogan, Martina Cardone, Christina Fragouli
This paper aims to develop resilient transmission mechanisms to suitably distribute traffic across multiple paths in an arbitrary millimeter-wave (mmWave) network.
no code implementations • 8 Nov 2022 • Osama A. Hanna, Lin F. Yang, Christina Fragouli
When the context distribution is unknown, we establish an algorithm that reduces the stochastic contextual instance to a sequence of linear bandit instances with small misspecification, and that achieves nearly the same worst-case regret bound as an algorithm solving the misspecified linear bandit instances directly.
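The core reduction can be illustrated with a minimal sketch (all names and the feature map are hypothetical, not the paper's construction): replace each arm's context-dependent features with its expected feature vector under the context distribution, then run a standard linear bandit algorithm (here, LinUCB) on the resulting fixed action set.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 3, 4, 2000
theta = rng.normal(size=d)          # unknown reward parameter

# Hypothetical feature map phi(context, arm); contexts are i.i.d. with mean 1.
arm_dirs = rng.normal(size=(n_arms, d))
def phi(x, a):
    return x * arm_dirs[a]          # elementwise modulation (illustrative only)

# Step 1: estimate each arm's expected feature vector by sampling contexts.
samples = 1.0 + rng.normal(size=(10000, d))
g = np.array([phi(samples, a).mean(axis=0) for a in range(n_arms)])

# Step 2: run LinUCB on the fixed action set {g(a)} as a plain linear bandit.
A = np.eye(d)                       # regularized Gram matrix
b = np.zeros(d)
alpha = 1.0                         # exploration width
total_reward = 0.0
for t in range(T):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # Upper confidence bound for each arm's expected feature vector.
    ucb = g @ theta_hat + alpha * np.sqrt(np.einsum('ad,dk,ak->a', g, A_inv, g))
    a = int(np.argmax(ucb))
    x = 1.0 + rng.normal(size=d)                 # fresh context
    r = phi(x, a) @ theta + 0.1 * rng.normal()   # noisy linear reward
    A += np.outer(g[a], g[a])
    b += r * g[a]
    total_reward += r
```

The sampling error in estimating each $g(a)$ is one source of the small misspecification the snippet refers to: the linear bandit is run on slightly perturbed action vectors.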
no code implementations • 7 Jul 2022 • Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi
In the shuffled model, we also achieve regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ for small $\epsilon$, as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.
no code implementations • 8 Jun 2022 • Osama A. Hanna, Lin F. Yang, Christina Fragouli
The contextual linear bandit is a rich and theoretically important model with many practical applications.
no code implementations • 11 Nov 2021 • Osama A. Hanna, Lin F. Yang, Christina Fragouli
Existing works usually fail to address this issue and can become infeasible in certain applications.
no code implementations • 1 Aug 2021 • Mine Gokce Dogan, Yahya H. Ezzeldin, Christina Fragouli, Addison W. Bohannon
We consider a source that wishes to communicate with a destination at a desired rate, over a mmWave network where links are subject to blockage and nodes to failure (e.g., in a hostile military environment).
no code implementations • 14 Dec 2020 • Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi
In this paper, we propose an alternate approach to learn from distributed data that quantizes data instead of gradients, and can support learning over applications where the size of gradient updates is prohibitive.
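A minimal sketch of the idea (hypothetical quantizer and model, not the paper's scheme): uniformly quantize each training sample once to a fixed number of bits, then run ordinary gradient descent on the quantized data, so communication cost is paid per data sample rather than per gradient update.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits=4, lo=-3.0, hi=3.0):
    """Uniform scalar quantizer: map x to one of 2**bits levels in [lo, hi]."""
    levels = 2 ** bits
    x = np.clip(x, lo, hi)
    idx = np.round((x - lo) / (hi - lo) * (levels - 1))
    return lo + idx * (hi - lo) / (levels - 1)

# Synthetic linear-regression data held at distributed nodes.
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.05 * rng.normal(size=n)

Xq = quantize(X)            # data communicated once, in quantized form
w = np.zeros(d)
lr = 0.1
for _ in range(300):        # full-batch gradient descent on quantized data
    grad = Xq.T @ (Xq @ w - y) / n
    w -= lr * grad
```

With 4-bit quantization the recovered parameters stay close to the true ones, illustrating why quantizing data can be a viable alternative when per-round gradient communication is the bottleneck.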
no code implementations • 24 May 2020 • Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi
This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
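For intuition on randomness as a resource in LDP, a standard baseline mechanism (not the paper's construction) is randomized response: report a private bit truthfully with probability $e^\epsilon/(1+e^\epsilon)$ and flip it otherwise, which satisfies $\epsilon$-LDP and consumes one biased coin flip per report.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def randomized_response(bit, eps):
    """Report the true bit w.p. e^eps/(1+e^eps), else flip it (eps-LDP)."""
    p_keep = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if rng.random() < p_keep else 1 - bit

# Unbiased estimate of the population mean from privatized bits.
eps = 1.0
p_keep = math.exp(eps) / (1.0 + math.exp(eps))
bits = rng.integers(0, 2, size=20000)
reports = np.array([randomized_response(b, eps) for b in bits])
# E[report] = mu*(2*p_keep - 1) + (1 - p_keep), so invert that affine map.
est = (reports.mean() - (1 - p_keep)) / (2 * p_keep - 1)
```

The likelihood ratio of any report under inputs 0 and 1 is exactly $e^\epsilon$, which is the defining $\epsilon$-LDP guarantee.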
no code implementations • 14 May 2020 • Tan Li, Linqi Song, Christina Fragouli
In this paper, we are interested in what we term the federated private bandits framework, that combines differential privacy with multi-agent bandit learning.
no code implementations • 1 Nov 2019 • Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi
We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features that are gathered from distributed nodes through communication constrained channels.
no code implementations • 15 Oct 2018 • Linqi Song, Christina Fragouli, Devavrat Shah
We consider recommendation systems that must operate under wireless bandwidth constraints, measured as the number of broadcast transmissions, and demonstrate a tradeoff (tight for some instances) between regret and bandwidth in two scenarios: contextual multi-armed bandits, and the case where a latent structure in the message space can be exploited to shorten the learning phase.