Distributed Inference with Sparse and Quantized Communication

2 Apr 2020 · Aritra Mitra, John A. Richards, Saurabh Bagchi, Shreyas Sundaram

We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state, and aim to uniquely identify this state from a finite set of hypotheses. We focus on scenarios where communication between agents is costly and takes place over channels with finite bandwidth. To reduce the frequency of communication, we develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis. Building on this principle, we design a trigger condition under which an agent broadcasts only those components of its belief vector that have adequate innovation, and only to those neighbors that require such information. We prove that our rule guarantees convergence to the true state exponentially fast almost surely despite sparse communication, and that it has the potential to significantly reduce information flow from uninformative agents to informative agents. Next, to deal with finite-precision communication channels, we propose a distributed learning rule that leverages the idea of adaptive quantization. We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis. For both of our proposed algorithms, we rigorously characterize the trade-offs between communication efficiency and the learning rate.
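The paper specifies the exact update and trigger rules; as a rough illustration of the two ideas described in the abstract, here is a minimal Python sketch. Everything below is an assumption made for illustration: the function names (`local_bayes_update`, `should_broadcast`, `diffuse_low_beliefs`), the log-scale innovation test, and the threshold `delta` are hypothetical, not the authors' definitions.

```python
import numpy as np

def local_bayes_update(belief, likelihoods):
    # One Bayesian update of an agent's belief vector using the likelihoods
    # l_i(s_t | theta) of its newly observed private signal.
    posterior = belief * likelihoods
    return posterior / posterior.sum()

def should_broadcast(belief_theta, last_sent_theta, delta=0.1):
    # Illustrative trigger: transmit the component for a hypothesis only if
    # it carries "adequate innovation" relative to the value last broadcast.
    # The log scale and the threshold delta are assumptions of this sketch.
    return abs(np.log(belief_theta) - np.log(last_sent_theta)) > delta

def diffuse_low_beliefs(own_belief, received):
    # Combine with neighbors by keeping, per hypothesis, the lowest belief
    # available -- a literal reading of "diffusing low beliefs on each false
    # hypothesis" -- and then renormalizing. received[theta] is the (possibly
    # empty) list of component values actually received for that hypothesis.
    combined = np.array([min([own_belief[t]] + received.get(t, []))
                         for t in range(len(own_belief))])
    return combined / combined.sum()
```

For the quantized variant, the abstract's idea of "sequentially refining the range of the quantizers" can be pictured as a 1-bit encoder whose range both sender and receiver halve after every exchange; again, the functions below are a hypothetical sketch, not the paper's construction.

```python
def one_bit_quantize(value, lo, hi):
    # Encode a value in [lo, hi] with a single bit: 1 if above the midpoint.
    return int(value > 0.5 * (lo + hi))

def refine_range(bit, lo, hi):
    # Both sender and receiver shrink the range to the half indicated by the
    # bit, so the quantizer's resolution doubles with every exchanged bit.
    mid = 0.5 * (lo + hi)
    return (mid, hi) if bit else (lo, mid)
```

Starting from the range $(0, 1)$, which contains any belief component, repeated calls to `one_bit_quantize` and `refine_range` locate a value to within $2^{-k}$ after $k$ bits, which is one way an exponentially shrinking quantization error can coexist with a 1-bit-per-hypothesis budget.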
