no code implementations • 7 Apr 2023 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
In this work we derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
no code implementations • 25 Jan 2023 • Michele Cirillo, Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
We devise a novel learning strategy where each agent forms a valid belief by completing the partial beliefs received from its neighbors.
no code implementations • 5 Dec 2022 • Mert Kayaalp, Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
This work studies networked agents cooperating to track a dynamical state of nature under partial information.
no code implementations • 16 Sep 2022 • Roula Nassif, Stefan Vlaski, Marco Carpentiero, Vincenzo Matta, Marc Antonini, Ali H. Sayed
In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces.
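A minimal sketch of this idea, under illustrative assumptions: each agent holds a simple quadratic cost 0.5*(w_k - d_k)^2, and the network vector is constrained to the span of a single basis vector u (chosen here as the consensus direction, so the projection conveniently reduces to a network average). The costs, step-size, and subspace are hypothetical stand-ins, not the paper's setting.

```python
import numpy as np

# Illustrative subspace-constrained decentralized optimization:
# a local gradient (adaptation) step followed by a projection that
# enforces the low-dimensional subspace constraint.

N, mu, T = 5, 0.5, 200
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # data defining agent k's cost
u = np.ones(N) / np.sqrt(N)               # basis of the constraint subspace
P = np.outer(u, u)                        # orthogonal projector onto span(u)

w = np.zeros(N)
for _ in range(T):
    psi = w - mu * (w - d)                # local gradient (adaptation) step
    w = P @ psi                           # enforce the subspace constraint
```

With this consensus subspace the constrained minimizer replicates the network-wide mean of the data at every agent, which the iteration reaches geometrically fast.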
1 code implementation • 17 Dec 2021 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
In the proposed social machine learning (SML) strategy, two phases are present: in the training phase, classifiers are independently trained to generate a belief over a set of hypotheses using a finite number of training samples; in the prediction phase, classifiers evaluate streaming unlabeled observations and share their instantaneous beliefs with neighboring classifiers.
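A caricature of the two phases, under heavy simplifying assumptions: binary Gaussian hypotheses, classifiers reduced to estimated class means, and a fully connected network with uniform combination weights. The log-linear fusion rule below is one common way to combine shared beliefs, used here only to illustrate the training/prediction split.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_loglik(x, mu):
    # log-likelihood of a unit-variance Gaussian, up to a constant
    return -0.5 * (x - mu) ** 2

N, n_train, T = 3, 50, 100
A = np.full((N, N), 1.0 / N)          # doubly stochastic combination weights

# Training phase: each classifier is trained independently on a finite
# labeled sample (here, by estimating the class-conditional means).
mu0_hat = np.array([rng.normal(-1.0, 1.0, n_train).mean() for _ in range(N)])
mu1_hat = np.array([rng.normal(+1.0, 1.0, n_train).mean() for _ in range(N)])

# Prediction phase: streaming unlabeled data generated under hypothesis H1;
# agents form instantaneous beliefs and fuse neighbors' shared log-beliefs.
lam = np.zeros(N)                      # log-belief ratios log b(H1)/b(H0)
for _ in range(T):
    x = rng.normal(+1.0, 1.0, N)       # one fresh observation per agent
    loglik = gauss_loglik(x, mu1_hat) - gauss_loglik(x, mu0_hat)
    lam = A @ (lam + loglik)           # share and combine instantaneous beliefs

belief_H1 = 1.0 / (1.0 + np.exp(-lam))
```

After a modest number of streaming observations, every agent's belief concentrates on the true hypothesis despite each classifier being trained on only a finite sample.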
no code implementations • 3 Dec 2021 • Marco Carpentiero, Vincenzo Matta, Ali H. Sayed
We propose a diffusion strategy nicknamed ACTC (Adapt-Compress-Then-Combine), which relies on the following steps: i) an adaptation step, where each agent performs an individual stochastic-gradient update with constant step-size; ii) a compression step, which leverages a recently introduced class of stochastic compression operators; and iii) a combination step, where each agent combines the compressed updates received from its neighbors.
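The three steps above can be sketched on a toy quadratic problem. The unbiased randomized quantizer and the fully connected combination matrix below are illustrative stand-ins, not the paper's exact operators.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, mu, T = 3, 2, 0.1, 400
wstar = np.array([1.0, -0.5])         # common minimizer of all agents' costs
A = np.full((N, N), 1.0 / N)          # doubly stochastic combination weights

def rand_quantize(x, step=1.0 / 256):
    # unbiased stochastic quantizer: round up or down at random so E[Q(x)] = x
    low = np.floor(x / step) * step
    p = (x - low) / step
    return low + step * (rng.random(x.shape) < p)

w = np.zeros((N, d))
for _ in range(T):
    psi = w - mu * (w - wstar)        # i) adapt: constant step-size gradient step
    q = rand_quantize(psi)            # ii) compress the intermediate iterate
    w = A @ q                         # iii) combine neighbors' compressed updates
```

Because the quantizer is unbiased, the network hovers in a small neighborhood of the minimizer, with a steady-state error controlled by the quantization step and the step-size.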
no code implementations • 23 Oct 2020 • Virginia Bordignon, Stefan Vlaski, Vincenzo Matta, Ali H. Sayed
Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase.
1 code implementation • 24 Jun 2020 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
Instead of sharing their entire beliefs, this work considers the case in which agents share their beliefs regarding only one hypothesis of interest, with the purpose of evaluating its validity, and derives conditions under which this policy does not affect truth learning.
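A toy sketch of partial belief sharing, under illustrative assumptions: three Gaussian hypotheses, a fully connected network, and one simple completion rule in which each receiver spreads the unshared probability mass uniformly over the remaining hypotheses before a geometric-average combination.

```python
import numpy as np

rng = np.random.default_rng(2)

M, N, T = 3, 3, 300
means = np.array([0.0, 1.0, 2.0])    # likelihood means for the M hypotheses
A = np.full((N, N), 1.0 / N)         # uniform combination weights

def fill(shared):
    # reconstruct a full belief from the single shared entry b(theta_0):
    # remaining mass is spread uniformly over the other hypotheses
    b = np.zeros((len(shared), M))
    b[:, 0] = shared
    b[:, 1:] = ((1.0 - shared) / (M - 1))[:, None]
    return b

belief = np.full((N, M), 1.0 / M)
for _ in range(T):
    x = rng.normal(means[0], 1.0, N)                # data under hypothesis 0
    lik = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2)
    psi = belief * lik
    psi /= psi.sum(axis=1, keepdims=True)           # local Bayesian update
    rec = fill(psi[:, 0])                           # neighbors see only psi(theta_0)
    log_rec = np.log(np.clip(rec, 1e-300, None))
    belief = np.exp(A @ log_rec)                    # geometric-average combination
    belief /= belief.sum(axis=1, keepdims=True)
```

In this toy run the shared hypothesis is also the true one, and the completed beliefs still concentrate on it; the paper's contribution is characterizing exactly when such partial-sharing policies preserve truth learning.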
no code implementations • 18 Dec 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed
Many optimization, inference and learning tasks can be accomplished efficiently by means of decentralized processing algorithms where the network topology (i.e., the graph) plays a critical role in enabling the interactions among neighboring nodes.
1 code implementation • 30 Oct 2019 • Virginia Bordignon, Vincenzo Matta, Ali H. Sayed
This work studies the learning abilities of agents sharing partial beliefs over social networks.
no code implementations • 5 Apr 2019 • Vincenzo Matta, Augusto Santos, Ali H. Sayed
This claim is proved for three matrix estimators: i) the Granger estimator, which adapts to the partial observability setting the solution that is exact under full observability; ii) the one-lag correlation matrix; and iii) the residual estimator based on the difference between two consecutive time samples.
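The three estimators can be computed numerically on a toy example. The sketch below assumes an illustrative stable VAR(1) model with all nodes observed (full observability), which is only a sanity check, not the paper's partial-observability regime.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, burn = 4, 100_000, 1_000
A = 0.2 * np.eye(n) + 0.1 * np.ones((n, n))   # stable: spectral radius 0.6

# simulate y_{t+1} = A @ y_t + w_{t+1} and accumulate sample correlations
y = np.zeros(n)
R0 = np.zeros((n, n))                 # zero-lag correlation E[y_t y_t^T]
R1 = np.zeros((n, n))                 # one-lag correlation E[y_{t+1} y_t^T]
for t in range(T):
    y_next = A @ y + rng.normal(0.0, 1.0, n)
    if t >= burn:
        R0 += np.outer(y, y)
        R1 += np.outer(y_next, y)
    y = y_next
R0 /= T - burn
R1 /= T - burn

granger = R1 @ np.linalg.inv(R0)      # i) exact under full observability
one_lag = R1                          # ii) one-lag correlation matrix
residual = R1 - R0                    # iii) consecutive-sample differences
```

Under full observability the Granger estimator recovers the combination matrix A up to sampling error, since R1 = A R0 for this model.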