no code implementations • 3 May 2024 • Sang Bin Moon, Abolfazl Hashemi
The Adversarial Markov Decision Process (AMDP) is a learning framework that deals with unknown and varying tasks in decision-making applications like robotics and recommendation systems.
1 code implementation • 9 Apr 2024 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server.
no code implementations • 9 Apr 2024 • Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton
Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}(\frac{1}{\sum_{i=1}^{N} \frac{1}{t_{i}}})$, where $t_{i}$ denotes the per-iteration time consumption at agent $i$ and $t_{\max} = \max_{i} t_{i}$.
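To make this comparison concrete, here is a small illustrative calculation of the two per-update rates (the numbers are made up, not from the paper):

```python
# Illustrative check of the claim above with hypothetical per-iteration
# times t_i for N = 3 agents.
t = [1.0, 2.0, 10.0]
N = len(t)
sync_rate = max(t) / N                      # O(t_max / N): wait for the slowest agent
async_rate = 1 / sum(1 / ti for ti in t)    # O(1 / sum_i 1/t_i): harmonic aggregation
print(sync_rate, async_rate)                # ~3.33 vs 0.625; async wins with stragglers
```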
no code implementations • 4 Apr 2024 • Ege C. Kaya, Abolfazl Hashemi
This approach bridges the existing gap in the optimization of performance-robustness trade-offs in multi-task subset selection.
no code implementations • 20 Mar 2024 • Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, Stanislaw H. Żak
A novel Decentralized Noisy Model Update Tracking Federated Learning algorithm (FedNMUT) is proposed, tailored to function efficiently over noisy communication channels that reflect imperfect information exchange.
no code implementations • 28 Feb 2024 • Deepak Ravikumar, Efstathia Soufleri, Abolfazl Hashemi, Kaushik Roy
Second, we present a novel insight showing that input loss curvature is upper-bounded by the differential privacy parameter.
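As a rough illustration of the quantity involved, the sketch below estimates the trace of the Hessian of the loss with respect to the input via Hutchinson probes in PyTorch; this is a hypothetical helper for intuition only, and the paper's exact curvature measure may differ.

```python
import torch

def input_loss_curvature(model, loss_fn, x, y, n_probes=8):
    # Hutchinson estimate of tr(H_x), where H_x is the Hessian of the
    # loss with respect to the *input* x (hypothetical helper).
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    est = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(x)
        # Hessian-vector product via double backward.
        (hvp,) = torch.autograd.grad(grad, x, grad_outputs=v, retain_graph=True)
        est += (v * hvp).sum().item()
    return est / n_probes
```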
no code implementations • 14 Jul 2023 • Antesh Upadhyay, Abolfazl Hashemi
We propose an improved convergence analysis technique that characterizes the distributed learning paradigm of federated learning (FL) with imperfect/noisy uplink and downlink communications.
1 code implementation • 9 Jun 2023 • Ege C. Kaya, M. Berk Sahin, Abolfazl Hashemi
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
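For context, a standard two-point zeroth-order gradient estimator, the basic primitive in this setting, can be sketched as follows (an illustration under generic assumptions, not the authors' code):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=10):
    # Two-point zeroth-order estimate: probe f along random Gaussian
    # directions u and average the directional finite differences.
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = np.random.randn(*x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs
```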
1 code implementation • NeurIPS 2023 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.
1 code implementation • 19 Mar 2023 • Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, Stanislaw H. Żak
The first algorithm, Federated Noisy Decentralized Learning (FedNDL1), comes from the literature; in it, noise is added to the parameters to simulate the presence of noisy communication channels.
no code implementations • 10 Feb 2022 • Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, Ufuk Topcu
The regret of the proposed learning algorithm is independent of the size of the state space and polynomial in the rest of the parameters of the game.
no code implementations • 30 Jun 2021 • Farzan Memarian, Abolfazl Hashemi, Scott Niekum, Ufuk Topcu
We explore methodologies to improve the robustness of generative adversarial imitation learning (GAIL) algorithms to observation noise.
2 code implementations • 16 Jun 2021 • Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
Geometric median (GM) is a classical method in statistics for robustly estimating the uncorrupted data; under gross corruption, it achieves the optimal breakdown point of 0.5.
Ranked #20 on Image Classification on MNIST (Accuracy metric)
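For reference, the geometric median is classically computed with Weiszfeld's fixed-point iteration; a minimal NumPy sketch (not the authors' aggregation code):

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    # Weiszfeld iteration for the geometric median of the rows of X.
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), eps)  # avoid /0
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y
```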
no code implementations • 13 Jun 2021 • Rudrajit Das, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon
The primary reason for this is that the clipping operation (i.e., projection onto an $\ell_2$ ball of a fixed radius, called the clipping threshold) used to bound the sensitivity of the average update to each client's update introduces a bias that depends on the clipping threshold and the number of local steps in FL, and this bias is not easy to analyze.
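The clipping operation itself is just a projection onto an $\ell_2$ ball; a minimal sketch, assuming a NumPy vector for the client update:

```python
import numpy as np

def clip_update(update, c):
    # Project `update` onto the l2 ball of radius c (the clipping threshold):
    # updates inside the ball pass through; larger ones are rescaled.
    norm = np.linalg.norm(update)
    return update if norm <= c else update * (c / norm)
```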
2 code implementations • 4 Mar 2021 • Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward
In particular, we provide generalization bounds for functions in a certain class (that is dense in a reproducing kernel Hilbert space) depending on the number of samples and the distribution of features.
no code implementations • 28 Feb 2021 • Yagiz Savas, Abolfazl Hashemi, Abraham P. Vinod, Brian M. Sadler, Ufuk Topcu
In such a setting, we develop a periodic transmission strategy, i.e., a sequence of joint beamforming gain and artificial noise pairs, that prevents the adversaries from decreasing their uncertainty on the information sequence by eavesdropping on the transmission.
no code implementations • 23 Jan 2021 • Yiyue Chen, Abolfazl Hashemi, Haris Vikalo
To our knowledge, this is the first decentralized optimization framework for time-varying directed networks that achieves such a convergence rate and applies to settings requiring sparsified communication.
no code implementations • 7 Dec 2020 • Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
We propose FedGLOMO, a novel federated learning (FL) algorithm with an iteration complexity of $\mathcal{O}(\epsilon^{-1.5})$ to converge to an $\epsilon$-stationary point (i.e., $\mathbb{E}[\|\nabla f(\bm{x})\|^2] \leq \epsilon$) for smooth non-convex functions -- under arbitrary client heterogeneity and compressed communication -- compared to the $\mathcal{O}(\epsilon^{-2})$ complexity of most prior works.
1 code implementation • 20 Nov 2020 • Abolfazl Hashemi, Anish Acharya, Rudrajit Das, Haris Vikalo, Sujay Sanghavi, Inderjit Dhillon
In this paper, we show that, in such compressed decentralized optimization settings, there are benefits to having multiple gossip steps between subsequent gradient iterations, even when the cost of doing so is appropriately accounted for, e.g., by reducing the precision of compressed information.
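A minimal sketch of the idea, assuming a doubly-stochastic mixing matrix W and a generic (possibly lossy) compression operator Q; this is a simplification of the paper's scheme:

```python
import numpy as np

def multi_gossip(params, W, Q=lambda z: z, rounds=3):
    # params: (n_agents, d) matrix of local iterates, one row per agent.
    # Several mixing rounds between gradient steps push the rows toward
    # consensus, at a communication cost that the analysis accounts for.
    for _ in range(rounds):
        params = W @ Q(params)
    return params
```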
no code implementations • 27 May 2020 • Yiyue Chen, Abolfazl Hashemi, Haris Vikalo
We propose a communication-efficient algorithm for decentralized convex optimization that relies on sparsification of the local updates exchanged between neighboring agents in the network.
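A common choice of sparsification operator in this setting is top-$k$ masking; a minimal sketch (the paper's operator may differ):

```python
import numpy as np

def top_k_sparsify(update, k):
    # Keep only the k largest-magnitude entries of the local update.
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out
```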
no code implementations • 27 Sep 2019 • Mahsa Ghasemi, Abolfazl Hashemi, Haris Vikalo, Ufuk Topcu
We formulate the task of representation learning as that of mapping the state space of the model to a low-dimensional state space, called the kernel space.
no code implementations • 22 Jul 2019 • Abolfazl Hashemi, Haris Vikalo, Gustavo de Veciana
The latter implies that uniform sampling strategies with a fixed sampling size achieve a non-trivial approximation factor; however, we show that with overwhelming probability, these methods fail to find the optimal subset.
no code implementations • 29 Oct 2018 • Abolfazl Hashemi, Haris Vikalo
The problem of organizing data that evolves over time into clusters is encountered in a number of practical settings.
no code implementations • 19 Jul 2018 • Abolfazl Hashemi, Rasoul Shafipour, Haris Vikalo, Gonzalo Mateos
Then, we consider the Bayesian scenario where we formulate the sampling task as the problem of maximizing a monotone weak submodular function, and propose a randomized-greedy algorithm to find a sub-optimal subset of informative nodes.
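Generic randomized-greedy selection for a monotone (weakly) submodular set function can be sketched as follows; here f is a hypothetical objective and the procedure is illustrative, not the paper's exact algorithm:

```python
import random

def randomized_greedy(f, ground_set, k, s):
    # Each step samples s candidates from the remaining elements and adds
    # the one with the largest marginal gain f(S + [e]) - f(S).
    S, remaining = [], list(ground_set)
    for _ in range(k):
        cand = random.sample(remaining, min(s, len(remaining)))
        gains = [f(S + [e]) - f(S) for e in cand]
        best = cand[max(range(len(cand)), key=gains.__getitem__)]
        S.append(best)
        remaining.remove(best)
    return S
```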
no code implementations • 31 Oct 2017 • Abolfazl Hashemi, Rasoul Shafipour, Haris Vikalo, Gonzalo Mateos
We study the problem of sampling a bandlimited graph signal in the presence of noise, where the objective is to select a node subset of prescribed cardinality that minimizes the signal reconstruction mean squared error (MSE).
no code implementations • 31 Oct 2017 • Abolfazl Hashemi, Haris Vikalo
State-of-the-art algorithms for sparse subspace clustering perform spectral clustering on a similarity matrix typically obtained by representing each data point as a sparse combination of other points using either basis pursuit (BP) or orthogonal matching pursuit (OMP).
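For reference, the OMP step used to build such sparse self-representations can be sketched in a few lines of NumPy (illustrative, not the authors' implementation):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick the column of A most
    # correlated with the residual, then re-fit by least squares.
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```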
1 code implementation • 8 Aug 2016 • Abolfazl Hashemi, Haris Vikalo
We analyze the performance of AOLS and establish lower bounds on the probability of exact recovery for both noiseless and noisy random linear measurements.
no code implementations • 8 Aug 2016 • Abolfazl Hashemi, Haris Vikalo
We consider the Orthogonal Least-Squares (OLS) algorithm for the recovery of an $m$-dimensional $k$-sparse signal from a small number of noisy linear measurements.
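A naive OLS sketch makes the contrast with OMP explicit: each step re-fits a least-squares solution for every remaining column and keeps the one that most reduces the residual (unoptimized, and not the paper's code):

```python
import numpy as np

def ols(A, y, k):
    # Orthogonal Least-Squares: select the column whose inclusion yields
    # the smallest residual after re-fitting on the enlarged support.
    support = []
    for _ in range(k):
        best_j, best_res, best_x = None, np.inf, None
        for j in range(A.shape[1]):
            if j in support:
                continue
            x_s, *_ = np.linalg.lstsq(A[:, support + [j]], y, rcond=None)
            r = y - A[:, support + [j]] @ x_s
            if r @ r < best_res:
                best_j, best_res, best_x = j, r @ r, x_s
        support.append(best_j)
    x = np.zeros(A.shape[1])
    x[support] = best_x
    return x
```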
no code implementations • 22 Feb 2016 • Abolfazl Hashemi, Haris Vikalo
Sparse linear regression, which entails finding a sparse solution to an underdetermined system of linear equations, can formally be expressed as an $\ell_0$-constrained least-squares problem.
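Concretely, with measurements $y$, sensing matrix $A$, and sparsity level $k$ (notation assumed to match the entries above), the problem reads

$$\min_{x} \; \|y - Ax\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le k,$$

whose combinatorial $\ell_0$ constraint is what greedy schemes such as OLS and OMP approximate.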