1 code implementation • 9 Apr 2024 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server.
no code implementations • 24 Mar 2024 • Timur Ibrayev, Amitangshu Mukherjee, Sai Aparna Aketi, Kaushik Roy
Specifically, the proposed framework models the following mechanisms: 1) a ventral (what) stream focusing on the input regions perceived by the foveal region of the eye (foveation), 2) a dorsal (where) stream providing visual guidance, and 3) iterative processing of the two streams to calibrate visual focus and process the sequence of focused image patches.
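The two-stream loop described above can be illustrated with a minimal sketch. This is not the paper's model: the saliency-based "where" selection, the fixed-size patch crop, and the inhibition-of-return suppression are all simplifying assumptions standing in for the learned streams.

```python
import numpy as np

def dorsal_where(saliency):
    """Dorsal ('where') stream: pick the most salient location as the
    next fixation point (a toy stand-in for learned visual guidance)."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def ventral_what(image, fixation, patch_size=8):
    """Ventral ('what') stream: crop a foveated patch around the
    fixation point for detailed processing."""
    r, c = fixation
    h = patch_size // 2
    r0, c0 = max(r - h, 0), max(c - h, 0)
    return image[r0:r0 + patch_size, c0:c0 + patch_size]

def iterate_glimpses(image, saliency, steps=3, patch_size=8):
    """Alternate the two streams: fixate, extract a foveated patch,
    then suppress the attended region so the next glimpse moves on."""
    patches = []
    sal = saliency.astype(float).copy()
    for _ in range(steps):
        fix = dorsal_where(sal)
        patches.append(ventral_what(image, fix, patch_size))
        r, c = fix
        h = patch_size // 2
        sal[max(r - h, 0):r + h, max(c - h, 0):c + h] = -np.inf
    return patches
```

The sequence of returned patches is what a downstream classifier would consume, one glimpse at a time.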
1 code implementation • 5 Mar 2024 • Sai Aparna Aketi, Sakshi Choudhary, Kaushik Roy
State-of-the-art decentralized learning algorithms typically require the data distribution to be Independent and Identically Distributed (IID).
1 code implementation • 24 Oct 2023 • Sai Aparna Aketi, Kaushik Roy
The current state-of-the-art decentralized learning algorithms mostly assume the data distribution to be Independent and Identically Distributed (IID).
1 code implementation • NeurIPS 2023 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.
no code implementations • 9 Apr 2023 • Deepak Ravikumar, Gobinda Saha, Sai Aparna Aketi, Kaushik Roy
The goal of IDKD is to homogenize the data distribution across the nodes.
1 code implementation • 27 Mar 2023 • Sakshi Choudhary, Sai Aparna Aketi, Gobinda Saha, Kaushik Roy
Decentralized learning allows serverless training with spatially distributed data.
1 code implementation • 28 Sep 2022 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
Our experiments demonstrate that \textit{NGC} and \textit{CompNGC} outperform the existing SoTA decentralized learning algorithm by up to $6\%$ on non-IID data, with significantly lower compute and memory requirements.
1 code implementation • 17 Nov 2021 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
In this paper, we propose low precision decentralized training, which aims to reduce the computational complexity and communication cost of decentralized training, and prove its convergence.
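The core communication-saving idea can be sketched as a gossip-averaging step in which each node sends only a quantized copy of its parameters to its neighbors. This is a minimal illustration, not the paper's algorithm: the uniform quantizer, the 8-bit default, and the mixing-matrix convention are assumptions for the sketch.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Uniform symmetric quantization of a vector to num_bits levels."""
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (num_bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def gossip_step(params, mixing, num_bits=8):
    """One decentralized averaging step: each node mixes its own
    full-precision parameters with the quantized (low-precision)
    copies received from its neighbors, weighted by the doubly
    stochastic mixing matrix."""
    quantized = [quantize(p, num_bits) for p in params]
    new_params = []
    for i, p in enumerate(params):
        mixed = mixing[i, i] * p  # keep own copy at full precision
        for j in range(len(params)):
            if j != i and mixing[i, j] > 0:
                mixed = mixed + mixing[i, j] * quantized[j]
        new_params.append(mixed)
    return new_params
```

Repeating this step drives all nodes toward the network-wide average while each link carries only `num_bits` per coordinate instead of 32.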
no code implementations • 10 Feb 2021 • Sai Aparna Aketi, Amandeep Singh, Jan Rabaey
Current deep learning (DL) systems rely on a centralized computing paradigm which limits the amount of available training data, increases system latency, and adds privacy and security constraints.
no code implementations • 25 Feb 2020 • Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy
To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy-efficient detection of natural errors.
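The detection idea can be illustrated with a small sketch: classifiers attached at hidden layers vote, and an input is flagged when too few of them agree with the final prediction. The agreement-ratio rule and the 0.5 threshold are assumptions for illustration, not the paper's exact decision criterion.

```python
import numpy as np

def detect_natural_error(hidden_logits, final_logits, threshold=0.5):
    """Flag an input as a potential natural error when the ensemble of
    hidden-layer classifiers disagrees with the final prediction.

    hidden_logits: list of logit vectors, one per hidden-layer classifier
    final_logits:  logits of the network's final classifier
    threshold:     minimum fraction of hidden classifiers that must agree
                   with the final prediction for the input to be trusted
    """
    final_pred = int(np.argmax(final_logits))
    agreements = [int(np.argmax(h)) == final_pred for h in hidden_logits]
    agreement_ratio = sum(agreements) / len(agreements)
    return agreement_ratio < threshold  # True => flag as likely error
```

Because the hidden-layer classifiers reuse intermediate activations already computed in the forward pass, detection adds little extra compute.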
1 code implementation • 23 Feb 2020 • Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
To address all the above issues, we present a simple yet effective gradual channel pruning while training methodology, using a novel data-driven metric referred to as the feature relevance score.
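The pruning step can be sketched as follows. The mean-absolute-activation score below is a hypothetical stand-in for the paper's feature relevance score, and the fixed prune fraction per step is likewise an assumption; only the overall shape (score channels on data, gradually remove the lowest-scoring ones during training) reflects the described method.

```python
import numpy as np

def feature_relevance_scores(activations):
    """Toy per-channel relevance: mean absolute activation over the
    batch and spatial dimensions (a stand-in for the paper's
    data-driven feature relevance score).

    activations: array of shape (batch, channels, height, width)
    """
    return np.mean(np.abs(activations), axis=(0, 2, 3))

def channels_to_prune(activations, prune_fraction=0.25):
    """Return indices of the lowest-relevance channels to remove at
    this step of gradual pruning-while-training."""
    scores = feature_relevance_scores(activations)
    k = int(len(scores) * prune_fraction)
    return np.argsort(scores)[:k]
```

Calling `channels_to_prune` periodically during training, with a small `prune_fraction` each time, yields the gradual schedule rather than one-shot pruning.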