1 code implementation • 29 Apr 2023 • Thuy Dung Nguyen, Anh Duy Nguyen, Kok-Seng Wong, Huy Hieu Pham, Thanh Hung Nguyen, Phi Le Nguyen, Truong Thao Nguyen
Federated learning (FL) enables multiple clients to train a model without compromising sensitive data.
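The FedAvg-style round below is a minimal sketch of this idea, using plain NumPy and a linear-regression stand-in model; only weight vectors, never raw samples, leave a client. The names (`local_update`, `fedavg_round`) are illustrative, not from the papers listed here.

```python
# Minimal FedAvg-style sketch (illustrative names, NumPy linear regression).
# Only weight vectors cross the network; raw client data never does.
import numpy as np

def local_update(w, client_data, lr=0.1):
    """One local SGD step on a hypothetical client's private (X, y) data."""
    X, y = client_data
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean-squared error
    return w - lr * grad

def fedavg_round(w, clients):
    """Server averages client updates, weighted by local dataset size."""
    updates = [local_update(w, data) for data in clients]
    sizes = [len(data[1]) for data in clients]
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(20):
    w = fedavg_round(w, clients)
```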
1 code implementation • 21 Feb 2023 • Nang Hung Nguyen, Duc Long Nguyen, Trong Bang Nguyen, Thanh-Hung Nguyen, Huy Hieu Pham, Truong Thao Nguyen, Phi Le Nguyen
Through an in-depth analysis of a classification model's penultimate layer, we introduce a metric that quantifies the similarity between two clients' data distributions without violating their privacy.
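A hedged illustration of how such a metric could look: each client shares only the mean of its penultimate-layer activations, and clients are compared by the cosine similarity of these summaries. This is an assumed instantiation for clarity, not the paper's exact metric; `Net` and `client_signature` are hypothetical.

```python
# Hypothetical sketch: compare clients via penultimate-layer statistics.
# Not the paper's exact metric; it only illustrates that summary activations,
# rather than raw samples, can be compared across clients.
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(        # everything up to the
            nn.Linear(20, 64), nn.ReLU(),     # penultimate layer
            nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 10)         # classification head

    def forward(self, x):
        return self.head(self.features(x))

def client_signature(model, loader):
    """Mean penultimate-layer activation over a client's local batches."""
    with torch.no_grad():
        acts = [model.features(x) for x, _ in loader]
    return torch.cat(acts).mean(dim=0)

def similarity(sig_a, sig_b):
    """Cosine similarity of two clients' signatures (summaries only)."""
    return torch.cosine_similarity(sig_a, sig_b, dim=0).item()

net = Net()
loader_a = [(torch.randn(16, 20), torch.zeros(16))]  # stand-ins for local data
loader_b = [(torch.randn(16, 20), torch.zeros(16))]
print(similarity(client_signature(net, loader_a), client_signature(net, loader_b)))
```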
1 code implementation • 20 Nov 2022 • Quan Nguyen, Hieu H. Pham, Kok-Seng Wong, Phi Le Nguyen, Truong Thao Nguyen, Minh N. Do
FedDCT reduces memory requirements, allowing low-end devices to participate in FL.
no code implementations • 4 Aug 2022 • Nang Hung Nguyen, Phi Le Nguyen, Duc Long Nguyen, Trung Thanh Nguyen, Thuy Dung Nguyen, Huy Hieu Pham, Truong Thao Nguyen
The uneven distribution of local data across different edge devices (clients) slows model training and reduces accuracy in federated learning.
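To make "uneven distribution" concrete, the sketch below simulates label-skewed clients with the Dirichlet partitioning scheme widely used in FL benchmarks; the scheme is assumed here for illustration and is not necessarily the one these papers use.

```python
# Simulating label-skewed clients with Dirichlet partitioning, a scheme
# widely used in FL benchmarks (assumed here for illustration).
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices among clients; smaller alpha -> stronger skew."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))  # per-client share
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx

labels = np.random.default_rng(1).integers(0, 10, size=10_000)
parts = dirichlet_partition(labels, n_clients=8, alpha=0.3)
```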
no code implementations • 19 Apr 2021 • Albert Njoroge Kahira, Truong Thao Nguyen, Leonardo Bautista Gomez, Ryousei Takano, Rosa M Badia, Mohamed Wahib
Deep Neural Network (DNN) frameworks use distributed training to shorten time to convergence and alleviate memory-capacity limitations when training large models and/or using high-dimensional inputs.
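A minimal sketch of synchronous data parallelism, the pattern this line refers to: each worker computes a gradient on its own data shard, the gradients are averaged (an all-reduce in real frameworks), and every replica applies the same update. The linear-regression setup is a stand-in, not any framework's API.

```python
# Synchronous data parallelism in miniature: each "worker" holds one shard,
# gradients are averaged (an all-reduce in real frameworks), and every
# replica applies the same update. Linear regression is a stand-in model.
import numpy as np

def worker_grad(w, shard):
    X, y = shard
    return 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient on the local shard

rng = np.random.default_rng(0)
X, y = rng.normal(size=(4096, 8)), rng.normal(size=4096)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 workers

w = np.zeros(8)
for _ in range(100):
    grads = [worker_grad(w, s) for s in shards]  # parallel in a real system
    w -= 0.05 * np.mean(grads, axis=0)           # averaged step, same everywhere
```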
no code implementations • 26 Aug 2020 • Mohamed Wahib, Haoyu Zhang, Truong Thao Nguyen, Aleksandr Drozd, Jens Domke, Lingqi Zhang, Ryousei Takano, Satoshi Matsuoka
An alternative solution is to use out-of-core methods instead of, or in addition to, data parallelism.
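As one concrete out-of-core example (assuming PyTorch >= 1.10, and not necessarily the paper's own mechanism), the hook below spills activations saved for the backward pass to host memory during the forward pass and pages them back when backward needs them.

```python
# One out-of-core technique: spill activations saved for the backward pass
# to host (CPU) memory and page them back when needed. PyTorch >= 1.10
# provides this as a saved-tensor hook; the paper's mechanism may differ.
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(8)])
x = torch.randn(64, 1024, requires_grad=True)

# Inside this context, tensors stashed for backward live in CPU memory,
# shrinking the device-memory footprint at the cost of extra transfers
# (pass pin_memory=True on GPU systems to speed those transfers up).
with torch.autograd.graph.save_on_cpu():
    loss = model(x).sum()
loss.backward()
```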