1 code implementation • 21 Nov 2022 • Yann Fraboni, Martin Van Waerebeke, Kevin Scaman, Richard Vidal, Laetitia Kameni, Marco Lorenzi
Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming to remove the contribution of a given data point from a training procedure.
no code implementations • 21 Jun 2022 • Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi
We show that our general framework applies to existing optimization schemes including centralized learning, FedAvg, asynchronous FedAvg, and FedBuff.
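Among the schemes listed above, FedAvg is the canonical baseline: each selected client runs a few local gradient steps, and the server averages the returned models weighted by client data size. A minimal sketch (illustrative only, not the paper's implementation; the least-squares local objective and all names are assumptions for the example):

```python
import numpy as np

def fedavg_round(global_model, client_data, lr=0.05, local_steps=5):
    """One synchronous FedAvg round on a toy least-squares problem.

    Each client starts from the global model, runs `local_steps` of
    gradient descent on its own (X, y), and the server returns the
    data-size-weighted average of the local models.
    """
    updates, sizes = [], []
    for X, y in client_data:
        w = global_model.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        updates.append(w)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))
```

Asynchronous FedAvg and FedBuff differ mainly in *when* client updates are aggregated (immediately on arrival, or buffered), not in the local-step structure sketched here.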
1 code implementation • 26 Jul 2021 • Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi
In this work, we provide a general theoretical framework to quantify the impact of a client sampling scheme and of the clients' heterogeneity on federated optimization.
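To make the notion of a "client sampling scheme" concrete, here is a sketch of two standard choices and the aggregation weights that keep the averaged update unbiased. This is an illustrative example, not the paper's framework; the function and scheme names are assumptions:

```python
import numpy as np

def sample_clients(n_clients, m, importances, scheme="uniform", rng=None):
    """Sample m of n_clients and return (indices, aggregation weights).

    'uniform': sample without replacement; each sampled client i gets
    weight p_i * n/m, so the expected total weight is sum_i p_i = 1.
    'multinomial': sample with replacement with probability p_i; each
    draw gets weight 1/m, so the total weight is exactly 1.
    """
    rng = rng or np.random.default_rng(0)
    p = np.asarray(importances, dtype=float)
    p = p / p.sum()  # normalize client importances (e.g. data-size shares)
    if scheme == "uniform":
        idx = rng.choice(n_clients, size=m, replace=False)
        weights = p[idx] * n_clients / m
    elif scheme == "multinomial":
        idx = rng.choice(n_clients, size=m, replace=True, p=p)
        weights = np.full(m, 1.0 / m)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return idx, weights
```

The variance of the aggregated update under such schemes is exactly the kind of quantity a theoretical analysis of client sampling would characterize.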
1 code implementation • 12 May 2021 • Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi
This work addresses the problem of optimizing communications between server and clients in federated learning (FL).
1 code implementation • 21 Jun 2020 • Yann Fraboni, Richard Vidal, Marco Lorenzi
Free-rider attacks against federated learning consist in feigning participation in the federated learning process with the goal of obtaining the final aggregated model without actually contributing any data.
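A simple way to picture such an attack: instead of training, the free-rider returns the global model it received, perturbed with small noise so the "update" looks like the output of stochastic local training. A minimal sketch under those assumptions (illustrative; not the specific attack or defense studied in the paper):

```python
import numpy as np

def free_rider_update(global_model, noise_std=0.01, rng=None):
    """Fabricate a client update without using any data.

    The free-rider echoes the received global model plus Gaussian noise,
    mimicking the variability of genuine SGD updates while contributing
    no information from local data.
    """
    rng = rng or np.random.default_rng(0)
    return global_model + noise_std * rng.standard_normal(global_model.shape)
```

Because the fabricated update stays close to the current global model, weighted averaging dilutes honest clients' progress only slightly, which is what makes the attack hard to spot from the server side.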