Search Results for author: Yann Fraboni

Found 5 papers, 4 papers with code

SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization

1 code implementation • 21 Nov 2022 • Yann Fraboni, Martin Van Waerebeke, Kevin Scaman, Richard Vidal, Laetitia Kameni, Marco Lorenzi

Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming to remove the contribution of a given data point from a training procedure.

Machine Unlearning
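As a rough illustration of the unlearning goal described above (the exact-retraining baseline that efficient methods such as SIFU aim to approximate, not the SIFU algorithm itself), the sketch below removes one client's contribution by refitting without its data. All function and variable names are illustrative assumptions.

```python
# Minimal sketch of the unlearning goal: remove one client's contribution
# by retraining without its data. This is the exact-unlearning baseline,
# not the paper's method; names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def fit_least_squares(X, y):
    """Closed-form least-squares fit of a linear model."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three "clients", each holding a small local dataset.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

# Model trained on the pooled data of all clients.
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
w_full = fit_least_squares(X_all, y_all)

# "Unlearn" client 0 by retraining on the remaining clients only.
X_keep = np.vstack([X for X, _ in clients[1:]])
y_keep = np.concatenate([y for _, y in clients[1:]])
w_unlearned = fit_least_squares(X_keep, y_keep)

print("parameter shift after forgetting client 0:",
      np.linalg.norm(w_full - w_unlearned))
```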

A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates

no code implementations • 21 Jun 2022 • Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi

We show that our general framework applies to existing optimization schemes including centralized learning, FedAvg, asynchronous FedAvg, and FedBuff.

Federated Learning
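For context on the optimization schemes named in the abstract above, here is a minimal synchronous FedAvg round in NumPy (local gradient steps on each client, then a data-size-weighted average of the local models). This is a generic textbook sketch under assumed names, not the paper's unified asynchronous framework.

```python
# Minimal sketch of one synchronous FedAvg round (textbook version).
import numpy as np

rng = np.random.default_rng(1)

def local_sgd(w, X, y, lr=0.1, steps=10):
    """A few full-batch gradient steps on a local least-squares loss."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Heterogeneous clients with different local dataset sizes.
clients = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (10, 30, 60)]
w_global = np.zeros(5)

for _ in range(5):
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    weights = sizes / sizes.sum()
    # FedAvg: aggregate local models weighted by local dataset size.
    w_global = sum(p * w for p, w in zip(weights, local_models))

print("global model after 5 rounds:", w_global)
```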

A General Theory for Client Sampling in Federated Learning

1 code implementation • 26 Jul 2021 • Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi

In this work, we provide a general theoretical framework to quantify the impact of a client sampling scheme and of client heterogeneity on federated optimization.

Federated Learning
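To make the notion of a client sampling scheme concrete, the sketch below draws a subset of participating clients per round, either uniformly or with probabilities proportional to local dataset size. It is a generic illustration of client sampling, not the paper's theoretical framework, and all names are assumptions.

```python
# Minimal sketch of client sampling: each round, only a subset of clients
# participates, drawn according to a chosen sampling scheme.
import numpy as np

rng = np.random.default_rng(2)

n_clients, m = 10, 3                      # total clients, sampled per round
sizes = rng.integers(10, 100, size=n_clients).astype(float)
p_uniform = np.full(n_clients, 1.0 / n_clients)
p_by_size = sizes / sizes.sum()           # sampling proportional to data size

def sample_clients(probs, m):
    """Draw m distinct client indices according to the given scheme."""
    return rng.choice(len(probs), size=m, replace=False, p=probs)

for scheme, probs in [("uniform", p_uniform), ("size-weighted", p_by_size)]:
    chosen = sample_clients(probs, m)
    print(scheme, "round participants:", chosen)
```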

Free-rider Attacks on Model Aggregation in Federated Learning

1 code implementation • 21 Jun 2020 • Yann Fraboni, Richard Vidal, Marco Lorenzi

Free-rider attacks against federated learning consist in pretending to participate in the federated learning process in order to obtain the final aggregated model without actually contributing any data.

Federated Learning
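To illustrate the attack described above, the sketch below shows a free-riding client that performs no local training and simply returns the received global model, optionally perturbed with small Gaussian noise so its "update" does not look identical to the broadcast model. This is a simplified rendering of the general idea, not the paper's formal attack model; all names are assumptions.

```python
# Minimal sketch of a free-rider client in federated averaging: it skips
# local training and sends back a disguised copy of the global model.
import numpy as np

rng = np.random.default_rng(3)

def honest_update(w_global, X, y, lr=0.1, steps=10):
    """Honest client: a few gradient steps on its local least-squares loss."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def free_rider_update(w_global, noise_std=0.01):
    """Free-rider: no data, no training; returns a noisy copy of the model."""
    return w_global + rng.normal(scale=noise_std, size=w_global.shape)

w_global = np.zeros(5)
X, y = rng.normal(size=(50, 5)), rng.normal(size=50)

for _ in range(3):
    updates = [honest_update(w_global, X, y), free_rider_update(w_global)]
    w_global = np.mean(updates, axis=0)   # server averages both contributions

print("aggregated model also obtained by the free-rider:", w_global)
```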
