Fidel: Reconstructing Private Training Samples from Weight Updates in Federated Learning

1 Jan 2021 · David Enthoven, Zaid Al-Ars

With the increasing number of data collectors such as smartphones, immense amounts of data are available. Federated learning was developed to allow for distributed learning on a massive scale whilst still protecting each user's privacy. This privacy claim rests on the notion that the centralized server has no access to a client's data, only to the client's model updates. In this paper, we evaluate a novel attack method within regular federated learning, which we name the First Dense Layer Attack (Fidel). We discuss the methodology of the attack and, as a proof of viability, show how it can be used to great effect against densely connected networks and convolutional neural networks. We evaluate some key design decisions and show that the use of ReLU and Dropout is detrimental to the privacy of a client's local dataset. We show how to recover, on average, twenty out of thirty private data samples from the model update of a client employing a fully connected neural network, with very little computational effort required. Similarly, we show that over thirteen out of twenty samples can be recovered from a convolutional neural network update. An open source implementation of this attack is available at https://github.com/Davidenthoven/Fidel-Reconstruction-Demo
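The attack name points at a basic property of gradients in the first dense layer: for a layer y = Wx + b evaluated on a single sample, the weight gradient is the outer product of the output-side error and the input, while the bias gradient equals that error alone, so dividing a weight-gradient row by its bias gradient recovers the input. The sketch below illustrates only this core observation; the function name `reconstruct_inputs`, the single-sample setting, and the toy shapes are illustrative assumptions, not the authors' implementation, which lives in the linked repository.

```python
import numpy as np

def reconstruct_inputs(dW, db, eps=1e-8):
    """Illustrative single-sample reconstruction from first-dense-layer gradients.

    For y = W x + b, the per-sample gradients are dL/dW = (dL/dy) x^T and
    dL/db = dL/dy, so every row of dW with a non-negligible bias gradient
    is a scaled copy of the private input x.
    dW: (n_neurons, n_inputs) weight gradient (or weight update) of the layer.
    db: (n_neurons,) bias gradient (or bias update) of the same layer.
    Returns one candidate reconstruction per neuron that was active.
    """
    candidates = [row / scale for row, scale in zip(dW, db) if abs(scale) > eps]
    return np.array(candidates)

# Toy check: fabricate the gradients a server would observe for one
# hypothetical flattened 28x28 input and verify the recovery.
rng = np.random.default_rng(0)
x = rng.random(784)                      # private training sample
error = rng.standard_normal(32)          # dL/dy at the layer's 32 neurons
dW, db = np.outer(error, x), error       # observable first-layer gradients
recovered = reconstruct_inputs(dW, db)
print(np.allclose(recovered[0], x))      # each candidate equals the input
```

With more than one sample in the local batch the observed update is a sum of such outer products, which is why the paper reports recovering only a fraction of the samples per update rather than all of them.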
