The Influence of Dropout on Membership Inference in Differentially Private Models

16 Mar 2021  ·  Erick Galinkin

Differentially private models seek to protect the privacy of the data a model is trained on, making differential privacy an important component of model security and privacy. At the same time, data scientists and machine learning engineers seek to use uncertainty quantification methods to ensure models are as useful and actionable as possible. We explore the tension between uncertainty quantification via dropout and privacy by conducting membership inference attacks against models with and without differential privacy. We find that a large dropout rate slightly increases a model's risk of succumbing to membership inference attacks in all cases, including in differentially private models.
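As a rough illustration of the experimental setup the abstract describes, the sketch below trains a classifier with a configurable dropout rate and then runs a simple loss-threshold membership inference attack (in the style of Yeom et al., 2018). This is not the paper's exact architecture, data, or attack code: the synthetic data, random labels (which force memorization, the regime where membership inference succeeds), network, and threshold rule are all illustrative assumptions, and the paper's differentially private training (e.g., DP-SGD) would replace the plain SGD loop here.

```python
# Hypothetical sketch of a dropout model under a loss-threshold
# membership inference attack. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_data(n=128, d=20):
    # Synthetic points with random labels: the model can only fit them
    # by memorizing, which is what membership inference exploits.
    x = torch.randn(n, d)
    y = torch.randint(0, 2, (n,))
    return x, y

x_member, y_member = make_data()        # used for training ("members")
x_nonmember, y_nonmember = make_data()  # never seen by the model

class MLP(nn.Module):
    def __init__(self, d=20, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 64), nn.ReLU(),
            nn.Dropout(p_drop),  # dropout rate under study
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.net(x)

model = MLP(p_drop=0.5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Plain (non-private) training; a DP variant would swap in a
# DP-SGD optimizer, e.g. via Opacus or TensorFlow Privacy.
model.train()
for _ in range(300):
    opt.zero_grad()
    loss = F.cross_entropy(model(x_member), y_member)
    loss.backward()
    opt.step()

# Loss-threshold attack: predict "member" when per-example loss falls
# below the mean training loss. Balanced accuracy near 0.5 means the
# attack fails; well above 0.5 means membership leaks.
model.eval()
with torch.no_grad():
    loss_mem = F.cross_entropy(model(x_member), y_member, reduction="none")
    loss_non = F.cross_entropy(model(x_nonmember), y_nonmember, reduction="none")
threshold = loss_mem.mean()
attack_acc = 0.5 * ((loss_mem <= threshold).float().mean()
                    + (loss_non > threshold).float().mean())
print(f"membership inference attack accuracy: {attack_acc:.3f}")
```

Varying p_drop in this sketch, with and without a DP optimizer, mirrors the comparison the paper reports on at full scale.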
