no code implementations • 1 Feb 2024 • Andrei Muresanu, Anvith Thudi, Michael R. Zhang, Nicolas Papernot
Machine unlearning is a desirable operation as models get increasingly deployed on data with unknown provenance.
no code implementations • 1 Jul 2023 • Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot
Taken together, our evaluation shows that this novel DP-SGD analysis allows us to formally show that DP-SGD leaks significantly less privacy for many datapoints (when trained on common benchmarks) than the current data-independent guarantee suggests.
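DP-SGD's core mechanism, per-example gradient clipping followed by Gaussian noise, can be sketched as below. This is a minimal illustration under assumed names (`dp_sgd_step`, the toy gradients, and the hyperparameters are not from the paper), not the paper's data-dependent analysis itself.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD update: clip each example's gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound yields the privacy guarantee.
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped),
                             size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: per-example gradients for three examples on a 2-parameter model.
params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2]), np.array([-0.5, 0.5])]
new_params = dp_sgd_step(params, grads)
```

The clipping bound is what makes the noise scale meaningful: no single example can shift the averaged gradient by more than `clip_norm / batch_size`.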
no code implementations • 28 May 2023 • Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot
Training reliable deep learning models which avoid making overconfident but incorrect predictions is a longstanding challenge.
no code implementations • 6 Aug 2022 • Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot
They empirically argued for the benefit of this approach by showing that spoofing, i.e., computing a proof for a stolen model, is as expensive as obtaining the proof honestly by training the model.
no code implementations • 26 May 2022 • Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot
Selective classification is the task of rejecting inputs on which a model would predict incorrectly, trading off input-space coverage against model accuracy.
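The coverage/accuracy trade-off can be illustrated with the simplest selective classifier: abstain whenever the top softmax probability falls below a confidence threshold. This sketch is an assumed baseline illustration, not the method proposed in the paper.

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Predict the argmax class, with -1 marking abstentions below the threshold."""
    confidences = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[confidences < threshold] = -1  # abstain on low-confidence inputs
    return preds

def coverage_and_accuracy(preds, labels):
    """Coverage = fraction answered; accuracy is measured only on answered inputs."""
    answered = preds != -1
    coverage = answered.mean()
    accuracy = (preds[answered] == labels[answered]).mean() if answered.any() else float("nan")
    return coverage, accuracy

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
labels = np.array([0, 0, 1])
preds = selective_predict(probs, threshold=0.8)
cov, acc = coverage_and_accuracy(preds, labels)
# Two of three inputs clear the threshold, and both are classified correctly.
```

Raising the threshold lowers coverage but typically raises accuracy on the inputs that remain; sweeping it traces out the trade-off curve the abstract refers to.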
no code implementations • 24 Feb 2022 • Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
We find that this greatly reduces the bound on membership inference (MI) positive accuracy.
no code implementations • 22 Oct 2021 • Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot
Machine unlearning, i.e., having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right to be forgotten.
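In its simplest "exact" form, unlearning a point means retraining from scratch on the dataset with that point removed. The toy model and the helper names below (`train_mean_model`, `unlearn`) are illustrative assumptions used only to make the baseline concrete.

```python
import numpy as np

def train_mean_model(data):
    """Toy 'model': the mean of the training data."""
    return np.mean(data, axis=0)

def unlearn(data, forget_index):
    """Exact unlearning baseline: retrain on the dataset without the forgotten point."""
    retained = np.delete(data, forget_index, axis=0)
    return train_mean_model(retained)

data = np.array([[1.0], [2.0], [9.0]])
model = train_mean_model(data)         # trained on all three points
model_unlearned = unlearn(data, 2)     # model after forgetting the third point
```

Retraining from scratch is exact but expensive, which is why much of the literature (including the approximate-unlearning work listed here) studies cheaper alternatives and how to measure them.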
1 code implementation • 27 Sep 2021 • Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot
In this work, we first taxonomize approaches and metrics of approximate unlearning.
no code implementations • 20 Sep 2021 • Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
The application of machine learning (ML) in computer systems introduces not only many benefits but also risks to society.
2 code implementations • 9 Mar 2021 • Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
In particular, our analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform *at least* as much work as is needed for gradient descent itself.
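The proof-of-learning idea can be sketched as: the prover logs periodic weight checkpoints along with the data used between them, and a verifier re-executes a sampled training segment to check that the claimed end checkpoint is reproduced. The helper names, the linear-regression loss, and the tolerance below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def sgd_segment(w, batches, lr=0.1):
    """Replay SGD on a linear-regression loss over the given (X, y) batches."""
    for X, y in batches:
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def verify_segment(w_start, w_end_claimed, batches, tol=1e-8):
    """Re-run the segment from the logged start checkpoint; accept if the end matches."""
    w_end = sgd_segment(w_start.copy(), batches)
    return np.linalg.norm(w_end - w_end_claimed) <= tol

rng = np.random.default_rng(0)
X, y = rng.normal(size=(8, 2)), rng.normal(size=8)
w0 = np.zeros(2)
w1 = sgd_segment(w0.copy(), [(X, y)])            # honest prover's logged checkpoint
ok = verify_segment(w0, w1, [(X, y)])            # verifier accepts the honest proof
forged = verify_segment(w0, w1 + 0.1, [(X, y)])  # a spoofed checkpoint is rejected
```

Because the verifier replays actual gradient steps, passing verification on a forged trace requires finding update sequences that reproduce the checkpoints, which is the work the abstract argues costs at least as much as honest training.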