1 code implementation • CVPR 2023 • Yulong Tian, Fnu Suya, Anshuman Suri, Fengyuan Xu, David Evans
We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score $> 0.9$), without incurring significant performance loss on the main task.
1 code implementation • 30 Apr 2021 • Yulong Tian, Fnu Suya, Fengyuan Xu, David Evans
In a backdoor attack on a machine learning model, an adversary produces a model that performs well on normal inputs but outputs targeted misclassifications on inputs containing a small trigger pattern.
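The trigger-pattern idea described in this abstract can be illustrated with a minimal sketch. The `add_trigger` helper, the array shapes, and the patch placement below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def add_trigger(image, patch, row, col):
    """Return a copy of `image` with a small trigger patch stamped at (row, col).

    Hypothetical helper: a backdoored model would be trained to emit the
    attacker's target label whenever this patch is present, while behaving
    normally on clean inputs.
    """
    patched = image.copy()
    h, w = patch.shape
    patched[row:row + h, col:col + w] = patch
    return patched

# Illustrative 8x8 grayscale "image" and a 2x2 white trigger in one corner.
clean = np.zeros((8, 8))
trigger = np.ones((2, 2))
poisoned = add_trigger(clean, trigger, row=6, col=6)
```

The clean input is left untouched; only the poisoned copy carries the trigger, which is what lets the backdoored model pass standard accuracy checks on normal data.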