1 code implementation • 25 Mar 2024 • Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
As a result, we may be able to craft more potent poisons by carefully choosing the base samples.
no code implementations • 11 Nov 2022 • Beltrán Labrador, Guanlong Zhao, Ignacio López Moreno, Angelo Scorza Scarpati, Liam Fowl, Quan Wang
In this paper, we present a novel approach to adapt a sequence-to-sequence Transformer-Transducer ASR system to the keyword spotting (KWS) task.
1 code implementation • 17 Oct 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein
Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates.
no code implementations • 19 Apr 2022 • Pedro Sandoval-Segura, Vasu Singla, Liam Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein
We advocate for evaluating poisons in terms of peak test accuracy.
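Concretely, the advocated protocol tracks clean test accuracy after every epoch of training on the poisoned data and reports the maximum rather than the final value, since a poison that only degrades the last few epochs looks far stronger than it is. A minimal sketch of such an evaluation loop (`train_one_epoch` and `evaluate` are hypothetical helpers, not from the paper):

```python
# Hedged sketch: evaluate a poison by peak, not final, test accuracy.
# `train_one_epoch` and `evaluate` are hypothetical placeholder helpers.

def peak_test_accuracy(model, poisoned_train_loader, test_loader, epochs=100):
    accuracies = []
    for _ in range(epochs):
        train_one_epoch(model, poisoned_train_loader)    # one pass over poisoned data
        accuracies.append(evaluate(model, test_loader))  # clean test accuracy
    # Reporting max() prevents a poison from taking credit for late-epoch decay.
    return max(accuracies)
```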
1 code implementation • CVPR 2022 • Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping-Yeh Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein
We also use decision boundary methods to visualize double descent phenomena.
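As a rough illustration of how a decision boundary can be rasterized (the paper itself slices 2D planes through triplets of real images; this toy sketch uses synthetic 2D data instead):

```python
# Hedged toy sketch: rasterize a classifier's decision regions over a 2D grid.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)        # decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, s=10)   # training points
plt.show()
```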
1 code implementation • 1 Feb 2022 • Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein
Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.
1 code implementation • 29 Jan 2022 • Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
A central tenet of federated learning (FL), which trains models without centralizing user data, is privacy.
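What the server actually observes in FL is per-user model updates rather than raw data, and those updates are precisely the surface that privacy attacks exploit. A generic FedAvg round (a sketch of the standard protocol, not this paper's attack):

```python
# Hedged sketch of one standard FedAvg round: raw data never leaves the users,
# but the server sees every user's updated parameters.
import copy
import torch
import torch.nn.functional as F

def local_update(model, batch, lr=0.1):
    x, y = batch
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad          # one local SGD step
    return model

def fedavg_round(server_model, user_batches):
    locals_ = [local_update(copy.deepcopy(server_model), b) for b in user_batches]
    with torch.no_grad():
        for name, p in server_model.named_parameters():
            # The server aggregates user parameters -- this is the attack surface.
            p.copy_(torch.stack([dict(m.named_parameters())[name]
                                 for m in locals_]).mean(0))
    return server_model
```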
no code implementations • 3 Jan 2022 • Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor
Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have succeeded via perturbations that control the victim's policy and rewards.
2 code implementations • ICLR 2022 • Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, Tom Goldstein
Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency.
2 code implementations • NeurIPS 2021 • Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.
1 code implementation • 16 Jun 2021 • Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, Tom Goldstein
In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all.
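For context, a standard patch backdoor can be written in generic notation (not specific to this paper): with trigger patch \(p\) and binary mask \(m\), the attacker wants a model \(F_\theta\) satisfying

\[
\tilde{x} = (1 - m) \odot x + m \odot p, \qquad F_\theta(\tilde{x}) = y_{\mathrm{adv}}, \qquad F_\theta(x) = y \ \text{for clean } x.
\]

A hidden-trigger attack must induce this behavior even though \(\tilde{x}\) never appears in the training set.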
1 code implementation • 2 Mar 2021 • Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.
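For reference, mixup, the augmentation whose conjectured privacy InstaHide builds on, blends random pairs of examples and labels with a Beta-distributed coefficient. A minimal sketch (InstaHide additionally mixes with a private/public pool and applies random pixel-wise sign flips, omitted here):

```python
# Minimal sketch of standard mixup; InstaHide's extra steps are omitted.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0):
    lam = np.random.beta(alpha, alpha)   # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2        # blended input
    y = lam * y1 + (1 - lam) * y2        # blended (soft) label
    return x, y
```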
1 code implementation • 26 Feb 2021 • Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein
Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.
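This threat model is commonly formalized as a bilevel problem: the attacker picks perturbations \(\delta\) to the training set so that the model trained on the tampered data minimizes the attacker's objective,

\[
\min_{\delta \in \mathcal{C}} \; \mathcal{L}_{\mathrm{adv}}\!\big(\theta^{*}(\delta)\big)
\quad \text{s.t.} \quad
\theta^{*}(\delta) = \arg\min_{\theta} \sum_{i} \mathcal{L}\big(x_i + \delta_i,\, y_i;\, \theta\big),
\]

where the constraint set \(\mathcal{C}\) typically bounds each \(\delta_i\) (e.g., in \(\ell_\infty\) norm) and limits how many samples may be touched.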
no code implementations • 16 Feb 2021 • Liam Fowl, Ping-Yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, Tom Goldstein
Large organizations such as social media companies continually release data, for example, user images.
1 code implementation • 18 Nov 2020 • Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
Data poisoning and backdoor attacks manipulate victim models by maliciously modifying training data.
no code implementations • 13 Oct 2020 • Liam Fowl, Micah Goldblum, Arjun Gupta, Amr Sharaf, Tom Goldstein
We validate and deploy this metric on both images and text.
2 code implementations • ICLR 2021 • Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label": it succeeds against new, randomly initialized models and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
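The two qualifiers can be made precise. Writing the poisoned training set as \(\{(x_i + \delta_i, y_i)\}_{i=1}^{N}\), "clean label" means the labels are untouched and each perturbation is small, while only a budgeted subset is modified:

\[
\|\delta_i\|_{\infty} \le \varepsilon \ \ \text{for all } i, \qquad |\{\, i : \delta_i \neq 0 \,\}| \le \alpha N,
\]

with \(\varepsilon\) small enough to be near-imperceptible and \(\alpha\) a small fraction of the \(N\) training points; "from scratch" means the attack must survive full retraining from random initialization.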
no code implementations • 20 Apr 2020 • Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu
We first demonstrate successful transfer attacks against a victim network using only its feature extractor.
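A generic feature-space attack of this flavor (a sketch under the assumption of white-box access to the victim's feature extractor `f`, not the paper's exact procedure) perturbs the input until its representation moves far from the original:

```python
# Hedged sketch: PGD in feature space, assuming access to the victim's
# feature extractor `f` only -- no classification head is needed.
import torch

def feature_space_attack(f, x, eps=8/255, step=2/255, iters=20):
    # Random start inside the l_inf ball (the gradient vanishes at delta = 0).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        # Push the perturbed input's features away from the clean features.
        loss = (f(x + delta) - f(x).detach()).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)          # stay within the l_inf budget
        delta.grad.zero_()
    return (x + delta).detach()
```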
2 code implementations • NeurIPS 2020 • W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein
Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought to be intractable for deep models.
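The intractability comes from differentiating through an entire training run; MetaPoison's workaround is meta-learning-style unrolling, backpropagating through only a few inner SGD steps (the paper additionally ensembles over many partially trained models). A schematic sketch with hypothetical `train_loss` and `adversarial_loss` helpers:

```python
# Hedged schematic of K-step unrolling for the bilevel poisoning objective.
# `train_loss` and `adversarial_loss` are hypothetical helpers; `poisons`
# must be a tensor with requires_grad=True.
import torch

def unrolled_poison_grad(poisons, targets, params, K=2, inner_lr=0.1):
    theta = [p.clone() for p in params]
    for _ in range(K):
        inner = train_loss(theta, poisons)    # loss on the poisoned batch
        grads = torch.autograd.grad(inner, theta, create_graph=True)
        theta = [p - inner_lr * g for p, g in zip(theta, grads)]  # differentiable step
    outer = adversarial_loss(theta, targets)  # attacker's objective after K steps
    return torch.autograd.grad(outer, poisons)[0]  # gradient w.r.t. the poisons
```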
1 code implementation • ICML 2020 • Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
In doing so, we introduce and verify several hypotheses for why meta-learned models perform better.
1 code implementation • NeurIPS 2020 • Micah Goldblum, Liam Fowl, Tom Goldstein
Previous work on adversarially robust neural networks for image classification requires large training sets and computationally expensive training procedures.
1 code implementation • 29 Sep 2019 • Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson
Targeted clean-label data poisoning is a type of adversarial attack on machine learning systems in which an adversary injects a few correctly-labeled, minimally-perturbed samples into the training data, causing a model to misclassify a particular test sample during inference.
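The Deep k-NN defense studied in this work filters the training set before (re)training: a point whose label disagrees with the plurality label of its k nearest neighbors in the network's feature space is discarded. A rough sketch of that filtering rule (feature extraction and retraining omitted):

```python
# Hedged sketch of a Deep k-NN style filter. `features` are penultimate-layer
# representations of the training points; `labels` are integer class labels.
import numpy as np
from scipy.stats import mode
from sklearn.neighbors import NearestNeighbors

def deep_knn_filter(features, labels, k=5):
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nbrs.kneighbors(features)        # idx[:, 0] is each point itself
    neighbor_labels = labels[idx[:, 1:]]      # labels of the k true neighbors
    plurality = np.asarray(mode(neighbor_labels, axis=1).mode).ravel()
    return plurality == labels                # boolean mask of points to keep
```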
2 code implementations • NeurIPS Workshop ICBINB 2020 • W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein
The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.
2 code implementations • 23 May 2019 • Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein
In addition to producing small models with high test accuracy, like conventional distillation, Adversarially Robust Distillation (ARD) also passes the superior robustness of large networks on to the student.
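A minimal sketch of one robust-distillation training step, assuming the common recipe of matching the student's predictions on adversarial examples to the teacher's soft labels on clean inputs (temperature, loss weighting, and the hypothetical `pgd_attack` helper follow standard practice rather than the paper's exact settings):

```python
# Hedged sketch of an ARD-style step. `pgd_attack` is a hypothetical helper
# that crafts adversarial examples against the student.
import torch
import torch.nn.functional as F

def ard_step(student, teacher, x, y, optimizer, T=4.0, alpha=0.9):
    x_adv = pgd_attack(student, x, y)      # adversarial inputs vs. the student
    with torch.no_grad():
        t_logits = teacher(x)              # teacher sees clean inputs
    kl = F.kl_div(F.log_softmax(student(x_adv) / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student(x), y)    # anchor clean accuracy
    loss = alpha * kl + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```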