no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.
no code implementations • 29 Sep 2021 • Kaleel Mahmood, Rigel Mahmood, Ethan Rathbun, Marten van Dijk
In this paper, we seek to help alleviate this problem by systematizing the recent advances in adversarial machine learning black-box attacks since 2019.
1 code implementation • ICCV 2021 • Kaleel Mahmood, Rigel Mahmood, Marten van Dijk
In this paper, we study the robustness of Vision Transformers to adversarial examples.