1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau
Our research aims to unify the diverging conclusions of prior work on how architectural components affect the adversarial robustness of CNNs.
1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin
Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
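As a concrete illustration, here is a minimal sketch of one standard formulation of adversarial training, PGD-based training in the style of Madry et al.; the hyperparameters (eps, alpha, steps) are illustrative defaults, not the settings used in this paper.

```python
# Minimal sketch of PGD-based adversarial training (Madry et al.); an
# illustrative formulation, not necessarily this paper's exact recipe.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Gradient-ascent steps on the loss, projected onto an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project back
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```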
no code implementations • 22 Aug 2022 • Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang
Finally, we find that data augmentation degrades the performance of existing attacks to a larger extent than other defenses, and we propose an adaptive attack that uses the same augmentation to train the shadow and attack models, improving attack performance.
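A hedged sketch of the shadow-model step behind such an adaptive membership-inference attack, assuming the usual setup where the shadow model is trained with the same augmentation as the target; get_confidences is a hypothetical helper, not an API from the paper.

```python
# Sketch: fit a binary attack model on a shadow model's confidences for
# member vs. non-member examples. The shadow model is assumed to have been
# trained with the same data augmentation as the target model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_attack_model(shadow_model, member_loader, nonmember_loader, get_confidences):
    # get_confidences(model, loader) -> (N, num_classes) softmax outputs;
    # a hypothetical helper, named here for illustration only.
    conf_in = get_confidences(shadow_model, member_loader)      # shadow training data
    conf_out = get_confidences(shadow_model, nonmember_loader)  # held-out data
    X = np.concatenate([conf_in, conf_out])
    y = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
    return LogisticRegression(max_iter=1000).fit(X, y)  # predicts membership
```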
1 code implementation • 30 May 2017 • Weilin Xu, David Evans, Yanjun Qi
Feature squeezing is a recently introduced framework for mitigating and detecting adversarial examples.
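The core mechanism is simple enough to sketch: compare the model's prediction on an input with its prediction on a "squeezed" copy, and flag the input when the two disagree too much. The squeezer and threshold below are illustrative choices, not the paper's tuned values.

```python
# Minimal sketch of feature-squeezing detection: a large L1 gap between the
# model's softmax output on the original input and on a squeezed copy flags
# the input as likely adversarial. Threshold and squeezer are illustrative.
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Squeeze color depth: quantize [0, 1] pixels to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def is_adversarial(model, x, threshold=1.0):
    p_orig = F.softmax(model(x), dim=1)
    p_squeezed = F.softmax(model(reduce_bit_depth(x)), dim=1)
    l1_gap = (p_orig - p_squeezed).abs().sum(dim=1)
    return l1_gap > threshold  # True -> flagged as adversarial
```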
2 code implementations • Network and Distributed System Security Symposium 2018 • Weilin Xu, David Evans, Yanjun Qi
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples.
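One classic way such distortions are crafted is the Fast Gradient Sign Method (FGSM); a minimal sketch, with an illustrative perturbation budget eps.

```python
# Minimal FGSM sketch (Goodfellow et al.): one gradient-sign step that
# increases the loss within a small eps budget.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # small, purposeful distortion
    return x_adv.clamp(0, 1).detach()  # keep a valid image
```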
no code implementations • 22 Feb 2017 • Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi
By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can exploit to generate adversarial samples, and therefore increases robustness against such inputs.
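A hedged sketch of that idea, assuming "unnecessary" features are identified as those whose activations shift most between clean inputs and their adversarial counterparts; the ranking rule and removal fraction below are illustrative, not the paper's exact procedure.

```python
# Sketch of a DeepCloak-style mask: rank penultimate-layer features by how
# much their activations drift under attack, then zero out the most-shifted
# ones before the classification layer. Details are illustrative.
import torch

def build_mask(clean_feats, adv_feats, remove_frac=0.05):
    """clean_feats / adv_feats: (N, D) activations on paired clean/adv inputs."""
    shift = (clean_feats - adv_feats).abs().mean(dim=0)  # per-feature drift
    k = int(remove_frac * shift.numel())
    mask = torch.ones_like(shift)
    mask[shift.topk(k).indices] = 0.0  # remove the most attack-sensitive features
    return mask  # apply as: logits = classifier(features * mask)
```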