no code implementations • CVPR 2023 • Yunrui Yu, Cheng-Zhong Xu
Attackers can deceive neural networks by adding human-imperceptible perturbations to their input data; this exposes the vulnerability and weak robustness of current deep-learning networks.
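As general background (not this paper's method), a minimal sketch of such a perturbation is the one-step FGSM attack; the `model`, the `eps` budget, and the `[0, 1]` input range below are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM: add an imperceptible sign-of-gradient perturbation
    that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the loss-increasing direction, bounded by eps in l-infinity.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```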
1 code implementation • 15 Nov 2022 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu
In particular, most ensemble defenses exhibit near or exactly 0% robustness against MORA with $\ell^\infty$ perturbation within 0.02 on CIFAR-10, and 0.01 on CIFAR-100.
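MORA itself is this paper's attack; as a generic illustration of what an $\ell^\infty$-bounded robustness evaluation at $\epsilon = 0.02$ looks like, here is a plain PGD sketch in PyTorch (the step size `alpha` and step count are assumptions, not values from the paper):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.02, alpha=0.005, steps=20):
    """Generic l-infinity PGD sketch (not MORA): repeatedly step along the
    loss gradient sign, then project back into the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project onto the l-infinity ball, then back into valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```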
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed in recent years to alleviate this problem.
1 code implementation • CVPR 2021 • Yunrui Yu, Xitong Gao, Cheng-Zhong Xu
In this paper, we show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks.
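To make the idea of attacking latent features concrete, here is a hypothetical sketch (not the paper's exact attack) that perturbs the input to push its intermediate representation away from the clean one, within an $\ell^\infty$ ball; `feature_extractor` and all hyperparameters are assumptions:

```python
import torch

def latent_attack(feature_extractor, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Hypothetical latent-feature attack sketch: maximize the distance
    between adversarial and clean latent features under an l-infinity budget."""
    with torch.no_grad():
        clean_feat = feature_extractor(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Maximize feature-space distance from the clean representation.
        loss = (feature_extractor(x_adv) - clean_feat).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```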