1 code implementation • CVPR 2023 • Zhanhao Hu, Wenda Chu, Xiaopei Zhu, HUI ZHANG, Bo Zhang, Xiaolin Hu
To craft natural-looking adversarial clothes that can evade person detectors across multiple viewing angles, we propose adversarial camouflage textures (AdvCaT), which resemble camouflage patterns, a typical texture of everyday clothes.
1 code implementation • 28 May 2023 • Zhanhao Hu, Jun Zhu, Bo Zhang, Xiaolin Hu
Recent work found that deep neural networks (DNNs) can be fooled by adversarial examples, which are crafted by adding adversarial noise to clean inputs.
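The crafting step described in this snippet can be illustrated with a fast-gradient-sign (FGSM-style) perturbation. The sketch below is a minimal assumption-laden stand-in: it uses a fixed logistic-regression "model" instead of a DNN, and the weights, input, and step size are invented for illustration, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Add eps * sign(grad of loss w.r.t. x) to a clean input x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, so no autodiff is needed.
    """
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx, closed form
    return x + eps * np.sign(grad_x)

# Illustrative (hypothetical) model and input
w = np.array([2.0, -1.0])
b = 0.0
x_clean = np.array([1.0, 0.5])  # w @ x_clean + b = 1.5 > 0: classified positive

x_adv = fgsm_perturb(x_clean, w, b, y=1.0, eps=0.9)

print(sigmoid(w @ x_clean + b) > 0.5)  # True: clean input classified correctly
print(sigmoid(w @ x_adv + b) > 0.5)    # False: the small noise flips the decision
```

The same sign-of-gradient idea scales to DNNs, where the gradient is obtained by backpropagation instead of a closed form.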
1 code implementation • 30 Jan 2023 • Xiao Li, Wei Zhang, Yining Liu, Zhanhao Hu, Bo Zhang, Xiaolin Hu
Previous research has mainly focused on improving adversarial robustness in the fully supervised setting, leaving the challenging problem of zero-shot adversarial robustness an open question.
1 code implementation • 17 Aug 2022 • Xiao Li, Qiongxiu Li, Zhanhao Hu, Xiaolin Hu
We demonstrate that the generalization gap and privacy leakage are less correlated than previous results suggested.
no code implementations • 12 May 2022 • Xiaopei Zhu, Zhanhao Hu, Siyuan Huang, Jianmin Li, Xiaolin Hu
We simulated the process from cloth to clothing in the digital world and then designed the adversarial "QR code" pattern.
no code implementations • 7 Mar 2022 • Zhanhao Hu, Tao Wang, Xiaolin Hu
Compared with rate-based artificial neural networks, spiking neural networks (SNNs) provide a more biologically plausible model of the brain.
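The contrast drawn in this snippet is that SNN units communicate via discrete spikes over time rather than continuous firing rates. A minimal sketch of the standard leaky integrate-and-fire (LIF) neuron, the basic unit of many SNNs, is shown below; the time constant, threshold, and input current are illustrative values, not parameters from the paper.

```python
import numpy as np

def lif_simulate(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; return its binary spike train."""
    v = 0.0
    spikes = []
    for i_t in input_current:
        # Leaky integration: the membrane potential decays toward 0
        # and is driven up by the input current.
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:
            spikes.append(1)   # emit a spike...
            v = v_reset        # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold drive produces a regular spike train;
# a rate-based unit would instead output a single real-valued activation.
spikes = lif_simulate([1.5] * 50)
print(sum(spikes))
```

Unlike a rate-based unit, the output here is a sequence of 0/1 events in time, which is what makes the model closer to biological neurons.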
1 code implementation • CVPR 2022 • Zhanhao Hu, Siyuan Huang, Xiaopei Zhu, Fuchun Sun, Bo Zhang, Xiaolin Hu
Experiments showed that these clothes could fool person detectors in the physical world.