1 code implementation • 27 Jul 2020 • Dou Goodman, Hao Xin
Adversarial attack breaks the boundaries of traditional security defense.
no code implementations • 31 Jan 2020 • Dou Goodman, Lv Zhonghou, Wang Minghua
In this paper, we present a novel algorithm, FastWordBug, that efficiently generates small text perturbations in a black-box setting, forcing a sentiment analysis or text classification model to make an incorrect prediction.
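To make the black-box word-perturbation idea concrete, here is a minimal sketch, not the paper's exact FastWordBug algorithm: it scores words by how much deleting each one lowers the model's reported confidence, then applies small character edits to the highest-scoring words. The `query_fn` interface (text in, per-label probabilities out) and the adjacent-character-swap edit are illustrative assumptions.

```python
import random

def score_words(words, label, query_fn):
    """Rank word indices by how much removing each word lowers the
    model's confidence in the current label (black-box queries only)."""
    base = query_fn(" ".join(words))[label]
    drops = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        drops.append(base - query_fn(" ".join(reduced))[label])
    return sorted(range(len(words)), key=lambda i: -drops[i])

def perturb_word(word):
    """One small character-level edit: swap two adjacent characters."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def black_box_text_attack(text, label, query_fn, max_edits=5):
    """Greedily perturb the most influential words until the predicted
    label flips or the edit budget runs out."""
    words = text.split()
    for i in score_words(words, label, query_fn)[:max_edits]:
        words[i] = perturb_word(words[i])
        probs = query_fn(" ".join(words))
        if max(probs, key=probs.get) != label:
            break
    return " ".join(words)
```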
2 code implementations • 13 Jan 2020 • Dou Goodman, Hao Xin, Wang Yang, Wu Yuesheng, Xiong Junfeng, Zhang Huan
In recent years, neural networks have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance.
1 code implementation • 8 Jan 2020 • Dou Goodman
Fortunately, generating adversarial examples usually requires white-box access to the victim model, and real-world cloud-based image classification services are more complex than a white-box classifier: the architecture and parameters of the DL models on cloud platforms cannot be obtained by the attacker.
no code implementations • 23 Aug 2019 • Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei
Finally, we conduct extensive experiments on a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.
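A minimal sketch of one step of adversarial training combined with adversarial logit pairing, which the AT+ALP acronym suggests; the paper's exact formulation (attack strength, pairing weight, any attention terms) may differ. The one-step FGSM attack and the `alp_weight` coefficient below are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def at_alp_loss(model, x, y, eps=8 / 255, alp_weight=0.5):
    """One training loss: cross-entropy on adversarial examples (AT)
    plus a penalty pairing clean and adversarial logits (ALP)."""
    # Craft a one-step (FGSM) adversarial example; stronger iterative
    # attacks such as PGD are common in practice.
    delta = torch.zeros_like(x, requires_grad=True)
    attack_loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(attack_loss, delta)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    logits_clean = model(x)
    logits_adv = model(x_adv)
    # Adversarial-training term + logit-pairing penalty.
    return F.cross_entropy(logits_adv, y) + \
        alp_weight * F.mse_loss(logits_adv, logits_clean)
```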
no code implementations • 19 Jun 2019 • Dou Goodman, Tao Wei
Many recent works have demonstrated that Deep Learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, and the attacker can only access the APIs opened by cloud platforms.