no code implementations • 23 Nov 2023 • Yueqi Zeng, Ziqiang Li, Pengfei Xia, Lei Liu, Bin Li
With the rapid growth of the natural language processing (NLP) field in recent years, backdoor attacks pose immense threats to deep neural network models.
no code implementations • 15 Oct 2023 • Ziqiang Li, Pengfei Xia, Hong Sun, Yueqi Zeng, Wei Zhang, Bin Li
In this study, we focus on improving the poisoning efficiency of backdoor attacks from the sample selection perspective.
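A minimal sketch of what sample-selection-based poisoning might look like, assuming a loss-based selection heuristic (picking high-loss samples as poisons) and a simple corner-patch trigger; the paper's actual selection criterion and trigger design may differ, and all function names here are illustrative:

```python
import numpy as np

def add_trigger(x, value=1.0, size=2):
    """Stamp a small square trigger patch in the bottom-right corner of an image (H, W)."""
    x = x.copy()
    x[-size:, -size:] = value
    return x

def select_poison_indices(losses, budget):
    """Illustrative selection rule: pick the samples with the highest training loss,
    assuming harder samples make more effective poisons under a fixed budget."""
    return np.argsort(losses)[::-1][:budget]

def poison_dataset(images, labels, losses, budget, target_label=0):
    """Inject the trigger into a selected subset and relabel it to the attacker's target class."""
    idx = select_poison_indices(losses, budget)
    poisoned = images.copy()
    new_labels = labels.copy()
    for i in idx:
        poisoned[i] = add_trigger(poisoned[i])
        new_labels[i] = target_label  # attacker-chosen target class
    return poisoned, new_labels, idx
```

The key point is that, for a fixed poisoning budget, the choice of *which* samples to poison (here driven by `losses`) can substantially change attack success, which is the efficiency question the abstract refers to.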