1 code implementation • 1 Aug 2023 • Hai Zhu, Zhaoqing Yang, Weiwei Shang, Yuren Wu
Natural language processing models are vulnerable to adversarial examples.
Adversarial Attack • Hard-label Attack