no code implementations • 3 Aug 2020 • Haoqiang Guo, Lu Peng, Jian Zhang, Fang Qi, Lide Duan
Recent studies show that Deep Neural Networks (DNNs) are vulnerable to subtle perturbations that are imperceptible to the human visual system yet can fool DNN models into producing incorrect outputs.
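To make the idea concrete, here is a minimal sketch of one classic way such perturbations are crafted, the Fast Gradient Sign Method (FGSM). The toy logistic-regression "network", its weights, and the `fgsm_perturb` helper are all hypothetical illustrations, not from the paper itself; the sketch only shows that an eps-bounded input change can strictly increase the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Return x plus an eps-bounded perturbation that increases the loss.

    Hypothetical helper: applies the FGSM step x + eps * sign(dL/dx)
    for a single logistic-regression model with cross-entropy loss.
    """
    p = sigmoid(w @ x + b)   # model's predicted probability in (0, 1)
    grad_x = (p - y) * w     # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and clean input (illustrative values only)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])  # clean input with true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.05)

clean_loss = -np.log(sigmoid(w @ x + b))
adv_loss = -np.log(sigmoid(w @ x_adv + b))
# Each input coordinate moves by at most eps = 0.05, yet the loss rises.
```

Real attacks apply the same gradient-sign step to image pixels of a deep network, which is why the perturbation can stay visually imperceptible while flipping the prediction.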