no code implementations • ICML 2020 • Yi-Hsuan Wu, Chia-Hung Yuan, Shan-Hung (Brandon) Wu
Deep neural networks have been shown to be vulnerable to adversarial attacks.
1 code implementation • NeurIPS 2020 • Cheng-Hsin Weng, Yan-Ting Lee, Shan-Hung (Brandon) Wu
Deep neural networks have been shown to be susceptible to both adversarial attacks and backdoor attacks.
1 code implementation • NeurIPS 2019 • Wei-Da Chen, Shan-Hung (Brandon) Wu
Although recent efforts, such as Capsule Networks, have been made to address this issue, these new models are either hard to train or incompatible with existing CNN-based techniques specialized for different applications.