no code implementations • 2 Nov 2022 • Feisi Fu, Panagiota Kiourti, Wenchao Li
We present a novel methodology for neural network backdoor attacks.
no code implementations • 29 Mar 2021 • Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha
Recent studies have shown that neural networks are vulnerable to Trojan attacks, where a network is trained to respond in specific, potentially malicious ways to specially crafted trigger patterns embedded in its inputs.
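To make the trigger-pattern idea concrete, here is a minimal, hypothetical sketch (not the paper's method) of how an attacker stamps a small patch onto an input; a Trojaned network is trained to map any input carrying this patch to an attacker-chosen label:

```python
import numpy as np

def stamp_trigger(image, trigger, top_left=(0, 0)):
    """Overlay a small trigger patch onto an input image.

    `image` and `trigger` are float arrays in [0, 1]; the patch is
    pasted at `top_left`. This is an illustrative sketch only:
    a Trojaned model responds to the patch, not to the image content.
    """
    stamped = image.copy()
    r, c = top_left
    h, w = trigger.shape[:2]
    stamped[r:r + h, c:c + w] = trigger
    return stamped

# Example: a 4x4 white-square trigger in the corner of a 28x28 image.
clean = np.zeros((28, 28))
trigger = np.ones((4, 4))
poisoned = stamp_trigger(clean, trigger)
```

The clean input is left untouched; only the copy carries the trigger, which is what makes such attacks hard to spot from model behavior on clean data.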
2 code implementations • 1 Mar 2019 • Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li
Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
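A training-time data-poisoning attack of the kind described can be sketched as follows; this is a generic illustration under assumed names (`poison_dataset`, `target_label` are hypothetical), not the attack studied in the paper:

```python
import numpy as np

def poison_dataset(images, labels, trigger, target_label, rate=0.1, seed=0):
    """Return a poisoned copy of (images, labels).

    A fraction `rate` of the samples is stamped with `trigger` and
    relabeled to `target_label`. A model trained on this set tends to
    behave normally on clean inputs but predicts `target_label`
    whenever the trigger is present. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    h, w = trigger.shape
    for i in idx:
        images[i, :h, :w] = trigger  # stamp the trigger in the corner
        labels[i] = target_label     # flip the label to the target
    return images, labels, idx
```

Because only a small fraction of samples is modified, overall training accuracy on clean data is barely affected, which is what makes such poisoning hard to detect.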