no code implementations • 24 Oct 2021 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we aim to close this gap by studying a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
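This excerpt does not spell out what the "conceptually simple approach" is, so the following is only a minimal sketch of one common option: episodic adversarial training of a prototypical-network-style few-shot classifier, written in PyTorch. All identifiers here (EmbeddingNet, proto_logits, fgsm, adv_train_step) are hypothetical illustrations, not the paper's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small conv embedding, as commonly used in few-shot baselines."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, out_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.net(x)

def proto_logits(embed, support, support_y, query, n_way):
    """Negative distances to per-class prototypes act as logits."""
    z_s, z_q = embed(support), embed(query)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    return -torch.cdist(z_q, protos)

def fgsm(embed, support, support_y, query, query_y, n_way, eps=8 / 255):
    """One-step FGSM perturbation of the query images."""
    q = query.clone().requires_grad_(True)
    loss = F.cross_entropy(proto_logits(embed, support, support_y, q, n_way), query_y)
    grad, = torch.autograd.grad(loss, q)
    return (q + eps * grad.sign()).clamp(0, 1).detach()

def adv_train_step(embed, opt, support, support_y, query, query_y, n_way):
    """Train the embedding on adversarially perturbed queries."""
    q_adv = fgsm(embed, support, support_y, query, query_y, n_way)
    loss = F.cross_entropy(proto_logits(embed, support, support_y, q_adv, n_way), query_y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In this sketch, robustness comes only from training the embedding on perturbed queries; a fuller evaluation would also consider perturbations of the support set.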
no code implementations • 9 Dec 2020 • Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai-Man Cheung, Yuval Elovici, Alexander Binder
In this work, we propose a detection strategy to identify adversarial support sets, which are crafted to destroy a few-shot classifier's understanding of a particular class.
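The excerpt does not describe the detection strategy itself; as a generic illustration only, one simple way to flag a poisoned support set is a self-similarity check: a class whose support embeddings disagree with each other is suspect. The embed network and the threshold tau below are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def class_self_similarity(embed, support, support_y, n_way):
    """Mean pairwise cosine similarity of each class's support embeddings
    (assumes at least 2 shots per class)."""
    with torch.no_grad():
        z = F.normalize(embed(support), dim=1)
    scores = []
    for c in range(n_way):
        zc = z[support_y == c]
        sim = zc @ zc.t()                           # (k, k) cosine similarities
        k = zc.size(0)
        off_diag = (sim.sum() - k) / (k * (k - 1))  # mean, excluding self-pairs
        scores.append(off_diag)
    return torch.stack(scores)

def flag_adversarial_supports(embed, support, support_y, n_way, tau=0.5):
    """Classes whose self-similarity falls below the threshold are flagged."""
    return class_self_similarity(embed, support, support_y, n_way) < tau
```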
no code implementations • 8 Dec 2019 • Yi Xiang Marcus Tan, Yuval Elovici, Alexander Binder
We investigate to what extent alternative variants of Artificial Neural Networks (ANNs) are susceptible to adversarial attacks.
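The excerpt does not enumerate the ANN variants studied, but for any differentiable variant the susceptibility test can share one attack implementation. Below is a standard L-infinity PGD attack (a textbook method, not this paper's specific procedure) written against a model-agnostic interface, plus a helper that measures robust accuracy.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-inf PGD: repeat signed-gradient ascent steps, projecting back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project onto the eps-ball around x and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, x, y, **pgd_kwargs):
    """Fraction of inputs still classified correctly after the attack."""
    x_adv = pgd_attack(model, x, y, **pgd_kwargs)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()
```

Susceptibility is then simply the gap between clean accuracy and robust_accuracy for each architecture.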
no code implementations • 28 May 2019 • Yi Xiang Marcus Tan, Alfonso Iacovazzi, Ivan Homoliak, Yuval Elovici, Alexander Binder
In an attempt to address this gap, we built a set of attacks that apply several generative approaches to construct adversarial mouse trajectories capable of bypassing authentication models.
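A hedged sketch of the attack pattern the excerpt describes: sample synthetic mouse trajectories from a trained generative model and count how often the target authentication model accepts them. The generator architecture, the (dx, dy, dt) trajectory encoding, and the acceptance threshold are all placeholders, not the paper's actual generative approaches.

```python
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    """Maps a latent vector to a (T, 3) sequence of (dx, dy, dt) deltas."""
    def __init__(self, latent_dim=32, seq_len=50):
        super().__init__()
        self.latent_dim, self.seq_len = latent_dim, seq_len
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, seq_len * 3),
        )
    def forward(self, z):
        return self.net(z).view(-1, self.seq_len, 3)

def bypass_rate(generator, authenticator, n_tries=1000, threshold=0.5):
    """Fraction of sampled trajectories the authenticator accepts as genuine;
    authenticator is assumed to return a single score, P(genuine user)."""
    accepted = 0
    with torch.no_grad():
        for _ in range(n_tries):
            z = torch.randn(1, generator.latent_dim)
            if authenticator(generator(z)).item() > threshold:
                accepted += 1
    return accepted / n_tries
```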