1 code implementation • 25 Feb 2020 • Metehan Cekic, Soorya Gopalakrishnan, Upamanyu Madhow
The opportunity to fingerprint individual transmitters arises from subtle nonlinear variations across devices, even those made by the same manufacturer.
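To make the mechanism concrete, here is a minimal numpy sketch, not the paper's actual signal model: two simulated transmitters pass identical baseband samples through slightly different memoryless cubic nonlinearities. The polynomial form and coefficients are illustrative assumptions, but they show how a small, device-specific distortion survives in the output for a classifier to learn.

```python
# Hypothetical illustration: two devices, same waveform, slightly different
# power-amplifier nonlinearities (coefficients are made-up values).
import numpy as np

rng = np.random.default_rng(0)
# Stand-in complex baseband samples with varying amplitude.
x = (rng.normal(size=256) + 1j * rng.normal(size=256)) / np.sqrt(2)

def transmit(x, a3):
    """Memoryless cubic PA model (assumed form): y = x + a3 * |x|^2 * x."""
    return x + a3 * np.abs(x) ** 2 * x

y_a = transmit(x, a3=0.010)  # device A coefficient (hypothetical)
y_b = transmit(x, a3=0.012)  # device B coefficient (hypothetical)

# A small but consistent, amplitude-dependent gap separates the devices.
print(np.mean(np.abs(y_a - y_b)))
```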
1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."
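The linearity argument admits a one-line illustration: for a linear score $w^\top x$, the worst-case $\ell_{\infty}$ perturbation $\delta = \epsilon\,\mathrm{sign}(w)$ shifts the score by exactly $\epsilon \|w\|_1$, which grows with input dimension even though no coordinate changes by more than $\epsilon$. A minimal numpy demonstration (the dimensions and weight scaling are arbitrary choices):

```python
# Output shift from a worst-case l_inf perturbation of a linear score w.x.
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01
for d in (100, 1000, 10000):
    w = rng.normal(size=d) / np.sqrt(d)  # scaled so w @ x is O(1)
    x = rng.normal(size=d)
    delta = eps * np.sign(w)             # each coordinate moves by only eps
    shift = w @ (x + delta) - w @ x      # equals eps * ||w||_1
    print(d, shift)                      # grows ~ sqrt(d) under this scaling
```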
no code implementations • 19 May 2019 • Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow
A "wireless fingerprint" that exploits hardware imperfections unique to each device is a potentially powerful tool for wireless security.
1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
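For reference, a sketch of the FGSM baseline mentioned above, the single-step $\ell_{\infty}$ attack of Goodfellow et al.; the model and data below are stand-ins, not the paper's setup, and clamping to a valid input range is omitted.

```python
# Fast Gradient Sign Method: one gradient step of size eps per coordinate.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient (l_inf-bounded)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Usage with a stand-in linear model on random data:
model = torch.nn.Linear(784, 10)
x = torch.rand(8, 784)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=0.1)
print((x_adv - x).abs().max())  # perturbation respects the eps bound
```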
3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani
It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).
3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani
In this paper, we study adversarial vulnerability in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
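A minimal sketch of that intuition, under simplifying assumptions (identity sparsifying basis, synthetic $K$-sparse data, stand-in weights): an $\ell_{\infty}$-bounded attack must spread its budget across every coordinate, so a front end that retains only the $K$ largest-magnitude coefficients discards most of the perturbation while keeping the signal.

```python
# Sparsifying front end vs. a worst-case l_inf attack on a linear classifier.
import numpy as np

rng = np.random.default_rng(0)
d, K, eps = 1000, 20, 0.1
w = rng.normal(size=d)                     # stand-in classifier weights

x = np.zeros(d)                            # synthetic K-sparse signal
support = rng.choice(d, K, replace=False)
x[support] = rng.choice([-1.0, 1.0], K) * rng.uniform(2.0, 4.0, K)

def sparsify(z, k):
    """Keep the k largest-magnitude coefficients of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

delta = eps * np.sign(w)                   # worst-case l_inf perturbation
print("undefended shift:", w @ (x + delta) - w @ x)  # eps * ||w||_1, large
print("defended shift:  ", w @ sparsify(x + delta, K) - w @ x)  # far smaller
```

Because the retained support carries only $K$ of the $d$ coordinates, the residual score shift is at most $\epsilon$ times the sum of the $K$ largest $|w_i|$, rather than the full $\epsilon \|w\|_1$.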