no code implementations • 15 Feb 2022 • Omer Faruk Tuna, Ferhat Ozgur Catak, M. Taner Eskil
In this study, we show both mathematically and experimentally that using certain widely known activation functions with high temperature values in the model's output layer effectively zeroes out the gradients in both targeted and untargeted attack cases, preventing attackers from exploiting the model's loss function to craft adversarial samples.
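A minimal sketch of the mechanism described above, not the authors' implementation: dividing the logits by a temperature T before the softmax/cross-entropy loss scales the input gradient roughly by 1/T, so a gradient-following attacker sees it vanish as T grows. The toy linear model, input size, and temperature values below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 3)                 # stand-in for a trained classifier's logit layer
x = torch.randn(1, 10, requires_grad=True)     # input the attacker would perturb
label = torch.tensor([1])

for T in (1.0, 10.0, 100.0):
    logits = model(x) / T                      # temperature-scaled output
    loss = F.cross_entropy(logits, label)      # loss an attacker would differentiate
    grad, = torch.autograd.grad(loss, x)
    print(f"T={T:>6}: max |dL/dx| = {grad.abs().max().item():.2e}")
```

Running this prints a gradient magnitude that shrinks by roughly an order of magnitude per tenfold increase in T, which is the gradient-masking effect the abstract refers to.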
no code implementations • 8 Feb 2021 • Omer Faruk Tuna, Ferhat Ozgur Catak, M. Taner Eskil
Deep neural network architectures are considered to be robust to random perturbations.
no code implementations • 11 Dec 2020 • Omer Faruk Tuna, Ferhat Ozgur Catak, M. Taner Eskil
While state-of-the-art Deep Neural Network (DNN) models are considered robust to random perturbations, these architectures have been shown to be highly vulnerable to deliberately crafted perturbations that remain quasi-imperceptible.
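To make the random-versus-crafted contrast concrete, here is a hedged sketch using the fast gradient sign method (FGSM) as an illustrative crafted perturbation; it is not the paper's specific attack, and the toy model, label, and epsilon budget are assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 3)                 # toy stand-in classifier
x = torch.randn(1, 10, requires_grad=True)
label = torch.tensor([2])
eps = 0.1                                      # shared L-infinity perturbation budget

loss = F.cross_entropy(model(x), label)
grad, = torch.autograd.grad(loss, x)

x_random = x + eps * torch.sign(torch.randn_like(x))  # random sign perturbation
x_adv = x + eps * grad.sign()                         # FGSM: step along the loss gradient

with torch.no_grad():
    for name, inp in [("clean", x), ("random", x_random), ("crafted", x_adv)]:
        print(f"{name:>8}: loss = {F.cross_entropy(model(inp), label).item():.4f}")
```

With the same perturbation budget, the gradient-aligned perturbation raises the loss far more than the random one, which is why crafted perturbations succeed where random noise does not.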