no code implementations • 13 Feb 2024 • Gautham Anil, Vishnu Vinod, Apurva Narayan
In this work, we introduce QuGAP: a novel framework for generating universal adversarial perturbations (UAPs) for quantum classifiers.
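To make the idea concrete, here is a minimal sketch of the classical UAP recipe, not QuGAP's quantum procedure itself: a single shared perturbation is updated across all samples and projected back into an epsilon-ball. The model, data loader, and `eps` are placeholder assumptions.

```python
# Classical-UAP sketch (illustrative only, NOT the QuGAP algorithm):
# one shared perturbation `delta` is trained across the whole dataset
# and kept inside an L-infinity ball of radius `eps` (assumed names).
import torch
import torch.nn.functional as F

def craft_uap(model, loader, eps=0.05, lr=0.01, epochs=5):
    sample_shape = next(iter(loader))[0].shape[1:]  # e.g. (C, H, W)
    delta = torch.zeros(sample_shape, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            # ascend the loss: minimizing its negation fools the model
            (-F.cross_entropy(model(x + delta), y)).backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # project into the eps-ball
    return delta.detach()
```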
no code implementations • 2 Nov 2023 • Abhijith Sharma, Phil Munz, Apurva Narayan
The number of patches in a patch attack varies and determines the attack's potency in a given environment.
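As a hedged illustration of the mechanics (placeholder sizes and positions, not the paper's attack), the sketch below pastes k optimizable square patches onto an image batch; k is the knob that trades conspicuousness against potency.

```python
# Minimal multi-patch application sketch (illustrative; patch sizes
# and positions are assumptions, not the paper's method).
import torch

def apply_patches(images, patches, positions):
    """Paste each (C, p, p) patch at its (row, col) position."""
    out = images.clone()
    for patch, (r, c) in zip(patches, positions):
        p = patch.shape[-1]
        out[:, :, r:r + p, c:c + p] = patch  # overwrite the pixels
    return out

# e.g. k = 2 patches of 8x8 pixels on 32x32 CIFAR-style inputs
imgs = torch.rand(4, 3, 32, 32)
patches = [torch.rand(3, 8, 8, requires_grad=True) for _ in range(2)]
adv = apply_patches(imgs, patches, positions=[(0, 0), (20, 20)])
```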
no code implementations • 27 Jul 2023 • Abhijith Sharma, Phil Munz, Apurva Narayan
Visual AI systems are vulnerable to natural and synthetic physical corruptions in the real world.
no code implementations • 10 Mar 2023 • Vipul Gupta, Apurva Narayan
We show that the training time of any adversarial training algorithm can be decreased by crafting adversarial examples for only a subset of the training data.
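A minimal sketch of this general idea (assumed names; FGSM stands in for whichever attack the underlying algorithm uses): perturb only a fraction of each batch adversarially and train on the mix.

```python
# Subset adversarial training sketch (illustrative; FGSM and the
# `frac` split are assumptions, not necessarily the paper's setup).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def train_step(model, opt, x, y, frac=0.25):
    k = int(frac * len(x))                 # adversarial subset size
    x_adv = fgsm(model, x[:k], y[:k])      # perturb only the subset
    x_mix = torch.cat([x_adv, x[k:]])
    opt.zero_grad()
    F.cross_entropy(model(x_mix), y).backward()
    opt.step()
```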
no code implementations • 16 Jun 2022 • Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan
Adversarial attacks on deep learning models, especially in safety-critical systems, have gained increasing attention in recent years due to a lack of trust in the security and robustness of AI models.
no code implementations • 4 Jun 2022 • Abhijith Sharma, Apurva Narayan
The focus of our work is to use abstract certification to extract a subset of inputs for adversarial training (hence we call it 'soft' adversarial training).
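A hypothetical sketch of how such a subset could be selected: certify each input with some abstract-interpretation verifier (here a placeholder `certified` predicate, e.g. interval bound propagation) and keep only the inputs whose robustness cannot be proved. This is an assumed selection rule, not the paper's actual procedure.

```python
# Hypothetical selection sketch; `certified` is a placeholder for an
# abstract certification routine, not the paper's procedure.
def select_for_soft_adv_training(model, dataset, eps, certified):
    hard = []
    for x, y in dataset:
        if not certified(model, x, y, eps):  # robustness not provable
            hard.append((x, y))              # train adversarially on it
    return hard
```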
1 code implementation • 14 May 2022 • Ramashish Gaurav, Bryan Tripp, Apurva Narayan
MaxPooling layers in Convolutional Neural Networks (CNNs) are an integral component for downsampling intermediate feature maps and introducing translational invariance, but the absence of hardware-friendly spiking equivalents limits the conversion of such CNNs to deep Spiking Neural Networks (SNNs).
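One hedged way to approximate max pooling with spikes (an illustration only; the paper's hardware-friendly construction may differ): accumulate spike counts over a time window so that the max over firing rates can be read off the counts.

```python
# Rate-based MaxPool approximation sketch (illustrative assumption,
# not necessarily the paper's spiking equivalent).
import torch.nn.functional as F

def spiking_maxpool(spike_train, kernel=2):
    """spike_train: (T, B, C, H, W) binary spikes over T timesteps.
    Pool over accumulated counts, i.e. the max of firing rates."""
    counts = spike_train.sum(dim=0).float()      # (B, C, H, W) rates
    pooled, idx = F.max_pool2d(counts, kernel, return_indices=True)
    return pooled, idx  # winning neuron per pooling window
```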
no code implementations • 10 Apr 2019 • Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister
The model's output is a close reconstruction of the true data and can be fed to downstream algorithms that rely on clean data.
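A minimal denoising-autoencoder sketch of that idea (assumed architecture and names, not the paper's model): train on (noisy, clean) pairs so the output approximates the true data.

```python
# Minimal denoising-autoencoder sketch (assumed architecture; the
# paper's reconstruction model may differ).
import torch.nn as nn
import torch.nn.functional as F

class Denoiser(nn.Module):
    def __init__(self, dim=128, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, noisy):
        return self.dec(self.enc(noisy))

# training minimizes reconstruction error against the clean signal:
# loss = F.mse_loss(model(noisy_batch), clean_batch)
```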