no code implementations • 3 Jul 2023 • Bushra Sabir, M. Ali Babar, Sharif Abuadbba
It focuses on interpretability and transparency in detecting and transforming textual adversarial examples.
no code implementations • NAACL 2021 • Bushra Sabir, M. Ali Babar, Raj Gaire
Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models.
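As a minimal illustration of the idea (not the paper's actual attack), a textual adversarial candidate can be produced by lightly perturbing words of an input; the function names and the perturbation choice below are assumptions for the sketch:

```python
# Illustrative sketch only: candidate textual adversarial examples
# via small character-level perturbations of input words.
import random


def perturb_word(word: str, rng: random.Random) -> str:
    """Swap two adjacent characters to create a small perturbation."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def perturb_text(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Perturb a fraction of words at random. A real attack would
    instead target the words that most change the model's prediction."""
    rng = random.Random(seed)
    return " ".join(
        perturb_word(w, rng) if rng.random() < rate else w
        for w in text.split()
    )


print(perturb_text("secure login page for account verification", rate=1.0))
```

Such perturbed examples can then be mixed into the training set, which is the sense in which AEs improve robustness through adversarial training.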
no code implementations • 17 Dec 2020 • Bushra Sabir, Faheem Ullah, M. Ali Babar, Raj Gaire
Objective: This paper aims to systematically review ML-based data exfiltration countermeasures in order to identify and classify the ML approaches, feature engineering techniques, evaluation datasets, and performance metrics used for these countermeasures.
1 code implementation • 18 May 2020 • Bushra Sabir, M. Ali Babar, Raj Gaire, Alsharif Abuadbba
Therefore, the security vulnerabilities of these systems remain largely unknown, which calls for testing their robustness.