Search Results for author: Salijona Dyrmishi

Found 4 papers, 2 papers with code

How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data

1 code implementation • 7 Feb 2024 • Mihaela Cătălina Stoian, Salijona Dyrmishi, Maxime Cordy, Thomas Lukasiewicz, Eleonora Giunchiglia

Further, we show that our constraint layer (CL) does not need to be integrated at training time: it can also be used as a guardrail at inference time, still yielding improvements in the overall performance of the models.
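
To make the inference-time guardrail idea concrete, here is a minimal sketch, not the paper's actual constraint layer: samples drawn from any tabular generator are post-processed so that an assumed toy linear constraint (column 0 + column 1 <= column 2) always holds. The constraint and the repair rule are illustrative placeholders.

```python
import numpy as np

def repair(samples: np.ndarray) -> np.ndarray:
    """Inference-time guardrail: enforce the assumed toy constraint
    col0 + col1 <= col2 on generated tabular samples.
    Violating rows are repaired by scaling the first two columns."""
    out = samples.copy()
    lhs = out[:, 0] + out[:, 1]
    bad = lhs > out[:, 2]
    # Scale the offending columns just enough to satisfy the constraint.
    scale = out[bad, 2] / np.maximum(lhs[bad], 1e-12)
    out[bad, 0] *= scale
    out[bad, 1] *= scale
    return out

# Usage: apply after sampling from any generative model.
raw = np.random.rand(1000, 3) * 10.0   # stand-in for generator output
safe = repair(raw)
assert np.all(safe[:, 0] + safe[:, 1] <= safe[:, 2] + 1e-9)
```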

How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks

no code implementations • 24 May 2023 • Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy

Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.

Adversarial Text
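
To illustrate the class of word-based attacks the snippet above describes, here is a toy greedy substitution loop; the synonym table and the scoring function are invented stand-ins, not the attacks or victim models evaluated in the paper.

```python
# A toy greedy word-substitution attack. `SYNONYMS` and
# `predict_positive` are illustrative placeholders only.
SYNONYMS = {
    "great": ["fine", "decent"],
    "love": ["like", "enjoy"],
    "terrible": ["bad", "poor"],
}

def predict_positive(text: str) -> float:
    """Dummy sentiment score standing in for a real NLP model."""
    words = text.lower().split()
    score = sum(w in ("great", "love", "like") for w in words)
    score -= sum(w in ("terrible", "bad", "poor") for w in words)
    return 1.0 if score > 0 else 0.0

def greedy_attack(text: str) -> str:
    """Swap one word at a time, keeping the first swap that
    flips the model's prediction."""
    original = predict_positive(text)
    words = text.split()
    for i, w in enumerate(words):
        for sub in SYNONYMS.get(w.lower(), []):
            candidate = words[:i] + [sub] + words[i + 1:]
            if predict_positive(" ".join(candidate)) != original:
                return " ".join(candidate)
    return text  # no successful swap found

print(greedy_attack("I love this movie"))  # -> "I enjoy this movie"
```

Whether humans actually perceive such a swap as natural is exactly the question this paper examines.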

On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks

1 code implementation • 7 Feb 2022 • Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy

While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications on the robustness of real-world systems.

Adversarial Robustness • Malware Detection • +2
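
For context, "adversarial hardening" typically means training on perturbed inputs. Below is a minimal FGSM-style sketch in PyTorch of such a gradient-based (i.e., "unrealistic") hardening loop; the model, data, and epsilon are assumed placeholders, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# Minimal FGSM-style adversarial hardening loop (illustrative only).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1  # perturbation budget (assumed)

x = torch.randn(256, 20)          # stand-in features
y = torch.randint(0, 2, (256,))   # stand-in labels

for epoch in range(5):
    # Craft adversarial examples with one FGSM step.
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()

    # Train on the hardened (perturbed) batch.
    opt.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```

The paper's question is whether hardening with such unconstrained gradient perturbations transfers to robustness against realistic, domain-constrained attacks.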
