no code implementations • 22 Feb 2024 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Giulio Ermanno Pibiri
Verifiable learning advocates for training machine learning models amenable to efficient security verification.
1 code implementation • 5 May 2023 • Stefano Calzavara, Lorenzo Cazzaro, Giulio Ermanno Pibiri, Nicola Prezza
In this paper, we identify a restricted class of decision tree ensembles, called large-spread ensembles, which admit a security verification algorithm running in polynomial time.
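To make the large-spread idea concrete, here is a minimal Python sketch (illustrative names, not the authors' code): if thresholds on the same feature are spread far apart relative to the perturbation budget, per-tree attacks do not interfere, so the worst case over the whole ensemble decomposes into a sum of per-tree worst cases, computable in polynomial time. For general ensembles this independence assumption fails and verification is much harder.

```python
# Hedged sketch, not the paper's implementation: robustness check for a
# binary tree ensemble under an L-infinity budget eps. The per-tree
# independence assumed in is_robust is what the large-spread condition
# is meant to guarantee; it does not hold for arbitrary ensembles.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[int] = None      # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    value: float = 0.0                 # raw leaf score (positive => class +1)

def reachable_leaves(node: Node, x, eps: float):
    """All leaf scores reachable when each feature may move by at most ±eps."""
    if node.feature is None:
        return [node.value]
    out = []
    if x[node.feature] - eps <= node.threshold:   # left child reachable
        out += reachable_leaves(node.left, x, eps)
    if x[node.feature] + eps > node.threshold:    # right child reachable
        out += reachable_leaves(node.right, x, eps)
    return out

def is_robust(trees, x, y, eps: float) -> bool:
    """Sum each tree's worst reachable leaf for label y in {-1, +1}; exact
    only if per-tree worst cases can be realized simultaneously."""
    worst = sum(min(reachable_leaves(t, x, eps)) if y > 0
                else max(reachable_leaves(t, x, eps)) for t in trees)
    return (worst > 0) == (y > 0)
```

Note that `reachable_leaves` visits each tree node at most once, so the whole check is linear in the size of the ensemble, matching the polynomial-time claim.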
1 code implementation • 27 Sep 2022 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi
We present a new approach to the global fairness verification of tree-based classifiers.
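For intuition only, the property at stake can be probed empirically by toggling a sensitive attribute and watching the prediction; the sketch below (hypothetical helper, not the paper's method) does exactly that on a sample. Global verification is strictly stronger: it must prove the absence of such flips over the entire input space, not just over observed data.

```python
import numpy as np

def sensitive_flip_rate(predict, X, sensitive_idx, values):
    """Hypothetical empirical probe (not the paper's verifier): fraction of
    instances whose prediction changes when only the sensitive attribute
    at column sensitive_idx is toggled among the given values."""
    flips = 0
    for x in X:
        preds = set()
        for v in values:
            x2 = np.array(x, dtype=float)  # copy so the original is untouched
            x2[sensitive_idx] = v
            preds.add(predict(x2))
        flips += int(len(preds) > 1)       # any disagreement counts as a flip
    return flips / len(X)
```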
no code implementations • 5 Dec 2021 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando
In this paper, we criticize the robustness measure traditionally employed to assess the performance of machine learning models deployed in adversarial settings.

no code implementations • 3 Dec 2020 • Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Stefano Calzavara, Matteo Maffei
Related-domain attackers control a sibling domain of their target web application, e.g., as the result of a subdomain takeover.
no code implementations • 6 Jul 2020 • Stefano Calzavara, Pietro Ferrara, Claudio Lucchese
Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of input data designed to force mispredictions.
no code implementations • 7 Apr 2020 • Stefano Calzavara, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando
The attacker aims to find a minimal perturbation of a test instance that changes the model outcome.
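As a toy illustration of this attacker model (illustrative names, not the paper's algorithm), the sketch below upper-bounds the minimal L-infinity perturbation by random search: it tries increasing budgets and returns the first one at which a sampled perturbation flips the prediction. Real attacks on tree models rely on exact search or optimization; sampling only bounds the optimum from above.

```python
import numpy as np

def minimal_perturbation_upper_bound(predict, x, y, budgets,
                                     trials=256, seed=0):
    """Hedged sketch: random-search upper bound on the minimal L-infinity
    perturbation that flips the prediction on x away from label y."""
    rng = np.random.default_rng(seed)
    for eps in sorted(budgets):                        # try budgets ascending
        deltas = rng.uniform(-eps, eps, size=(trials, x.size))
        for d in deltas:
            if predict(x + d) != y:
                return eps, x + d                      # first successful flip
    return None                                        # no flip found in range
```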
1 code implementation • 2 Jul 2019 • Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando
Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.