Search Results for author: Stefano Calzavara

Found 8 papers, 3 papers with code

Verifiable Boosted Tree Ensembles

no code implementations • 22 Feb 2024 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Giulio Ermanno Pibiri

Verifiable learning advocates for training machine learning models amenable to efficient security verification.

Verifiable Learning for Robust Tree Ensembles

1 code implementation • 5 May 2023 • Stefano Calzavara, Lorenzo Cazzaro, Giulio Ermanno Pibiri, Nicola Prezza

In this paper, we identify a restricted class of decision tree ensembles, called large-spread ensembles, which admit a security verification algorithm running in polynomial time.
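The large-spread condition itself is defined in the paper; below is a minimal sketch of the style of per-tree analysis it enables, using a hypothetical tuple-based tree encoding (not the paper's code). For an arbitrary ensemble, summing per-tree worst cases is only a sound over-approximation; the paper's point is that for large-spread ensembles this kind of tree-by-tree reasoning becomes exact and runs in polynomial time.

```python
# Hypothetical tree encoding for illustration:
#   leaf:     ("leaf", score)
#   internal: ("node", feature, threshold, left, right)   # test: x[f] <= t

def leaf_range(tree, x, eps):
    """(min, max) leaf score reachable from x under an L-inf perturbation <= eps."""
    if tree[0] == "leaf":
        return tree[1], tree[1]
    _, f, t, left, right = tree
    if x[f] + eps <= t:                     # only the left branch is reachable
        return leaf_range(left, x, eps)
    if x[f] - eps > t:                      # only the right branch is reachable
        return leaf_range(right, x, eps)
    lmin, lmax = leaf_range(left, x, eps)   # both reachable: take the envelope
    rmin, rmax = leaf_range(right, x, eps)
    return min(lmin, rmin), max(lmax, rmax)

def verify_stable(ensemble, x, eps):
    """True if the sign of the summed ensemble score provably cannot flip."""
    lo = sum(leaf_range(t, x, eps)[0] for t in ensemble)
    hi = sum(leaf_range(t, x, eps)[1] for t in ensemble)
    return lo > 0 or hi < 0
```

For example, with ensemble = [("node", 0, 0.5, ("leaf", -1.0), ("leaf", 1.0))], verify_stable(ensemble, [0.2], 0.1) returns True, since every input in the ball reaches the same leaf.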

Beyond Robustness: Resilience Verification of Tree-Based Classifiers

no code implementations • 5 Dec 2021 • Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi, Salvatore Orlando

In this paper, we criticize the robustness measure traditionally employed to assess the performance of machine learning models deployed in adversarial settings.
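The measure under critique is usually defined as the fraction of test instances that are both correctly classified and provably stable under every perturbation in an eps-ball. A minimal sketch of that definition for a linear classifier, where L-infinity stability has a closed form (illustrative only; not the paper's setting, which targets tree-based classifiers):

```python
import numpy as np

def robustness(w, b, X, y, eps):
    """Fraction of test points that are correctly classified AND whose
    prediction sign(w.x + b) cannot flip under any L-inf perturbation
    of radius eps. The worst-case margin shift is eps * ||w||_1, so the
    prediction is stable iff |w.x + b| > eps * ||w||_1. Labels in {-1, +1}.
    """
    margin = X @ w + b
    correct = np.sign(margin) == y
    stable = np.abs(margin) > eps * np.abs(w).sum()
    return float(np.mean(correct & stable))
```

Roughly, the paper's objection is that this point-wise score says nothing about inputs near, but not in, the test set; the resilience measure it proposes instead is defined in the paper itself.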

Can I Take Your Subdomain? Exploring Related-Domain Attacks in the Modern Web

no code implementations • 3 Dec 2020 • Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Stefano Calzavara, Matteo Maffei

Related-domain attackers control a sibling domain of their target web application, e.g., as the result of a subdomain takeover.

Cryptography and Security
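One common root cause of the subdomain takeovers mentioned above is a dangling DNS record, e.g., a CNAME whose target was deprovisioned. The sketch below is a crude heuristic check using dnspython, not the paper's methodology; real detection also needs per-service fingerprinting, since a target can still resolve yet be claimable on its hosting service.

```python
import dns.resolver  # pip install dnspython

def dangling_cname(name):
    """Return the CNAME target if it no longer resolves (takeover candidate)."""
    try:
        target = dns.resolver.resolve(name, "CNAME")[0].target.to_text()
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None                  # no CNAME record at this name
    try:
        dns.resolver.resolve(target, "A")
        return None                  # target resolves: nothing obviously dangling
    except dns.resolver.NXDOMAIN:
        return target                # dangling target: candidate for takeover
```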

Certifying Decision Trees Against Evasion Attacks by Program Analysis

no code implementations • 6 Jul 2020 • Stefano Calzavara, Pietro Ferrara, Claudio Lucchese

Machine learning has proved invaluable for a range of different tasks, yet it has also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of input data designed to force mispredictions.
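For a single decision tree, the optimal evasion is easy to characterize exactly: every leaf covers an axis-aligned box of the input space, so the minimal L-infinity perturbation is the distance from the input to the nearest box whose leaf disagrees with the current label. A sketch of that attack follows (same hypothetical tree encoding as above, with labels at the leaves; this illustrates the threat model, not the paper's program-analysis certification):

```python
import math

def leaf_boxes(tree, n, lo=None, hi=None):
    """Yield (lows, highs, label) for each leaf; split test is x[f] <= t.
    Boundary strictness is ignored for simplicity."""
    if lo is None:
        lo, hi = [-math.inf] * n, [math.inf] * n
    if tree[0] == "leaf":
        yield lo[:], hi[:], tree[1]
        return
    _, f, t, left, right = tree
    lhi = hi[:]; lhi[f] = min(lhi[f], t)
    yield from leaf_boxes(left, n, lo[:], lhi)
    rlo = lo[:]; rlo[f] = max(rlo[f], t)
    yield from leaf_boxes(right, n, rlo, hi[:])

def min_evasion(tree, x, y):
    """Smallest L-inf radius that reaches a leaf predicting something != y."""
    best = math.inf
    for lo, hi, label in leaf_boxes(tree, len(x)):
        if label == y:
            continue
        # per-dimension distance from x to the box, then the L-inf norm
        dist = max(max(lo[i] - x[i], x[i] - hi[i], 0.0) for i in range(len(x)))
        best = min(best, dist)
    return best
```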

Treant: Training Evasion-Aware Decision Trees

1 code implementation • 2 Jul 2019 • Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando

Despite its success and popularity, machine learning is now recognized as vulnerable to evasion attacks, i.e., carefully crafted perturbations of test inputs designed to force prediction errors.

BIG-bench Machine Learning
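Evasion-aware training, as in the Treant entry above, changes the split-selection step: instances within eps of a candidate threshold can be pushed to either side at test time, so splits are scored under the attacker's worst case. A deliberately simplified sketch of that idea for a 0/1-loss decision stump (not Treant's actual algorithm, which optimizes a loss function under a richer attacker model):

```python
def worst_case_stump_errors(xs, ys, t, eps):
    """Worst-case 0/1 errors of the stump 'x <= t' when the attacker may
    shift each feature value by at most eps. Labels are in {0, 1} and each
    side predicts its clean majority label (a simplification)."""
    left    = [y for x, y in zip(xs, ys) if x <= t - eps]   # provably left
    right   = [y for x, y in zip(xs, ys) if x > t + eps]    # provably right
    movable = [y for x, y in zip(xs, ys) if t - eps < x <= t + eps]
    pred_l = round(sum(left) / len(left)) if left else 0
    pred_r = round(sum(right) / len(right)) if right else 1
    errors = sum(y != pred_l for y in left) + sum(y != pred_r for y in right)
    # A movable point is misclassified unless BOTH sides predict its label,
    # since the attacker sends it to whichever side gets it wrong.
    errors += sum(1 for y in movable if not (pred_l == y == pred_r))
    return errors
```

An evasion-aware learner would scan candidate thresholds and keep the one minimizing this worst-case count rather than the clean error.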
