Search Results for author: Lujo Bauer

Found 8 papers, 5 papers with code

Group-based Robustness: A General Framework for Customized Robustness in the Real World

1 code implementation • 29 Jun 2023 • Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks.

RS-Del: Edit Distance Robustness Certificates for Sequence Classifiers via Randomized Deletion

1 code implementation • NeurIPS 2023 • Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, Benjamin I. P. Rubinstein

When applied to the popular MalConv malware detection model, our smoothing mechanism RS-Del achieves a certified accuracy of 91% at an edit distance radius of 128 bytes.

Binary Classification • Malware Detection
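
A minimal sketch of the randomized-deletion smoothing idea behind RS-Del (illustrative only: the deletion rate `p`, the sample count, and the toy detector are assumptions, not the paper's configuration or the MalConv model):

```python
import random
from collections import Counter

def randomized_deletion(seq: bytes, p: float = 0.03) -> bytes:
    """Delete each byte of seq independently with probability p (assumed rate)."""
    return bytes(b for b in seq if random.random() >= p)

def smoothed_predict(classify, seq: bytes, n_samples: int = 100, p: float = 0.03) -> str:
    """Majority vote of a base classifier over randomly-deleted copies of seq.

    `classify` is any function mapping bytes to a label; here it stands in
    for a byte-level malware detector.
    """
    votes = Counter(classify(randomized_deletion(seq, p)) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Toy usage: a stand-in "detector" that flags sequences containing a marker byte pair.
toy_detector = lambda s: "malicious" if b"\x90\x90" in s else "benign"
print(smoothed_predict(toy_detector, b"\x00\x90\x90\x00" * 16))
```

Voting over many deleted copies is what makes the smoothed prediction stable under small edits; the paper's contribution is certifying that stability up to a given edit-distance radius.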

Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

1 code implementation • 28 Dec 2021 • Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, Mahmood Sharif

First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it.
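The excerpt references an objective "(1)" that is not shown here; in evasion attacks that objective is typically that the perturbed input is misclassified. A hedged sketch of a margin-style loss that encodes misclassification, under that assumption (not necessarily the paper's exact loss):

```python
import torch

def margin_loss(logits: torch.Tensor, true_label: int) -> torch.Tensor:
    """Margin between the true-class logit and the best other class.
    Driving this below zero means the input is misclassified, which is one
    common way to encode a misclassification objective directly in a loss."""
    true = logits[true_label]
    other = torch.max(torch.cat([logits[:true_label], logits[true_label + 1:]]))
    return true - other

logits = torch.tensor([2.0, 0.5, 1.0])
print(margin_loss(logits, true_label=1))  # negative => already misclassified
```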

$n$-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers

no code implementations • 19 Dec 2019 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

This paper proposes a new defense, called $n$-ML, against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications.

General Classification
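
For context on how such small perturbations are crafted, here is a minimal single-step FGSM sketch (FGSM is a standard baseline attack, not the $n$-ML defense or this paper's method; the `eps` value and the toy model are placeholders):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.03):
    """One-step FGSM: nudge x in the direction that increases the loss,
    producing a small perturbation that can flip the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy usage with a linear "model" on a 1x3x8x8 input:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)
label = torch.tensor([0])
x_adv = fgsm_perturb(model, x, label)
```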

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples

no code implementations • 27 Feb 2018 • Mahmood Sharif, Lujo Bauer, Michael K. Reiter

Combined with prior work, we thus demonstrate that nearness of inputs as measured by $L_p$-norms is neither necessary nor sufficient for perceptual similarity, which has implications for both creating and defending against adversarial examples.

Perceptual Distance
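
A hypothetical numeric illustration of why $L_p$ nearness need not track perceptual similarity (a toy example, not from the paper): the two perturbations below have identical $L_2$ norm, yet a uniform brightness shift is far less conspicuous than a concentrated bright patch.

```python
import numpy as np

img = np.zeros((32, 32))

uniform_shift = img + 0.01       # imperceptible global brightening
patch = img.copy()
patch[:4, :4] = 0.08             # a visible bright square

print(np.linalg.norm(uniform_shift - img))  # 0.32
print(np.linalg.norm(patch - img))          # also 0.32, but conspicuous
```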

A General Framework for Adversarial Examples with Objectives

4 code implementations • 31 Dec 2017 • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains.

Face Recognition
