1 code implementation • 9 Mar 2024 • Dennis Ulmer, Martin Gubri, Hwaran Lee, Sangdoo Yun, Seong Joon Oh
As large language models (LLMs) are increasingly deployed in user-facing applications, accurately quantifying a model's confidence in its predictions becomes ever more important for building trust and maintaining safety.
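A simple, generic baseline for such confidence estimates (not necessarily the method proposed in this work) is the length-normalised sequence likelihood, computed from the log-probabilities of the generated tokens. A minimal Python sketch:

    import math

    def sequence_confidence(token_logprobs):
        # Length-normalised sequence probability: exp of the mean token
        # log-probability, used as a crude per-answer confidence score.
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    # Example: a 4-token answer with fairly high per-token probabilities.
    print(sequence_confidence([-0.1, -0.3, -0.05, -0.2]))  # ~0.85

Likelihood-based scores like this are known to be poorly calibrated in practice, which is precisely the gap calibration methods aim to close.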
1 code implementation • 20 Feb 2024 • Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh
Large Language Model (LLM) services and models often come with legal rules on who may use them and how they may be used.
1 code implementation • 5 Apr 2023 • Martin Gubri, Maxime Cordy, Yves Le Traon
A common hypothesis to explain why early-stopped surrogate models yield more transferable adversarial examples is that deep neural networks (DNNs) first learn robust features, which are more generic and thus make a better surrogate.
1 code implementation • 26 Jul 2022 • Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen
We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks.
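In outline, LGV collects weight snapshots along a few extra epochs of SGD with a high constant learning rate, starting from an already-trained surrogate, and then attacks the collected models as an ensemble. A hedged PyTorch sketch (the function name and hyperparameters are illustrative, not the paper's exact settings):

    import copy
    import torch

    def collect_lgv_snapshots(model, loader, epochs=2, lr=0.05, saves_per_epoch=4):
        # Fine-tune a trained model with high constant-LR SGD and save
        # weight snapshots at regular intervals along the trajectory.
        model = copy.deepcopy(model)
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        loss_fn = torch.nn.CrossEntropyLoss()
        snapshots, step = [], 0
        save_every = max(1, len(loader) // saves_per_epoch)
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
                step += 1
                if step % save_every == 0:
                    snapshots.append(copy.deepcopy(model).eval())
        return snapshots  # attack these as a surrogate ensemble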
no code implementations • 14 Dec 2020 • Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon
Graph-based Semi-Supervised Learning (GSSL) is a practical solution to learn from a limited amount of labelled data together with a vast amount of unlabelled data.
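As a minimal illustration of the GSSL setting (label propagation is one classic GSSL algorithm; the dataset and hyperparameters here are illustrative), scikit-learn can spread a handful of known labels over a similarity graph built from all points:

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.semi_supervised import LabelPropagation

    X, y = make_moons(n_samples=300, noise=0.1, random_state=0)
    labels = np.full(len(y), -1)   # -1 marks unlabelled points
    labels[:10] = y[:10]           # keep only 10 of the 300 labels

    model = LabelPropagation(kernel="rbf", gamma=20).fit(X, labels)
    print((model.transduction_ == y).mean())  # accuracy over all points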
1 code implementation • 10 Nov 2020 • Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen
An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on an ensemble-based surrogate to increase diversity.
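A common form of this ensemble attack (a generic sketch, not this paper's specific contribution) averages the loss over all surrogate models at each step of an iterative gradient attack:

    import torch
    import torch.nn.functional as F

    def pgd_on_ensemble(models, x, y, eps=8/255, alpha=2/255, steps=10):
        # Each step follows the gradient of the loss averaged over all
        # surrogates, so the perturbation overfits less to any single model.
        # eps and alpha assume image inputs scaled to [0, 1].
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the eps-ball
            x_adv = x_adv.clamp(0, 1).detach()
        return x_adv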
1 code implementation • 6 Jan 2018 • Martin Gubri
Machine Learning models have been shown to be vulnerable to adversarial examples, i.e., inputs intentionally perturbed so that the model misclassifies them while they remain close to legitimate inputs.
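The canonical construction is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015); a minimal PyTorch sketch (eps assumes inputs scaled to [0, 1]):

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # One gradient step in the direction that most increases the loss,
        # bounded in L-infinity norm by eps.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()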