no code implementations • 22 Dec 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan approval or recidivism prediction.
no code implementations • 29 Aug 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
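A minimal sketch of this idea (not the paper's method): a fitted decision tree pins down parts of its training labels exactly, and the residual uncertainty can be measured leaf by leaf, here with scikit-learn.

```python
# Illustrative sketch: how much of its training labels does a tree reveal?
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

leaf_ids = tree.apply(X)                # leaf reached by each training example
counts = tree.tree_.value[leaf_ids, 0]  # per-leaf class distribution, one row per example
probs = counts / counts.sum(axis=1, keepdims=True)
# Shannon entropy of the label distribution in each example's leaf:
# 0 bits means the model's structure alone determines that example's label.
entropy = -np.sum(probs * np.log2(np.clip(probs, 1e-12, None)), axis=1)
print(f"labels fully determined by the model: {(entropy == 0).mean():.1%}")
print(f"mean residual label uncertainty: {entropy.mean():.3f} bits")
```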
1 code implementation • 24 Jul 2023 • Patrik Joslin Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji
Surprisingly, our framework outperforms models trained with constraints on the true sensitive attributes.
1 code implementation • 8 Mar 2023 • Julien Ferry, Gabriel Laberge, Ulrich Aïvodji
The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization.
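A minimal sketch of such a hybrid architecture (the gating rule and all names are illustrative assumptions, not the paper's API): a confidence threshold on an interpretable tree decides which inputs it handles, with a black box as fallback.

```python
# Hybrid model sketch: transparent gate -> interpretable model or black box.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, random_state=0)
glass = DecisionTreeClassifier(max_depth=3).fit(X, y)        # interpretable part
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def hybrid_predict(X, threshold=0.9):
    """Route an input to the tree when the tree is confident, else to the forest."""
    conf = glass.predict_proba(X).max(axis=1)
    gate = conf >= threshold                                 # the transparency knob
    preds = np.where(gate, glass.predict(X), black_box.predict(X))
    return preds, gate.mean()                                # predictions, coverage

preds, transparency = hybrid_predict(X)
print(f"share of inputs handled by the interpretable model: {transparency:.1%}")
```

Raising the threshold sends fewer inputs to the interpretable part, which is exactly the transparency control the snippet describes.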
no code implementations • 2 Sep 2022 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.
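A toy sketch of the correction idea (not the paper's solver): given a probabilistic guess of a binary sensitive attribute and a single released statistic, here an assumed group size k, return the closest guess that satisfies the constraint.

```python
# Reconstruction correction sketch: project an initial guess onto a constraint.
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(size=20)          # adversary's estimate of P(s_i = 1) per record
k = 8                             # constraint, e.g., derived from released fairness metadata

corrected = np.zeros_like(p, dtype=int)
corrected[np.argsort(p)[-k:]] = 1  # set the k most likely records to 1
# Among all guesses with exactly k ones, this one agrees most with p,
# i.e., it minimizes the expected changes to the adversary's initial guess.
print(corrected)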
1 code implementation • 30 May 2022 • Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh
SHAP explanations aim to identify which features contribute most to the difference between a model's prediction at a specific input and its average prediction over a background distribution.
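A hedged sketch of that background dependence, assuming the open-source `shap` package: the same model and inputs, explained against an honest versus a cherry-picked background, yield different attributions.

```python
# Sketch: SHAP attributions depend on the chosen background distribution.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

honest = shap.utils.sample(X, 100, random_state=0)  # representative background
biased = X[model.predict(X) == 1].head(100)         # cherry-picked background

for name, background in [("honest", honest), ("biased", biased)]:
    explainer = shap.Explainer(lambda d: model.predict_proba(d)[:, 1], background)
    sv = explainer(X.iloc[:20])
    print(name, sv.values.mean(axis=0).round(3))    # mean attribution per feature
```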
1 code implementation • NeurIPS 2021 • Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
In particular, we show that fairwashed explanation models can generalize beyond the suing group (i.e., data points that are being explained), meaning that a fairwashed explainer can be used to rationalize subsequent unfair decisions of a black-box model.
1 code implementation • 3 Sep 2020 • Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs
Post-hoc explanation techniques refer to a posteriori methods that can be used to explain how black-box machine learning models produce their outcomes.
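One common post-hoc technique is global surrogate modeling; here is a minimal scikit-learn sketch, where fidelity measures how faithfully the surrogate imitates the black box.

```python
# Post-hoc explanation sketch: distill a black box into a small surrogate tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))              # imitate the black box's outputs

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"fidelity to the black box: {fidelity:.1%}")
print(export_text(surrogate))                       # the human-readable explanation
```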
no code implementations • 26 Sep 2019 • Ulrich Aïvodji, Sébastien Gambs, Timon Ther
While some model inversion attacks have been developed in the past in the black-box attack setting, in which the adversary does not have direct access to the structure of the model, few of these have been conducted so far against complex models such as deep neural networks.
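A toy black-box inversion sketch (query access only; this is not the paper's attack): random local search for an input that maximizes the model's reported confidence for a target class, using nothing but prediction queries.

```python
# Black-box model inversion sketch via query-only random local search.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0).fit(X, y)

rng = np.random.default_rng(0)
target, x = 3, rng.uniform(0, 16, size=64)          # digits pixels lie in [0, 16]
best = model.predict_proba([x])[0][target]
for _ in range(2000):                                # each step costs one query
    cand = np.clip(x + rng.normal(0, 0.5, size=64), 0, 16)
    score = model.predict_proba([cand])[0][target]
    if score > best:                                 # keep improving candidates
        x, best = cand, score
print(f"confidence for class {target} after inversion: {best:.3f}")
```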
1 code implementation • 9 Sep 2019 • Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.
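A generic way to measure that tradeoff (not the paper's algorithm; the sensitive attribute below is synthetic, for illustration only): sweep the capacity of an interpretable model and report accuracy alongside the statistical parity gap.

```python
# Accuracy/fairness tradeoff sketch for an interpretable classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
s = (X[:, 0] > 0).astype(int)            # stand-in sensitive attribute

for depth in (2, 4, 8, None):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    pred = clf.predict(X)
    gap = abs(pred[s == 1].mean() - pred[s == 0].mean())  # statistical parity gap
    print(f"depth={depth}: accuracy={(pred == y).mean():.3f}, parity gap={gap:.3f}")
```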
no code implementations • 19 Jun 2019 • Ulrich Aïvodji, François Bidet, Sébastien Gambs, Rosin Claude Ngueveu, Alain Tapp
The widespread use of automated decision processes in many areas of our society raises serious ethical issues concerning the fairness of the process and the discrimination that may result from it.
1 code implementation • 28 Jan 2019 • Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden from the auditor and generally complex -- produces its outcomes.
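A toy sketch in the spirit of rationalization (illustrative, not the paper's enumeration method): among several high-fidelity interpretable surrogates of a black box, report the one that looks fairest, without changing the black box at all.

```python
# Fairwashing sketch: enumerate surrogates, pick the one with the best optics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
s = (X[:, 1] > 0).astype(int)                   # stand-in sensitive attribute
black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

candidates = []
for seed in range(20):                           # enumeration via random restarts
    surr = DecisionTreeClassifier(max_depth=4, random_state=seed,
                                  splitter="random").fit(X, bb_pred)
    pred = surr.predict(X)
    fidelity = (pred == bb_pred).mean()
    gap = abs(pred[s == 1].mean() - pred[s == 0].mean())
    candidates.append((fidelity, gap))

honest_gap = abs(bb_pred[s == 1].mean() - bb_pred[s == 0].mean())
candidates.sort(reverse=True)                    # highest fidelity first
best = min(candidates[:5], key=lambda c: c[1])   # fairest of the 5 most faithful
print(f"black box parity gap: {honest_gap:.3f}")
print(f"fairwashed surrogate: fidelity={best[0]:.3f}, apparent gap={best[1]:.3f}")
```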