Search Results for author: Julien Ferry

Found 8 papers, 3 papers with code

Smooth Sensitivity for Learning Differentially-Private yet Accurate Rule Lists

no code implementations • 18 Mar 2024 • Timothée Ly, Julien Ferry, Marie-José Huguet, Sébastien Gambs, Ulrich Aïvodji

Differentially-private (DP) mechanisms can be embedded into the design of a machine learning algorithm to protect the resulting model against privacy leakage, although this often comes with a significant loss of accuracy.
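
For context, here is a minimal sketch of the smooth-sensitivity mechanism (Nissim et al., 2007) that the title refers to, using one standard Cauchy-noise calibration; the function name and the statistic in the example are illustrative, not the authors' implementation:

```python
import numpy as np

def release_with_smooth_sensitivity(value, smooth_sens, epsilon):
    """Release a statistic with Cauchy noise scaled to its smooth sensitivity.

    `smooth_sens` must be a beta-smooth upper bound on local sensitivity
    with beta = epsilon / 6; the 6 * S / epsilon Cauchy calibration then
    yields pure epsilon-DP, typically with far less noise than calibrating
    to global sensitivity.
    """
    rng = np.random.default_rng()
    scale = 6.0 * smooth_sens / epsilon
    return value + scale * rng.standard_cauchy()

# Hypothetical example: privately release a Gini impurity whose smooth
# sensitivity was bounded at 0.01 for this node.
noisy_gini = release_with_smooth_sensitivity(0.35, smooth_sens=0.01, epsilon=1.0)
```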

Trained Random Forests Completely Reveal your Dataset

1 code implementation • 29 Feb 2024 • Julien Ferry, Ricardo Fukasawa, Timothée Pascal, Thibaut Vidal

Even with bootstrap aggregation, the majority of the data can also be reconstructed.

Reconstruction Attack

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning

no code implementations • 22 Dec 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction.

Decision Making • Fairness

Probabilistic Dataset Reconstruction from Interpretable Models

no code implementations • 29 Aug 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
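
To make the idea concrete, here is a toy uncertainty computation, assuming binary features and a decision tree whose root-to-leaf paths fix some of them; this is a simplified instance of the probabilistic-reconstruction setting, not the paper's algorithm:

```python
# Entropy (in bits) of the uniform distribution over datasets consistent
# with a trained decision tree: each example at a leaf is free on every
# binary feature its root-to-leaf path does not test.
def reconstruction_entropy_bits(n_features, leaves):
    """`leaves` is a list of (n_examples_at_leaf, n_features_fixed_by_path)."""
    return sum(n * (n_features - fixed) for n, fixed in leaves)

# Hypothetical example: 10 binary features; one leaf holds 5 examples with
# 3 features fixed by its path, another holds 2 examples with 4 fixed.
print(reconstruction_entropy_bits(10, [(5, 3), (2, 4)]))  # 47 bits
```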

Learning Optimal Fair Scoring Systems for Multi-Class Classification

no code implementations • 11 Apr 2023 • Julien Rouzot, Julien Ferry, Marie-José Huguet

In this paper, we use Mixed-Integer Linear Programming (MILP) techniques to produce inherently interpretable scoring systems under sparsity and fairness constraints, for the general multi-class classification setup.

Binary Classification • Classification +3
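
For intuition, here is a hedged sketch of the kind of MILP involved, restricted to the binary case and omitting the fairness constraints for brevity; it uses PuLP, and all variable names are ours, not the paper's formulation:

```python
import pulp

def fit_scoring_system(X, y, max_points=5, max_features=3, big_m=100):
    """Fit small integer points per feature under a sparsity budget,
    minimising the number of misclassified examples (y in {-1, +1})."""
    n, d = len(X), len(X[0])
    prob = pulp.LpProblem("scoring_system", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}", -max_points, max_points, cat="Integer") for j in range(d)]
    u = [pulp.LpVariable(f"u{j}", cat="Binary") for j in range(d)]  # feature used?
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]  # misclassified?
    prob += pulp.lpSum(z)                          # objective: minimise errors
    for j in range(d):                             # |w_j| <= max_points * u_j
        prob += w[j] <= max_points * u[j]
        prob += -w[j] <= max_points * u[j]
    prob += pulp.lpSum(u) <= max_features          # sparsity budget
    for i in range(n):                             # margin >= 1 unless z_i = 1
        score = pulp.lpSum(X[i][j] * w[j] for j in range(d))
        prob += y[i] * score >= 1 - big_m * z[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(v.value()) for v in w]
```

The fairness constraints of the paper would enter as additional linear constraints over the z variables of each protected group; the multi-class extension replaces the single weight vector with one per class.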

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods

1 code implementation • 8 Mar 2023 • Julien Ferry, Gabriel Laberge, Ulrich Aïvodji

The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization.
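
A minimal sketch of such a hybrid model, assuming a rule list as the interpretable component and any `.predict()`-style black box; the API is illustrative:

```python
class HybridModel:
    """Route inputs matched by an interpretable rule to a transparent
    prediction; everything else falls through to the black box."""

    def __init__(self, rules, black_box):
        self.rules = rules          # list of (predicate, label) pairs
        self.black_box = black_box  # any model exposing .predict()

    def predict_one(self, x):
        for predicate, label in self.rules:
            if predicate(x):
                return label, "interpretable"  # transparent path
        return self.black_box.predict([x])[0], "black-box"
```

The level of transparency mentioned above then corresponds to the fraction of inputs resolved by the rules rather than by the black box.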

Exploiting Fairness to Enhance Sensitive Attributes Reconstruction

no code implementations • 2 Sep 2022 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.

Fairness
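
A toy instance of such a correction, assuming the adversary guesses a binary sensitive attribute and the published fairness information pins down how many positively-classified individuals carry s = 1; the paper's generic method handles broader constraint classes:

```python
def correct_guess(guess, confidence, positive_idx, k):
    """Flip the least-confident guesses among `positive_idx` so that exactly
    `k` of them are 1, changing the adversary's guess as little as possible."""
    guess = list(guess)
    ones = [i for i in positive_idx if guess[i] == 1]
    zeros = [i for i in positive_idx if guess[i] == 0]
    if len(ones) > k:    # too many 1s: flip the shakiest 1s to 0
        for i in sorted(ones, key=lambda i: confidence[i])[: len(ones) - k]:
            guess[i] = 0
    elif len(ones) < k:  # too few 1s: flip the shakiest 0s to 1
        for i in sorted(zeros, key=lambda i: confidence[i])[: k - len(ones)]:
            guess[i] = 1
    return guess
```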

Learning Fair Rule Lists

1 code implementation • 9 Sep 2019 • Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.

Classification • Decision Making +2
