Search Results for author: Gabriel Laberge

Found 7 papers, 5 papers with code

Mining Action Rules for Defect Reduction Planning

no code implementations 22 May 2024 Khouloud Oueslati, Gabriel Laberge, Maxime Lamothe, Foutse Khomh

In this paper, we introduce CounterACT, a Counterfactual ACTion rule mining approach that can generate defect reduction plans without black-box models.

Counterfactual Explanation

Detection and Evaluation of Bias-Inducing Features in Machine Learning

no code implementations 19 Oct 2023 Moses Openja, Gabriel Laberge, Foutse Khomh

In this study, we propose an approach for systematically identifying all bias-inducing features of a model to help support the decision-making of domain experts.

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods

1 code implementation 8 Mar 2023 Julien Ferry, Gabriel Laberge, Ulrich Aïvodji

The advantages of such models over classical ones are two-fold: 1) they grant users precise control over the level of transparency of the system, and 2) they can potentially outperform a standalone black box, since redirecting some of the inputs to an interpretable model implicitly acts as regularization.
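As a rough illustration of the hybrid idea described above, the sketch below routes each input through a gate: inputs the gate covers are handled by a transparent rule, the rest are deferred to a black-box fallback. All names (`HybridModel`, the toy gate and components) are illustrative, not the paper's API.

```python
# Hypothetical sketch of a hybrid interpretable model: a gate decides
# whether an input is handled by a transparent rule or deferred to a
# black-box fallback. Components here are toy stand-ins.

class HybridModel:
    def __init__(self, gate, interpretable, black_box):
        self.gate = gate                    # True -> use the interpretable part
        self.interpretable = interpretable  # transparent component
        self.black_box = black_box          # opaque fallback

    def predict(self, x):
        if self.gate(x):
            return self.interpretable(x), "interpretable"
        return self.black_box(x), "black-box"

    def transparency(self, inputs):
        # fraction of inputs explained by the transparent component
        return sum(1 for x in inputs if self.gate(x)) / len(inputs)

# toy components: a simple rule covers "easy" inputs far from the boundary
model = HybridModel(
    gate=lambda x: abs(x) > 2.0,               # rule's confident region
    interpretable=lambda x: 1 if x > 0 else 0, # transparent threshold rule
    black_box=lambda x: 1 if 0.3 * x + 0.1 > 0 else 0,
)

print(model.predict(3.5))                           # handled transparently
print(model.predict(0.5))                           # deferred to the black box
print(model.transparency([-3.0, -1.0, 0.5, 2.5]))
```

Tightening or loosening the gate trades transparency (how many inputs the interpretable part covers) against overall accuracy, which is the control knob the abstract refers to.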

Understanding Interventional TreeSHAP: How and Why it Works

1 code implementation 29 Sep 2022 Gabriel Laberge, Yann Pequignot

Shapley values are ubiquitous in interpretable Machine Learning due to their strong theoretical background and efficient implementation in the SHAP library.
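For context, interventional Shapley values can be computed exactly by brute-force coalition enumeration; the sketch below does this for a toy model (this is the quantity TreeSHAP computes efficiently for trees, not the paper's algorithm itself).

```python
# Brute-force interventional Shapley values: features outside a coalition
# are averaged over a background dataset. Exponential in the number of
# features, so only usable on toy examples.
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    n = len(x)

    def value(coalition):
        # E[f(z)] with z_i = x_i for i in the coalition, background otherwise
        total = 0.0
        for b in background:
            z = [x[i] if i in coalition else b[i] for i in range(n)]
            total += f(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# toy model: a linear term plus an interaction term
f = lambda z: 2.0 * z[0] + z[1] * z[2]
x = [1.0, 1.0, 1.0]
background = [[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]]
phi = shapley_values(f, x, background)
print(phi, sum(phi))
```

By the efficiency property, the attributions sum to `f(x)` minus the mean model output over the background, which is one way to sanity-check any SHAP implementation.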

Interpretable Machine Learning

Fool SHAP with Stealthily Biased Sampling

1 code implementation 30 May 2022 Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh

SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution.
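Since the explanation is defined relative to a background distribution, whoever chooses (or subsamples) that background can shift the attributions. The sketch below is not the paper's attack, just a minimal demonstration on a linear model, where the interventional SHAP value of feature `i` reduces to `w[i] * (x[i] - mean(background[i]))`.

```python
# For a linear model, interventional SHAP values have a closed form:
# phi_i = w_i * (x_i - mean of feature i over the background).
# Dropping background rows therefore changes the reported explanation.

def linear_shap(w, x, background):
    n = len(x)
    means = [sum(b[i] for b in background) / len(background) for i in range(n)]
    return [w[i] * (x[i] - means[i]) for i in range(n)]

w = [1.0, -2.0]
x = [2.0, 0.0]
full_bg = [[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]]
biased_bg = [b for b in full_bg if b[1] >= 2.0]  # quietly drop half the rows

print(linear_shap(w, x, full_bg))    # explanation under the honest background
print(linear_shap(w, x, biased_bg))  # same model, same input, shifted attribution
```

Here the biased subsample doubles feature 1's apparent contribution without touching the model or the input, which is the vulnerability the paper's title alludes to.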

Fairness

Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set

1 code implementation26 Oct 2021 Gabriel Laberge, Yann Pequignot, Alexandre Mathieu, Foutse khomh, Mario Marchand

In this work, instead of aiming at reducing the under-specification of model explanations, we fully embrace it and extract logical statements about feature attributions that are consistent across all models with good empirical performance (i.e., all models in the Rashomon Set).
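One way to picture the consensus idea is as a partial order: a pairwise importance comparison is kept only if every well-performing model agrees on it. The sketch below applies this to toy attribution vectors; the function name and data are illustrative, not the paper's implementation.

```python
# Toy consensus over a "Rashomon set": keep only the pairwise
# feature-importance orderings on which ALL models agree, yielding
# a partial order instead of a single contested ranking.
from itertools import combinations

def consensus_partial_order(attributions):
    """Return pairs (i, j) such that |phi_i| > |phi_j| for every model."""
    n = len(attributions[0])
    order = set()
    for i, j in combinations(range(n), 2):
        if all(abs(a[i]) > abs(a[j]) for a in attributions):
            order.add((i, j))
        elif all(abs(a[j]) > abs(a[i]) for a in attributions):
            order.add((j, i))
        # otherwise the models disagree: leave the pair incomparable
    return order

# attribution vectors from three (toy) models with similar performance
rashomon_attributions = [
    [0.9, 0.5, 0.1],
    [0.8, 0.6, 0.2],
    [0.7, 0.3, 0.4],  # this model ranks features 1 and 2 the other way
]
print(consensus_partial_order(rashomon_attributions))
```

All models agree that feature 0 dominates the others, but disagree on features 1 vs 2, so that pair stays incomparable: a conclusion the whole Rashomon set supports, rather than one model's ranking.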

Additive models Feature Importance +1
