no code implementations • 22 May 2024 • Khouloud Oueslati, Gabriel Laberge, Maxime Lamothe, Foutse Khomh
In this paper, we introduce CounterACT, a Counterfactual ACTion rule mining approach that can generate defect reduction plans without black-box models.
no code implementations • 19 Oct 2023 • Moses Openja, Gabriel Laberge, Foutse Khomh
In this study, we propose an approach for systematically identifying all bias-inducing features of a model to help support the decision-making of domain experts.
1 code implementation • 8 Mar 2023 • Julien Ferry, Gabriel Laberge, Ulrich Aïvodji
The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization.
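The routing idea behind such hybrid models can be sketched as follows. This is a minimal illustration, not the paper's method: the gate, the rule, and the stand-in black box are all hypothetical, and only the general principle (a transparency knob that decides which inputs the interpretable model handles) comes from the text above.

```python
import numpy as np

def interpretable_rule(x):
    # A single threshold rule a human can audit (hypothetical example).
    return 1 if x[0] > 0 else 0

def black_box(x):
    # Stand-in for an opaque model (hypothetical example).
    return int(np.sin(x @ np.arange(1, len(x) + 1)) > 0)

def hybrid_predict(x, gate_threshold=1.0):
    # Transparency knob: raising gate_threshold routes more inputs to the
    # interpretable rule and fewer to the black box.
    if abs(x[0]) >= gate_threshold:
        return interpretable_rule(x)
    return black_box(x)
```

With `gate_threshold=1.0`, an input like `[2.0, 0.0]` is handled by the auditable rule, while inputs near the decision boundary fall through to the black box; sweeping the threshold trades transparency against raw performance.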
1 code implementation • 29 Sep 2022 • Gabriel Laberge, Yann Pequignot
Shapley values are ubiquitous in interpretable Machine Learning due to their strong theoretical background and efficient implementation in the SHAP library.
1 code implementation • 30 May 2022 • Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, Mario Marchand, Foutse Khomh
SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution.
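For a linear model this "input versus background" decomposition can be computed in closed form, which makes the idea concrete. A minimal sketch, assuming a toy linear model (all names and data here are illustrative, not from the paper): the SHAP value of feature i is w_i * (x_i - E[x_i]), and the attributions sum exactly to the gap between the model's prediction at x and its mean prediction over the background.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model f(x) = w @ x.
w = np.array([2.0, -1.0, 0.5])
background = rng.normal(size=(1000, 3))  # sample from the background distribution
x = np.array([1.0, 2.0, -1.0])           # specific input to explain

f = lambda X: X @ w

# Exact SHAP values for a linear model w.r.t. this background.
shap_values = w * (x - background.mean(axis=0))

# Efficiency property: attributions sum to f(x) minus the mean background prediction.
gap = f(x) - f(background).mean()
print(np.isclose(shap_values.sum(), gap))  # True
```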
1 code implementation • 26 Oct 2021 • Gabriel Laberge, Yann Pequignot, Alexandre Mathieu, Foutse Khomh, Mario Marchand
In this work, instead of aiming at reducing the under-specification of model explanations, we fully embrace it and extract logical statements about feature attributions that are consistent across all models with good empirical performance (i.e., all models in the Rashomon Set).
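The notion of a statement that holds across all well-performing models can be sketched on a toy problem. This is an illustrative assumption-laden example, not the paper's algorithm: the "Rashomon set" below is just the set of linear models whose MSE is within 5% of the best fit, and the extracted statement is that one feature's weight keeps its sign across the whole set.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Best-fitting linear model and its empirical loss.
best_w, *_ = np.linalg.lstsq(X, y, rcond=None)
best_mse = np.mean((X @ best_w - y) ** 2)

# Toy "Rashomon set": sampled linear models whose MSE is within 5% of the best.
rashomon = []
for _ in range(2000):
    w = best_w + rng.normal(scale=0.02, size=2)
    if np.mean((X @ w - y) ** 2) <= 1.05 * best_mse:
        rashomon.append(w)

# Logical statement consistent across all good models: feature 0's
# attribution keeps a positive sign for every model in the set.
print(all(w[0] > 0 for w in rashomon))  # True
```

The point is that such a statement is robust to model under-specification: any model a practitioner could reasonably have picked agrees on it.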
1 code implementation • 26 Jul 2021 • Florian Tambon, Gabriel Laberge, Le An, Amin Nikanjam, Paulina Stevia Nouwou Mindom, Yann Pequignot, Foutse Khomh, Giulio Antoniol, Ettore Merlo, François Laviolette
Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems.