no code implementations • 20 May 2024 • Olivier Letoffe, Xuanxiang Huang, Nicholas Asher, Joao Marques-Silva
The importance of explainability by feature attribution is illustrated by the recent, ubiquitous use of tools such as SHAP and LIME (a brief usage sketch follows this entry).
Explainable Artificial Intelligence (XAI) +2
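As a rough illustration of such tools, the sketch below computes SHAP attributions for one instance of a toy scikit-learn model; the dataset, model, and settings are illustrative choices, not taken from the paper (LIME would be used analogously).

```python
# Minimal sketch: feature attribution with the shap library on a toy model.
# The dataset, model, and hyperparameters are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP scores for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one instance

print(shap_values)
```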
no code implementations • 14 May 2024 • Yacine Izza, Xuanxiang Huang, Antonio Morgado, Jordi Planes, Alexey Ignatiev, Joao Marques-Silva
Logic-based XAI is a rigorous approach to explainability: it is model-based and offers the strongest guarantees of rigor for the computed explanations (a sketch of one such computation follows this entry).
Adversarial Robustness Explainable artificial intelligence +1
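For a flavor of what logic-based XAI computes, here is a minimal sketch of the standard deletion-based procedure for one abductive explanation (AXp) of a toy Boolean classifier. A real logic-based tool would replace the brute-force sufficiency check with a SAT/SMT oracle; all names here are illustrative.

```python
# Sketch: deletion-based computation of one abductive explanation (AXp).
# A real logic-based tool would replace `is_sufficient` with a SAT/SMT
# oracle; a brute-force check over Boolean inputs stands in for it here.
from itertools import product

def is_sufficient(f, x, S, n):
    """True iff fixing the features in S to their values in x forces f
    to its value on x, for every completion of the remaining features."""
    target = f(x)
    free = [i for i in range(n) if i not in S]
    for bits in product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        if f(tuple(z)) != target:
            return False
    return True

def one_axp(f, x, n):
    """Deletion-based loop: start from all features and drop any feature
    whose removal keeps the set sufficient; the result is subset-minimal."""
    S = set(range(n))
    for i in range(n):
        if is_sufficient(f, x, S - {i}, n):
            S.remove(i)
    return S

f = lambda x: int(x[0] and (x[1] or x[2]))  # toy classifier
print(one_axp(f, (1, 1, 0), 3))  # {0, 1}
```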
no code implementations • 30 Apr 2024 • Olivier Letoffe, Xuanxiang Huang, Joao Marques-Silva
Recent work uncovered examples of classifiers for which SHAP scores yield misleading feature attributions.
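The SHAP scores at issue are the exact, model-based Shapley values. A brute-force sketch under the common uniform-distribution formulation, where the characteristic function is v(S) = E[f(z) | z_S = x_S], is shown below; the toy classifier is an illustrative choice.

```python
# Sketch: exact SHAP scores for a Boolean classifier under the uniform
# input distribution, i.e. v(S) = E[f(z) | z_S = x_S], plugged into the
# Shapley formula. Brute force; feasible only for a handful of features.
from itertools import product, combinations
from math import factorial

def value(f, x, S, n):
    """Expected value of f when the features in S are fixed to x."""
    free = [i for i in range(n) if i not in S]
    total = 0
    for bits in product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        total += f(tuple(z))
    return total / 2 ** len(free)

def shapley(f, x, n, i):
    """Exact Shapley value of feature i for instance x."""
    others = [j for j in range(n) if j != i]
    sv = 0.0
    for k in range(n):
        w = factorial(k) * factorial(n - k - 1) / factorial(n)
        for S in combinations(others, k):
            sv += w * (value(f, x, set(S) | {i}, n) - value(f, x, set(S), n))
    return sv

f = lambda x: int(x[0] and (x[1] or x[2]))  # toy classifier
x = (1, 1, 0)
print([round(shapley(f, x, 3, i), 4) for i in range(3)])
# approximately [0.4167, 0.2917, -0.0833]
```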
no code implementations • 30 Sep 2023 • Xuanxiang Huang, Joao Marques-Silva
Recent work demonstrated the inadequacy of Shapley values for explainable artificial intelligence (XAI).
Explainable Artificial Intelligence (XAI)
no code implementations • 6 Sep 2023 • Xuanxiang Huang, Joao Marques-Silva
This earlier work devised a brute-force approach to identify Boolean functions, defined on small numbers of features, together with associated instances, that exhibited such inadequacy-revealing issues, thereby serving as evidence of the inadequacy of Shapley values for rule-based explainability.
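A hedged sketch of that kind of brute-force search is shown below: it enumerates all 256 Boolean functions on 3 features and flags instances where a feature occurring in no subset-minimal sufficient set (no AXp) nonetheless receives a nonzero exact Shapley value, one of several issues catalogued in this line of work. The code is illustrative, not the authors' implementation.

```python
# Sketch: brute-force search over all Boolean functions on N features for
# instances where an irrelevant feature (one occurring in no AXp) gets a
# nonzero exact Shapley value. Illustrative, not the authors' code.
from itertools import product, combinations
from math import factorial

N = 3
POINTS = list(product([0, 1], repeat=N))

def value(f, x, S):
    """E[f(z) | z_S = x_S] under the uniform distribution."""
    pts = [z for z in POINTS if all(z[i] == x[i] for i in S)]
    return sum(f[z] for z in pts) / len(pts)

def sufficient(f, x, S):
    return all(f[z] == f[x] for z in POINTS
               if all(z[i] == x[i] for i in S))

def relevant_features(f, x):
    """Features occurring in some subset-minimal sufficient set (AXp)."""
    suff = [set(S) for k in range(N + 1)
            for S in combinations(range(N), k) if sufficient(f, x, set(S))]
    minimal = [S for S in suff if not any(T < S for T in suff)]
    return set().union(*minimal)

def shapley(f, x, i):
    others = [j for j in range(N) if j != i]
    sv = 0.0
    for k in range(N):
        w = factorial(k) * factorial(N - k - 1) / factorial(N)
        for S in combinations(others, k):
            sv += w * (value(f, x, set(S) | {i}) - value(f, x, set(S)))
    return sv

# Enumerate all 2^(2^N) Boolean functions as truth tables.
for bits in product([0, 1], repeat=len(POINTS)):
    f = dict(zip(POINTS, bits))
    for x in POINTS:
        rel = relevant_features(f, x)
        for i in range(N):
            if i not in rel and abs(shapley(f, x, i)) > 1e-9:
                print(f"issue: table={bits} x={x} feature={i}")
```

For instance, with f(x) = x0 AND (x1 OR x2) at x = (1, 1, 0), the only AXp is {0, 1}, yet the exact Shapley value of feature 2 works out to -1/12, so the search reports it.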
no code implementations • 27 Jun 2023 • Joao Marques-Silva, Xuanxiang Huang
Explainable artificial intelligence (XAI) aims to help human decision-makers understand complex machine learning (ML) models.
Explainable Artificial Intelligence (XAI) +1
no code implementations • 5 Jun 2023 • Xuanxiang Huang, Joao Marques-Silva
In contrast with ad-hoc methods for eXplainable Artificial Intelligence (XAI), formal explainability offers important guarantees of rigor (a contrastive-explanation sketch follows this entry).
Explainable Artificial Intelligence (XAI)
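Besides abductive explanations, formal explainability also defines contrastive explanations (CXps) and relates the two by minimal-hitting-set duality. The brute-force sketch below illustrates both on a toy Boolean classifier; all names are illustrative.

```python
# Sketch: contrastive explanations (CXps) for a Boolean classifier, and
# the minimal-hitting-set duality between AXps and CXps. Brute force
# over all feature subsets; illustrative only.
from itertools import product, combinations

N = 3
POINTS = list(product([0, 1], repeat=N))
f = lambda x: int(x[0] and (x[1] or x[2]))  # toy classifier
x = (1, 1, 0)

def sufficient(S):
    """S is sufficient if fixing it to x forces the prediction."""
    return all(f(z) == f(x) for z in POINTS
               if all(z[i] == x[i] for i in S))

def minimal(sets):
    return [S for S in sets if not any(T < S for T in sets)]

subsets = [set(S) for k in range(N + 1) for S in combinations(range(N), k)]
axps = minimal([S for S in subsets if sufficient(S)])
# A CXp is a subset-minimal set Y such that changing only the features
# in Y can flip the prediction, i.e. the complement of Y is insufficient.
cxps = minimal([S for S in subsets if not sufficient(set(range(N)) - S)])

print("AXps:", axps)  # [{0, 1}]
print("CXps:", cxps)  # [{0}, {1}] -- each CXp hits every AXp, and vice versa
```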
no code implementations • 16 Feb 2023 • Xuanxiang Huang, Joao Marques-Silva
This paper develops a rigorous argument for why the use of Shapley values in explainable AI (XAI) will necessarily yield provably misleading information about the relative importance of features for predictions.
no code implementations • 12 Dec 2022 • Yacine Izza, Xuanxiang Huang, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Joao Marques-Silva
One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness.
1 code implementation • 27 Oct 2022 • Xuanxiang Huang, Martin C. Cooper, Antonio Morgado, Jordi Planes, Joao Marques-Silva
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
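Under that definition, queries such as feature relevancy (does a feature occur in some subset-minimal sufficient set?) and feature necessity (does it occur in every one?) follow naturally. The sketch below brute-forces both on a toy Boolean classifier; it is an illustration of the definitions, not the paper's implementation.

```python
# Sketch: deciding feature relevancy (occurs in some subset-minimal
# sufficient set, i.e. some AXp) and necessity (occurs in every AXp)
# by brute force on a toy classifier. Illustrative only.
from itertools import product, combinations

N = 3
POINTS = list(product([0, 1], repeat=N))
f = lambda x: int(x[0] and (x[1] or x[2]))  # toy classifier
x = (1, 1, 0)

def sufficient(S):
    return all(f(z) == f(x) for z in POINTS
               if all(z[i] == x[i] for i in S))

subsets = [set(S) for k in range(N + 1) for S in combinations(range(N), k)]
suff = [S for S in subsets if sufficient(S)]
axps = [S for S in suff if not any(T < S for T in suff)]

for i in range(N):
    rel = any(i in S for S in axps)
    nec = all(i in S for S in axps)
    print(f"feature {i}: relevant={rel} necessary={nec}")
```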
no code implementations • 15 Feb 2022 • Xuanxiang Huang, Joao Marques-Silva
In contrast, this paper shows that, for a number of families of classifiers, the feature membership problem (FMP) is in NP.
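One way to see such an NP bound is guess-and-check: a certificate for feature t is any set S containing t that is sufficient for the prediction while S \ {t} is not, since by monotonicity of sufficiency such an S can always be shrunk to a subset-minimal explanation that still contains t. The sketch below brute-forces the sufficiency check on a toy classifier; for classifier families like those studied in the paper, such checks can be done in polynomial time, which is what yields the NP bound. Names are illustrative.

```python
# Sketch: the guess-and-check structure behind NP membership of the
# feature membership problem (FMP). Certificate for feature t: a set S
# with t in S that is sufficient while S \ {t} is not. Sufficiency is
# brute-forced here; it is polynomial-time for suitable classifier
# families, which gives the NP bound.
from itertools import product

N = 3
POINTS = list(product([0, 1], repeat=N))
f = lambda x: int(x[0] and (x[1] or x[2]))  # toy classifier
x = (1, 1, 0)

def sufficient(S):
    return all(f(z) == f(x) for z in POINTS
               if all(z[i] == x[i] for i in S))

def check_certificate(S, t):
    """Verify that S witnesses membership of feature t in some AXp."""
    return t in S and sufficient(S) and not sufficient(S - {t})

print(check_certificate({0, 1}, 0))     # True: feature 0 is in some AXp
print(check_certificate({0, 1, 2}, 1))  # True: a certificate need not be minimal
print(check_certificate({0, 1}, 2))     # False: 2 is not even in S
```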
no code implementations • 4 Jul 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, Joao Marques-Silva
Knowledge compilation (KC) languages find a growing number of practical uses, including in Constraint Programming (CP) and in Machine Learning (ML).
1 code implementation • 2 Jun 2021 • Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Joao Marques-Silva
Recent work has not only shown that decision trees (DTs) may fail to be interpretable, but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT.
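A rough sketch of the task (not necessarily the authors' algorithm) appears below: for a decision tree, sufficiency of a set of fixed features can be checked by a single traversal that follows both children at free features, so a deletion-based loop yields one PI-explanation in polynomial time. The tree encoding and names are illustrative.

```python
# Sketch: a polynomial-time PI-explanation (AXp) for a decision tree,
# via deletion over the instance's features with a tree-traversal
# sufficiency check. Illustrative, not necessarily the paper's algorithm.
# Trees are nested tuples: an int is a leaf (class); (feature, low, high)
# is an internal node testing one Boolean feature.

def classes_reachable(node, x, S):
    """Classes reachable when features in S are fixed to their values
    in x and the remaining features may take either value."""
    if isinstance(node, int):
        return {node}
    feat, low, high = node
    if feat in S:
        return classes_reachable(high if x[feat] else low, x, S)
    return classes_reachable(low, x, S) | classes_reachable(high, x, S)

def one_pi_explanation(tree, x, n):
    """Deletion-based AXp: drop features whose removal keeps the
    prediction forced; each check is one polynomial tree traversal."""
    target = classes_reachable(tree, x, set(range(n)))
    S = set(range(n))
    for i in range(n):
        if classes_reachable(tree, x, S - {i}) == target:
            S.remove(i)
    return S

# Toy tree encoding f(x) = x0 AND (x1 OR x2):
tree = (0, 0, (1, (2, 0, 1), 1))
print(one_pi_explanation(tree, (1, 1, 0), 3))  # {0, 1}
```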