Explainable Artificial Intelligence (XAI)

206 papers with code • 0 benchmarks • 2 datasets


Most implemented papers

RISE: Randomized Input Sampling for Explanation of Black-box Models

eclique/RISE 19 Jun 2018

We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments.
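The core of RISE is simple enough to sketch: probe the black-box model with many random binary masks (upsampled from a coarse grid) and average the masks weighted by the model's score on each masked input. The helper below is an illustrative minimal version for a single-channel image and a scalar-scoring model, not the authors' implementation; all names are ours.

```python
import numpy as np

def rise_saliency(model, image, n_masks=1000, grid=7, p=0.5, seed=0):
    """Minimal RISE-style saliency sketch (hypothetical helper).

    `model` maps an HxW array to a scalar class score. Each mask is a
    random low-resolution binary grid upsampled to image size; the
    saliency map is the score-weighted average of the masks.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # coarse random mask, each cell kept with probability p
        coarse = (rng.random((grid, grid)) < p).astype(float)
        # nearest-neighbor upsample to image resolution
        mask = np.kron(coarse, np.ones((H // grid + 1, W // grid + 1)))[:H, :W]
        saliency += model(image * mask) * mask
    # normalize by expected mask coverage
    return saliency / (n_masks * p)
```

Pixels that the model's score depends on end up with higher average weight than pixels that only co-occur with them by chance.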

Proposed Guidelines for the Responsible Use of Explainable Machine Learning

jphall663/kdd_2019 8 Jun 2019

Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models.

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy

chr5tphr/zennit 24 Jun 2021

Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood.

Contrastive Explanations with Local Foil Trees

MarcelRobeer/ContrastiveExplanation 19 Jun 2018

Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks.

AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark

soerenab/AudioMNIST 9 Jul 2018

Explainable Artificial Intelligence (XAI) is targeted at understanding how models perform feature selection and derive their classification decisions.

Do Not Trust Additive Explanations

ModelOriented/iBreakDown 27 Mar 2019

Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
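The additive explanations this paper scrutinizes assign each feature a contribution such that the contributions sum to the difference between the prediction and a baseline. A sequential break-down, in the spirit of the iBreakDown repository above (sketched from the general idea, not the package API), is one way to obtain them:

```python
import numpy as np

def breakdown_attributions(model, x, baseline, order=None):
    """Sequential break-down sketch (illustrative, not the iBreakDown API).

    Starting from `baseline`, set features to their values in `x` one at a
    time (in `order`) and record the change in the prediction. The
    resulting contributions sum exactly to f(x) - f(baseline), but for
    models with interactions they depend on the chosen order — one reason
    additive explanations deserve caution.
    """
    order = range(len(x)) if order is None else order
    current = np.array(baseline, dtype=float)
    prev = model(current)
    contrib = np.zeros(len(x))
    for i in order:
        current[i] = x[i]
        new = model(current)
        contrib[i] = new - prev  # marginal change from revealing feature i
        prev = new
    return contrib
```

For a purely additive model the order does not matter; with interaction terms, different orders produce different attributions.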

TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

copenlu/tx-ray 2 Dec 2019

While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training.

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis

sebastian-lapuschkin/explaining-deep-clinical-gait-classification 16 Dec 2019

Machine learning (ML) is increasingly used to support decision-making in the healthcare sector.

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI

ahmedmagdiosman/clevr-xai 16 Mar 2020

The rise of deep learning in today's applications has brought an increasing need to explain a model's decisions beyond its prediction performance, in order to foster trust and accountability.

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset

etjoa003/explainable_ai 7 Sep 2020

Heatmaps are appealing because they are intuitive and visual, but assessing their quality is not straightforward.
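One common way to quantify heatmap quality, mentioned in the RISE entry above, is a deletion-style test: remove pixels in decreasing order of claimed importance and watch how fast the model's score drops. The sketch below uses the mean score along the deletion curve as a simple area-under-curve proxy; names and defaults are illustrative assumptions.

```python
import numpy as np

def deletion_score(model, image, saliency, step=10, baseline=0.0):
    """Deletion-metric sketch (illustrative helper).

    Zero out pixels in decreasing saliency order, `step` pixels at a
    time, recording the model's score after each deletion. A faithful
    heatmap should make the score drop quickly, giving a lower value.
    Returns the mean score over the curve as a simple AUC proxy.
    """
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    img = image.copy().ravel()                  # work on a flat copy
    scores = [model(img.reshape(image.shape))]
    for start in range(0, img.size, step):
        img[order[start:start + step]] = baseline
        scores.append(model(img.reshape(image.shape)))
    return float(np.mean(scores))
```

The complementary insertion metric works the same way in reverse, adding pixels to a blank canvas; a faithful heatmap should then make the score rise quickly.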