3260 papers • 126 benchmarks • 313 datasets
XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why the AI arrived at a specific decision. XAI may be seen as an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. The aim of XAI is thus to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Explainable Artificial Intelligence (XAI).
No benchmarks available.
Use these libraries to find Explainable Artificial Intelligence (XAI) models and implementations.
The problem of attributing the prediction of a deep network to its input features, previously studied by several other works, is revisited, and two fundamental axioms that attribution methods ought to satisfy, Sensitivity and Implementation Invariance, are identified.
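The attribution method built on those axioms averages gradients along a path from a baseline to the input (integrated gradients). Below is a minimal sketch of that idea in PyTorch, not the paper's reference implementation; the `integrated_gradients` name, the model interface, the all-zeros baseline, and the step count are illustrative assumptions.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50, target=None):
    """Integrated-gradients-style attribution (a minimal sketch, not reference code).

    model    : callable mapping a (batch, d) tensor to (batch, num_classes) logits
    x        : (1, d) input to explain
    baseline : (1, d) reference input; defaults to all zeros (an assumption)
    target   : class index to attribute; defaults to the model's predicted class
    """
    if baseline is None:
        baseline = torch.zeros_like(x)
    if target is None:
        target = model(x).argmax(dim=-1).item()

    # Points along the straight-line path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)          # (steps, 1)
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)

    logits = model(path)                                           # (steps, num_classes)
    grads = torch.autograd.grad(logits[:, target].sum(), path)[0]  # (steps, d)

    # Average the gradients along the path and scale by the input difference
    # (a Riemann approximation of the path integral).
    return (x - baseline) * grads.mean(dim=0, keepdim=True)
```

Because the attribution depends only on gradients of the learned function, not on how the network is wired internally, it respects the Implementation Invariance axiom.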
GNNExplainer is proposed, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task.
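The core idea is to learn a soft mask over the edges of the input graph that preserves the model's prediction while staying sparse. The following is a rough sketch of that mask-optimization loop under simplifying assumptions: the `gnn_model` interface (dense adjacency in, class logits out), the regularization weights, and the epoch count are placeholders for illustration.

```python
import torch

def explain_prediction(gnn_model, x, adj, target_class,
                       epochs=200, lr=0.01, sparsity_weight=0.005):
    """Learn a soft edge mask that keeps the GNN's prediction (a sketch).

    gnn_model    : callable (x, adj) -> (num_classes,) logits; treated as a black box
    x            : (num_nodes, num_features) node feature matrix
    adj          : (num_nodes, num_nodes) dense adjacency matrix
    target_class : class index whose prediction we want to explain
    """
    mask_logits = torch.randn_like(adj, requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        edge_mask = torch.sigmoid(mask_logits)
        masked_adj = adj * edge_mask

        logits = gnn_model(x, masked_adj)
        log_probs = torch.log_softmax(logits, dim=-1)

        # Keep the original prediction likely ...
        prediction_loss = -log_probs[target_class]
        # ... while encouraging a sparse, near-binary mask
        # (the 0.1 entropy weight is an arbitrary illustrative choice).
        sparsity_loss = sparsity_weight * edge_mask.sum()
        entropy = -(edge_mask * torch.log(edge_mask + 1e-8)
                    + (1 - edge_mask) * torch.log(1 - edge_mask + 1e-8)).mean()

        (prediction_loss + sparsity_loss + 0.1 * entropy).backward()
        optimizer.step()

    # Edges with a high mask value form the explanation subgraph.
    return torch.sigmoid(mask_logits).detach()
```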
This paper proposes a novel end-to-end differentiable approach enabling the extraction of logic explanations from neural networks using the formalism of First-Order Logic, relying on an entropy-based criterion that automatically identifies the most relevant concepts.
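The key ingredient is a criterion that scores each input concept and penalizes the entropy of the resulting score distribution, so that only a few concepts survive and can be read off as compact logic formulas. The snippet below is a loose sketch of that idea in PyTorch, not the paper's exact layer; the `ConceptSelector` class, its shapes, and the thresholding scheme are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ConceptSelector(nn.Module):
    """Scores input concepts and exposes an entropy penalty (a sketch).

    Each concept gets a learnable relevance score; a softmax turns the scores
    into a distribution whose entropy is penalized during training, pushing the
    model to rely on only a handful of concepts.
    """
    def __init__(self, num_concepts, hidden_dim=16, num_classes=2):
        super().__init__()
        self.concept_scores = nn.Parameter(torch.zeros(num_concepts))
        self.classifier = nn.Sequential(
            nn.Linear(num_concepts, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, concepts):
        # Rescale each concept by its (softmaxed) relevance before classifying.
        relevance = torch.softmax(self.concept_scores, dim=0)
        return self.classifier(concepts * relevance)

    def entropy_penalty(self):
        relevance = torch.softmax(self.concept_scores, dim=0)
        return -(relevance * torch.log(relevance + 1e-8)).sum()

    def relevant_concepts(self, threshold=0.1):
        relevance = torch.softmax(self.concept_scores, dim=0)
        return (relevance > threshold).nonzero(as_tuple=True)[0]
```

During training one would add `entropy_penalty()` (times a small weight) to the classification loss; the concepts that remain above the threshold are the literals from which the logic explanations are assembled.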
Three software packages designed to facilitate the exploration of model reasoning using attribution approaches and beyond are presented, aiming to promote reproducibility in the field and empower scientists and practitioners to uncover the intricacies of complex model behavior.
This work proposes a multi-explanation graph attention network (MEGAN) that can produce node and edge attributional explanations along multiple channels, the number of which is independent of task specifications, and finds that the model produces sparse high-fidelity explanations consistent with human intuition about those tasks.
A novel open-source audio dataset is presented, consisting of 30,000 audio samples of English spoken digits, which is used for classification tasks on spoken digits and speakers' biological sex; a human user study demonstrates the superior interpretability of audible explanations over visual ones.
This paper examines the behavior of the most popular instance-level explanations under the presence of interactions, introduces a new method that detects interactions for instance-level explanations, and performs a large-scale benchmark to see how frequently additive explanations may be misleading.
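One simple way to test for a pairwise interaction at a single instance is to compare the joint effect of removing two features against the sum of their individual effects; a non-zero mixed difference means an additive, per-feature explanation cannot fully capture the pair. The sketch below illustrates this generic check rather than the paper's specific method; the `pairwise_interaction` helper, the black-box `model`, the baseline values, and the toy model are hypothetical.

```python
import numpy as np

def pairwise_interaction(model, x, i, j, baseline):
    """Mixed-difference test for an interaction between features i and j (a sketch).

    model    : callable mapping a (d,) array to a scalar prediction
    x        : (d,) instance being explained
    baseline : (d,) reference values the features are reset to
    """
    x_i = x.copy();  x_i[i] = baseline[i]             # remove feature i
    x_j = x.copy();  x_j[j] = baseline[j]             # remove feature j
    x_ij = x.copy(); x_ij[[i, j]] = baseline[[i, j]]  # remove both

    # If the model were additive in i and j, this mixed difference would be zero.
    return model(x) - model(x_i) - model(x_j) + model(x_ij)


# Hypothetical usage with a toy model that has a genuine interaction between
# the first two features.
toy_model = lambda z: z[0] * z[1] + z[2]
x = np.array([2.0, 3.0, 1.0])
baseline = np.zeros(3)
print(pairwise_interaction(toy_model, x, 0, 1, baseline))  # 6.0, so the features interact
```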