Evaluation of explanation fidelity with respect to the underlying model.
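A common way to make this concrete is to check how closely an explanation's local surrogate model reproduces the black-box predictions around the explained instance. The sketch below is a minimal illustration of one such fidelity measure, assuming placeholder callables `black_box`, `surrogate`, and `perturb`; it is not the metric of any particular benchmark listed here.

```python
import numpy as np

def local_fidelity(black_box, surrogate, x, perturb, n_samples=500):
    """R^2-style agreement between a local surrogate and the black box around x.

    black_box : callable mapping a batch of inputs to predicted scores
    surrogate : callable giving the explanation model's local predictions
    perturb   : callable drawing n perturbed neighbours of x
    """
    Z = perturb(x, n_samples)        # neighbourhood of the explained instance
    f = black_box(Z)                 # reference black-box predictions
    g = surrogate(Z)                 # explanation model's predictions
    ss_res = np.sum((f - g) ** 2)
    ss_tot = np.sum((f - f.mean()) ** 2) + 1e-12
    return 1.0 - ss_res / ss_tot     # 1.0 = perfect local agreement
```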
A sampling methodology based on observation-level feature importance is proposed to derive more meaningful perturbed samples; applied to the LIME explanation method, it demonstrates considerable improvements in fidelity and explainability.
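As a rough illustration of the idea (not the paper's algorithm), the sketch below biases a LIME-style tabular perturbation so that features with higher observation-level importance are more likely to keep their original values; the `feature_importance` input and the Gaussian noise model are assumptions.

```python
import numpy as np

def importance_weighted_perturbations(x, feature_importance, n_samples=1000, rng=None):
    """Hypothetical importance-guided perturbation for a LIME-style tabular explainer.

    Standard LIME perturbs features uniformly at random; here the probability of
    keeping a feature at its original value grows with its observation-level
    importance, so samples stay closer to the locally relevant structure of x.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = len(x)
    imp = np.abs(feature_importance)
    keep_prob = 0.5 + 0.5 * imp / (imp.max() + 1e-12)   # keep-probabilities in [0.5, 1.0]
    mask = rng.random((n_samples, d)) < keep_prob        # True = keep original value
    noise = x + rng.normal(scale=np.abs(x) * 0.1 + 0.1, size=(n_samples, d))
    return np.where(mask, x, noise), mask
```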
Defining a representative locality is an urgent challenge in perturbation-based explanation methods, as it influences the fidelity and soundness of explanations. We address this issue by proposing a robust and intuitive approach for EXPLaining black-box classifiers using Adaptive Neighborhood generation (EXPLAN). EXPLAN is a modular algorithm consisting of dense data generation, representative data selection, data balancing, and a rule-based interpretable model. It takes into account the adjacency information derived from the black-box decision function and the structure of the data to create a representative neighborhood for the instance being explained. As a local model-agnostic explanation method, EXPLAN generates explanations in the form of logical rules that are highly interpretable and well-suited for qualitative analysis of the model’s behavior. We discuss fidelity-interpretability trade-offs and demonstrate the performance of the proposed algorithm through a comprehensive comparison with the state-of-the-art explanation methods LIME, LORE, and Anchor. Experiments on real-world datasets show that our method achieves solid empirical results in terms of fidelity, precision, and stability of explanations.
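A minimal skeleton of such a modular pipeline is sketched below, assuming placeholder callables for the first three modules and a decision tree as the rule-based interpretable model; it mirrors the stages named in the abstract but is not EXPLAN's implementation.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def explain_instance(black_box, x, generate_dense, select_representative,
                     balance, max_depth=3):
    """Skeleton of a modular neighbourhood-based rule explainer (placeholder callables)."""
    Z = generate_dense(x)                      # 1. dense synthetic neighbourhood around x
    Z = select_representative(Z, black_box)    # 2. keep samples representative of the local decision function
    y = black_box(Z)
    Z, y = balance(Z, y)                       # 3. rebalance class labels in the neighbourhood
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(Z, y)   # 4. rule-based interpretable model
    return export_text(tree), tree.score(Z, y)  # logical rules + local fidelity on the neighbourhood
```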
This paper proposes a three-phase approach to developing an evaluation method, adapts an existing evaluation method designed primarily for image and text data so that it can evaluate models trained on tabular data, and evaluates two popular explainability methods using the resulting evaluation method.
Three novel evaluation schemes are proposed to measure the faithfulness of post-hoc attribution methods more reliably, to make comparisons between them fairer, and to make visual inspection more systematic.
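For context, a standard deletion-style faithfulness check (not one of the paper's three schemes) looks roughly like the sketch below; `model` is assumed to score batches of inputs for the explained class, and `baseline` is the value used to "remove" a feature.

```python
import numpy as np

def deletion_curve_score(model, x, attribution, baseline=0.0, steps=20):
    """Deletion-style faithfulness: remove features by decreasing attribution.

    If the attribution is faithful, the model's score for the original
    prediction should drop quickly, giving a low mean over the deletion curve.
    """
    order = np.argsort(-np.abs(attribution))      # most important features first
    x_cur = x.astype(float).copy()
    scores = [float(model(x_cur[None])[0])]
    per_step = max(1, len(order) // steps)
    for i in range(0, len(order), per_step):
        x_cur[order[i:i + per_step]] = baseline   # "delete" the next chunk of features
        scores.append(float(model(x_cur[None])[0]))
    return float(np.mean(scores))                 # mean of the curve ~ normalised AUC
```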
It is shown that even though the explanations generated by these techniques are linear and additive, they can fail to provide accurate explanations even when explaining linear additive models.
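For a purely linear model f(x) = w·x + b, the exact additive attribution of feature i relative to a background distribution is w_i(x_i − E[X_i]), so deviations from it can be measured directly. The helper below is a hypothetical sanity check in that spirit, not the evaluation used in the paper.

```python
import numpy as np

def linear_ground_truth_error(weights, X_background, x, attribution):
    """Total absolute deviation of an additive attribution from the exact one
    for a linear model f(x) = w . x + b, relative to a background dataset."""
    exact = weights * (x - X_background.mean(axis=0))   # exact additive attribution
    return float(np.abs(attribution - exact).sum())
```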
This work proposes Structure-Aware Shapley-based Multipiece Explanation (SAME), a method that addresses the challenge of structure-aware feature interactions in GNN explanation and has the potential to be as explainable as the theoretically optimal explanation given by the Shapley value, while running in polynomial time.
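To make the Shapley baseline concrete, the brute-force computation below enumerates all coalitions in exponential time; SAME's point is precisely to approach this theoretically optimal attribution in polynomial time by exploiting graph structure, so this is only a reference sketch with an assumed `value_fn` (mapping a coalition of players to a payoff).

```python
import itertools
import math

def exact_shapley(value_fn, n_players):
    """Exact Shapley values by enumerating all coalitions (exponential time).

    value_fn maps a frozenset of player indices to a real-valued payoff.
    """
    players = range(n_players)
    phi = [0.0] * n_players
    for i in players:
        others = [p for p in players if p != i]
        for r in range(n_players):
            for combo in itertools.combinations(others, r):
                S = frozenset(combo)
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(len(S)) * math.factorial(n_players - len(S) - 1)
                          / math.factorial(n_players))
                phi[i] += weight * (value_fn(S | {i}) - value_fn(S))
    return phi
```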