It is quantitatively shown that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision, supporting the thesis that multimodal explanation models offer significant benefits over unimodal approaches.
This article uses a neural network-based architecture for symbolic regression called the equation learner (EQL) network and integrates it with other deep learning architectures such that the whole system can be trained end-to-end through backpropagation.
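A minimal sketch of the core idea, not the authors' implementation: an EQL-style layer applies a linear transformation followed by a small bank of symbolic activation functions, so the stack stays differentiable and can be combined with other deep learning modules and trained end-to-end. The activation bank, layer sizes, and class name below are illustrative assumptions.

```python
# Illustrative EQL-style layer (assumes PyTorch); function choices are arbitrary.
import torch
import torch.nn as nn

class EQLLayer(nn.Module):
    def __init__(self, in_dim, units=4):
        super().__init__()
        # Inputs are split among: identity, sin, cos, and a pairwise product unit.
        self.linear = nn.Linear(in_dim, 5 * units)
        self.units = units

    def forward(self, x):
        z = self.linear(x)
        u = self.units
        ident = z[:, 0*u:1*u]
        sin = torch.sin(z[:, 1*u:2*u])
        cos = torch.cos(z[:, 2*u:3*u])
        prod = z[:, 3*u:4*u] * z[:, 4*u:5*u]   # multiplication unit
        return torch.cat([ident, sin, cos, prod], dim=1)

# Because the layer is differentiable, it can be stacked with ordinary modules
# and the whole system trained with backpropagation.
model = nn.Sequential(EQLLayer(3), nn.Linear(16, 1))
```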
A framework that jointly retrieves and spatiotemporally highlights actions in videos by enhancing current deep cross-modal retrieval methods; it generates various maps conditioned on different actions, whereas conventional visual reasoning methods go only as far as showing a single deterministic saliency map.
This work builds a hybrid neural-network and decision-tree model for segmentation that attains neural network segmentation accuracy and provides semi-automatically constructed visual decision rules such as "Is there a window?".
Defining a representative locality is an urgent challenge in perturbation-based explanation methods, which influences the fidelity and soundness of explanations. We address this issue by proposing a robust and intuitive approach for EXPLaining black-box classifiers using Adaptive Neighborhood generation (EXPLAN). EXPLAN is a module-based algorithm consisting of dense data generation, representative data selection, data balancing, and a rule-based interpretable model. It takes into account the adjacency information derived from the black-box decision function and the structure of the data for creating a representative neighborhood for the instance being explained. As a local model-agnostic explanation method, EXPLAN generates explanations in the form of logical rules that are highly interpretable and well-suited for qualitative analysis of the model’s behavior. We discuss fidelity-interpretability trade-offs and demonstrate the performance of the proposed algorithm by a comprehensive comparison with the state-of-the-art explanation methods LIME, LORE, and Anchor. The conducted experiments on real-world data sets show that our method achieves solid empirical results in terms of fidelity, precision, and stability of explanations.
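A hedged outline of the four-module pipeline named in the abstract (dense data generation, representative data selection, data balancing, rule-based interpretable model); the Gaussian perturbation, nearest-neighbour selection, and decision-tree stand-in below are assumptions for the sketch, not the authors' code.

```python
# Illustrative pipeline mirroring EXPLAN's four modules; all concrete choices
# (Gaussian perturbation, distance-based selection, a decision tree as the
# rule-based interpretable model) are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain_instance(black_box_predict, X, x, n_dense=5000, n_keep=1000, seed=0):
    rng = np.random.default_rng(seed)

    # 1) Dense data generation around the instance being explained.
    scale = X.std(axis=0) + 1e-8
    Z = x + rng.normal(0.0, scale, size=(n_dense, X.shape[1]))

    # 2) Representative data selection: keep samples closest to x.
    dist = np.linalg.norm((Z - x) / scale, axis=1)
    Z = Z[np.argsort(dist)[:n_keep]]
    y = black_box_predict(Z)

    # 3) Data balancing: equalise class counts by down-sampling.
    classes, counts = np.unique(y, return_counts=True)
    m = counts.min()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], m, replace=False)
                          for c in classes])
    Z, y = Z[idx], y[idx]

    # 4) Rule-based interpretable model fitted on the neighbourhood;
    #    the decision path for x can be read off as a logical rule.
    return DecisionTreeClassifier(max_depth=3).fit(Z, y)
```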
This work proposes audioLIME, a method based on Local Interpretable Model-agnostic Explanation (LIME), extended by a musical definition of locality, and demonstrates the general applicability of the method on a third-party music tagger.
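A sketch of the LIME-style fitting step with a component-wise notion of locality: the audio is assumed to be pre-separated into components (e.g. stems), perturbations toggle components on and off, and a sparse linear surrogate is fitted to the tagger's responses. The separation step, the `tagger_predict` callable, and the weighting scheme are placeholders, not the audioLIME API.

```python
# Sketch of a LIME-style explanation over separated audio components.
# `components` (source-separated stems) and `tagger_predict` are placeholders;
# the linear surrogate below is the standard LIME recipe.
import numpy as np
from sklearn.linear_model import Ridge

def audio_lime_weights(components, tagger_predict, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    k = len(components)

    # Binary masks decide which components are kept in each perturbation.
    masks = rng.integers(0, 2, size=(n_samples, k))
    preds = []
    for m in masks:
        mix = sum(c for c, keep in zip(components, m) if keep)
        if np.isscalar(mix):                 # all components dropped
            mix = np.zeros_like(components[0])
        preds.append(tagger_predict(mix))    # e.g. probability of one tag

    # Weight perturbations by similarity to the original (all components on).
    sim = masks.sum(axis=1) / k
    surrogate = Ridge(alpha=1.0).fit(masks, np.asarray(preds), sample_weight=sim)
    return surrogate.coef_                   # one importance weight per component
```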
Cascading Decision Trees (CDTs) apply representation learning on the decision path to allow richer expressivity; in both settings, whether CDTs are used as policy function approximators or as imitation learners to explain black-box policies, they achieve better performance with more succinct and explainable models than soft decision trees (SDTs).
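A highly simplified sketch of the idea that the decision nodes route on a learned intermediate representation rather than on raw inputs; the depth, dimensions, and single-split structure are illustrative assumptions, not the paper's model.

```python
# Toy soft decision node operating on a learned representation (illustrative).
import torch
import torch.nn as nn

class TinyCascadingTree(nn.Module):
    def __init__(self, in_dim, rep_dim=8, n_outputs=2):
        super().__init__()
        self.rep = nn.Linear(in_dim, rep_dim)      # learned representation
        self.split = nn.Linear(rep_dim, 1)         # soft decision node
        self.leaf_left = nn.Parameter(torch.zeros(n_outputs))
        self.leaf_right = nn.Parameter(torch.zeros(n_outputs))

    def forward(self, x):
        h = torch.tanh(self.rep(x))
        p_right = torch.sigmoid(self.split(h))     # routing probability
        return (1 - p_right) * self.leaf_left + p_right * self.leaf_right
```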
A novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables and the time intervals that determine the classifier output on multivariate time series classification tasks.
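An illustrative sketch of that combination: a 1-D convolutional feature extractor followed by attention over time steps and over variables, returning per-variable and per-time-step weights alongside the class logits. Kernel sizes, dimensions, and the class name are assumptions, not the paper's architecture.

```python
# Illustrative convolution + attention module for multivariate time series;
# all sizes and the pooling choices are arbitrary assumptions.
import torch
import torch.nn as nn

class ConvTimeVarAttention(nn.Module):
    def __init__(self, n_vars, hidden=16, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(n_vars, hidden, kernel_size=5, padding=2)
        self.time_attn = nn.Linear(hidden, 1)       # scores over time steps
        self.var_attn = nn.Linear(n_vars, n_vars)   # scores over variables
        self.head = nn.Linear(hidden + n_vars, n_classes)

    def forward(self, x):                    # x: (batch, n_vars, time)
        h = torch.relu(self.conv(x))         # (batch, hidden, time)
        a_t = torch.softmax(self.time_attn(h.transpose(1, 2)), dim=1)
        ctx_t = (a_t * h.transpose(1, 2)).sum(dim=1)      # time-attended features
        a_v = torch.softmax(self.var_attn(x.mean(dim=2)), dim=1)
        ctx_v = a_v * x.mean(dim=2)                       # variable-attended features
        logits = self.head(torch.cat([ctx_t, ctx_v], dim=1))
        return logits, a_t, a_v              # attention maps expose importances
```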