Neural Additive Models (NAMs) are proposed, combining some of the expressivity of DNNs with the inherent intelligibility of generalized additive models; they are more accurate than widely used intelligible models such as logistic regression and shallow decision trees.
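The core idea is that each input feature is processed by its own small neural network and the prediction is the sum of the per-feature outputs, so every feature's learned shape function can be inspected like in a classical GAM. Below is a minimal sketch of that structure, assuming PyTorch; the published model adds details (e.g. specialized activation units, dropout, and feature-net ensembling) that are omitted here, and the class names are illustrative.

```python
# Minimal Neural Additive Model sketch (simplified; not the exact published architecture).
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Small MLP applied to a single scalar feature."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class NeuralAdditiveModel(nn.Module):
    """Prediction = bias + sum of per-feature shape functions."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            [FeatureNet(hidden) for _ in range(n_features)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        contributions = [
            net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)
        ]
        return torch.cat(contributions, dim=1).sum(dim=1) + self.bias

# Usage: logits = NeuralAdditiveModel(n_features=10)(torch.randn(8, 10))
```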
This work proposes SALSA, which bridges the gap between purely additive models and full nonparametric regression by allowing interactions between variables while controlling model capacity through a limit on the order of interactions, and shows that the method is competitive against other alternatives.
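The following is a hedged sketch of that idea rather than SALSA itself: kernel ridge regression with an additive kernel built from one-dimensional base kernels, keeping only first-order terms and pairwise (second-order) products. The bandwidth, regularization value, and the efficient computation of higher-order terms used in the actual method are omitted, and the function names are illustrative.

```python
# Kernel ridge regression with an order-limited additive kernel (simplified sketch).
import numpy as np

def rbf_1d(a, b, bandwidth=1.0):
    """RBF kernel matrix between two 1-D feature columns."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def additive_kernel(X, Z, bandwidth=1.0):
    """Sum of first-order kernels plus all pairwise-product (second-order) kernels."""
    p = X.shape[1]
    base = [rbf_1d(X[:, j], Z[:, j], bandwidth) for j in range(p)]
    K = sum(base)                                # order-1 terms
    for i in range(p):
        for j in range(i + 1, p):
            K = K + base[i] * base[j]            # order-2 interaction terms
    return K

def fit_krr(X, y, lam=1e-2):
    """Solve (K + lam*I) alpha = y on the training data."""
    K = additive_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(X_train, alpha, X_test):
    return additive_kernel(X_test, X_train) @ alpha
```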
Two instantiations of Aug-imodels in natural-language processing are explored: Aug-Linear, which augments a linear model with decoupled embeddings from an LLM, and Aug-Tree, which augments a decision tree with LLM feature expansions; both outperform their non-augmented, interpretable counterparts.
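To make the Aug-Linear idea concrete, the sketch below sums fixed ngram embeddings per document and fits a linear head on the result, so each ngram's contribution stays an interpretable scalar. This is a schematic sketch only: `embed_ngram` is a hypothetical stand-in for a frozen LLM encoder, and the tokenization and model fitting are deliberately simplified.

```python
# Schematic Aug-Linear-style featurization (embed_ngram is a hypothetical placeholder).
import numpy as np

def embed_ngram(ngram: str, dim: int = 16) -> np.ndarray:
    # Hypothetical placeholder: a real system would call a frozen LLM encoder.
    rng = np.random.default_rng(abs(hash(ngram)) % (2 ** 32))
    return rng.standard_normal(dim)

def featurize(text: str, n: int = 2, dim: int = 16) -> np.ndarray:
    """Sum of embeddings over the document's ngrams."""
    toks = text.split()
    if not toks:
        return np.zeros(dim)
    ngrams = [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)] or toks
    return np.sum([embed_ngram(g, dim) for g in ngrams], axis=0)

# A linear classifier is then fit on these summed features; because the features
# are a sum over ngrams, the score w @ embed_ngram(g) gives each ngram's contribution.
```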
This paper examines the behavior of the most popular instance-level explanations under the presence of interactions, introduces a new method that detects interactions for instance-level explanations, and performs a large-scale benchmark to see how frequently additive explanations may be misleading.
InterpretML is an open-source Python package that makes machine learning interpretability algorithms accessible to practitioners and researchers by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform.
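A brief usage sketch of that unified API is shown below, assuming the `interpret` package and scikit-learn are installed; the dataset and split are only illustrative, and `show` opens the package's interactive visualization.

```python
# Fitting a glassbox model and viewing global/local explanations with InterpretML.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()   # GAM-style glassbox model
ebm.fit(X_train, y_train)

# Global and local explanations share the same API and visualization platform.
show(ebm.explain_global())
show(ebm.explain_local(X_test[:5], y_test[:5]))
```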
The experiments show that RFA is competitive with classical aggregation when the level of corruption is low while demonstrating greater robustness under high corruption, and the work establishes the convergence of the robust federated learning algorithm for the stochastic learning of additive models with least squares.
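The robustness comes from replacing the weighted mean of client updates with an (approximate) geometric median, which RFA computes with a smoothed Weiszfeld iteration. Below is a simplified sketch of that aggregation step under those assumptions; the federated orchestration, client weighting by data size, and the paper's convergence analysis are omitted, and the function name is illustrative.

```python
# Robust aggregation of client updates via an approximate geometric median.
import numpy as np

def geometric_median(updates, weights=None, iters=10, eps=1e-8):
    """Aggregate client updates (rows) with a smoothed Weiszfeld iteration."""
    updates = np.asarray(updates, dtype=float)
    n = len(updates)
    weights = np.ones(n) / n if weights is None else np.asarray(weights, dtype=float)
    v = np.average(updates, axis=0, weights=weights)   # start from the weighted mean
    for _ in range(iters):
        dist = np.linalg.norm(updates - v, axis=1)
        w = weights / np.maximum(dist, eps)            # smoothed reweighting
        v = (w[:, None] * updates).sum(axis=0) / w.sum()
    return v

# Usage: server_update = geometric_median([u1, u2, u3]), each u_i a flat parameter vector.
```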
It is found that GAMs with high feature sparsity can miss patterns in the data and be unfair to rare subpopulations, and tree-based GAMs represent the best balance of sparsity, fidelity and accuracy and thus appear to be the most trustworthy GAM models.
Numerical experiments show that the proposed explainable GAMI-Net enjoys superior interpretability while maintaining competitive prediction accuracy in comparison to the explainable boosting machine and other benchmark machine learning models.
This work proposes a neural GAM (NODE-GAM) and neural GA$^2$M (NODE-GA$^2$M) that scale well and perform better than other GAMs on large datasets, while remaining interpretable compared to other ensemble and deep learning models.