A method is proposed to estimate causal effects from observational text data while adjusting for confounding features of the text, such as the subject or writing quality; experiments on semi-synthetic datasets show that the resulting causally sufficient embeddings improve causal estimation over related embedding methods.
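The basic adjustment step can be pictured with a toy sketch (not the paper's causally sufficient embeddings): treat a fixed embedding Z as a stand-in for the confounding text features, fit an outcome model on treatment and embedding, and average the predicted treated-vs-untreated contrast. The simulated data, embedding dimension, and linear outcome model below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy regression-adjustment sketch: Z stands in for a text embedding that
# captures the confounders (e.g., subject, writing quality).
rng = np.random.default_rng(0)
n, d = 2000, 16
Z = rng.normal(size=(n, d))                        # simulated "text embeddings"
propensity = 1 / (1 + np.exp(-Z[:, 0]))            # treatment depends on the text
T = rng.binomial(1, propensity)
Y = 2.0 * T + 1.5 * Z[:, 0] + rng.normal(size=n)   # true effect = 2.0

# Fit an outcome model on (T, Z), then average the predicted contrast.
outcome = LinearRegression().fit(np.column_stack([T, Z]), Y)
ate = np.mean(outcome.predict(np.column_stack([np.ones(n), Z]))
              - outcome.predict(np.column_stack([np.zeros(n), Z])))
print(f"regression-adjusted ATE ~ {ate:.2f}")      # close to 2.0; the naive
                                                   # treated-vs-control mean
                                                   # difference is biased upward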
Leveraging the neural toolbox, an algorithm is developed that provides a necessary and sufficient criterion for whether a causal effect can be learned from data, and that estimates the effect whenever identifiability holds (causal estimation).
The method reveals which state and input variables of the system causally influence one another, and shows that this knowledge of the causal structure reduces the complexity of model learning and improves generalization.
Recent work has focused on the potential and pitfalls of causal identification in observational studies with multiple simultaneous treatments. Building on previous work, we show that even if the conditional distribution of unmeasured confounders given treatments were known exactly, the causal effects would not in general be identifiable, although they may be partially identified. Given these results, we propose a sensitivity analysis method for characterizing the effects of potential unmeasured confounding, tailored to the multiple treatment setting, that can be used to characterize a range of causal effects that are compatible with the observed data. Our method is based on a copula factorization of the joint distribution of outcomes, treatments, and confounders, and can be layered on top of arbitrary observed data models. We propose a practical implementation of this approach making use of the Gaussian copula, and establish conditions under which causal effects can be bounded. We also describe approaches for reasoning about effects, including calibrating sensitivity parameters, quantifying robustness of effect estimates, and selecting models that are most consistent with prior hypotheses.
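As a much simpler stand-in for the paper's copula-based sensitivity analysis, the sketch below uses the classic linear omitted-variable-bias identity: sweeping assumed values of the confounder–treatment correlation rho and the confounder–outcome coefficient gamma traces out a range of causal effects compatible with the observed data. The simulated data, the single-treatment linear model, and the parameter grid are illustrative assumptions, not the paper's Gaussian-copula construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data with a hidden confounder U that the analyst never observes.
U = rng.normal(size=n)
T = 0.6 * U + rng.normal(size=n)
Y = 1.0 * T + 0.8 * U + rng.normal(size=n)         # true causal effect = 1.0

beta_naive = np.polyfit(T, Y, 1)[0]                # confounded OLS slope of Y on T
sd_T = T.std()

# Omitted-variable-bias identity for a standardized unmeasured confounder:
#   beta_naive = beta_true + gamma * Cov(T, U) / Var(T) = beta_true + gamma * rho / sd(T)
# Sweeping the assumed sensitivity parameters (rho, gamma) yields the set of
# bias-corrected effects compatible with the observed data.
for rho in (0.0, 0.2, 0.4):
    for gamma in (0.0, 0.5, 1.0):
        beta_adj = beta_naive - gamma * rho / sd_T
        print(f"rho={rho:.1f}, gamma={gamma:.1f} -> bias-corrected effect ~ {beta_adj:.2f}")
```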
Nearly Invariant Causal Estimation (NICE) is developed, which uses invariant risk minimization (IRM) to learn a representation of the covariates that, under some assumptions, strips out bad controls but preserves sufficient information to adjust for confounding.
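NICE itself is not reproduced here, but the IRM objective it builds on can be sketched with the standard IRMv1 penalty: the squared gradient of each environment's risk with respect to a frozen dummy classifier, added to the pooled risk. The network sizes, penalty weight, and random per-environment data below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def irm_penalty(logits, y):
    """IRMv1 penalty: squared gradient of the environment loss w.r.t. a dummy scale."""
    w = torch.tensor(1.0, requires_grad=True)
    loss = nn.functional.binary_cross_entropy_with_logits(logits * w, y)
    grad = torch.autograd.grad(loss, [w], create_graph=True)[0]
    return grad.pow(2)

# Representation plus linear head; the penalty pushes the representation toward
# one for which the same fixed classifier is optimal in every environment.
phi = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
lam = 10.0  # penalty weight (illustrative value)

def irm_step(environments):
    """One optimization step; environments is a list of (x, y) batches with float y in {0, 1}."""
    risk, penalty = 0.0, 0.0
    for x, y in environments:
        logits = phi(x).squeeze(-1)
        risk = risk + nn.functional.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    loss = risk + lam * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Random data standing in for two training environments.
envs = [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float()) for _ in range(2)]
print(irm_step(envs))
```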
This work considers the bivariate case, the most elementary form of the causal discovery problem, in which one must decide whether X causes Y or Y causes X given the joint distribution of the two variables X and Y, and finds that these methods can fail to capture the true causal direction at some noise levels.
A new method is introduced for the bivariate causal discovery problem that leverages the expressive power of flow-based models to learn the complex relationship between the two variables.
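Both bivariate results build on the additive-noise-model idea, which can be sketched as: fit a regression in each direction and prefer the direction whose residuals look more independent of the putative cause. The polynomial regression, Gaussian-kernel HSIC score, bandwidth, and toy data below are illustrative choices, not either paper's method (the second paper uses flow-based models instead of polynomials).

```python
import numpy as np

def hsic(a, b, sigma=1.0):
    """Biased HSIC statistic with Gaussian kernels (larger = more dependent)."""
    a, b = a.reshape(-1, 1), b.reshape(-1, 1)
    K = np.exp(-((a - a.T) ** 2) / (2 * sigma ** 2))
    L = np.exp(-((b - b.T) ** 2) / (2 * sigma ** 2))
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

def standardize(v):
    return (v - v.mean()) / v.std()

def anm_direction(x, y, degree=3):
    """Return 'X->Y' or 'Y->X' by comparing residual independence in each direction."""
    x, y = standardize(x), standardize(y)
    res_xy = y - np.polyval(np.polyfit(x, y, degree), x)   # model Y = f(X) + noise
    res_yx = x - np.polyval(np.polyfit(y, x, degree), y)   # model X = g(Y) + noise
    score_xy = hsic(x, standardize(res_xy))                # dependence of residual on X
    score_yx = hsic(y, standardize(res_yx))                # dependence of residual on Y
    return "X->Y" if score_xy < score_yx else "Y->X"

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 3 + 0.5 * rng.normal(size=500)   # ground truth: X causes Y
print(anm_direction(x, y))                # expected to print "X->Y"
```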
This paper designs a proxy-based hypothesis test for identifying causal relationships in the presence of unobserved variables, shows that the test achieves ideal power when large samples are available, and demonstrates its effectiveness on synthetic and real-world data.
A constraint-based, fully nonparametric algorithm is proposed that can identify the entire causal structure from subsampled time series without any parametric constraints, achieving full causal identification.
It is shown that causal variables can still be identified for many common setups, e.g., additive Gaussian noise models, if the agent's interactions with a causal variable can be described by an unknown binary variable.