3260 papers • 126 benchmarks • 313 datasets
Verifying facts given semi-structured data.
These leaderboards are used to track progress in Table-based Fact Verification.
Use these libraries to find Table-based Fact Verification models and implementations.
No subtasks available.
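The task defined above takes a semi-structured table plus a natural language statement and outputs an ENTAILED/REFUTED label. A minimal sketch of that input/output contract, using a trivial cell-lookup in place of a learned model (all names here are illustrative, not from any of the systems below):

```python
# Toy sketch of the table-based fact verification I/O contract.
# A real verifier is a learned model; a cell lookup only shows the data shapes.

def verify(table, claimed_cell):
    """Return 'ENTAILED' if the claimed (column, value) pair occurs in the table."""
    column, value = claimed_cell
    idx = table["header"].index(column)
    hit = any(row[idx] == value for row in table["rows"])
    return "ENTAILED" if hit else "REFUTED"

table = {
    "header": ["Player", "Country", "Titles"],
    "rows": [["Nadal", "Spain", "22"], ["Federer", "Switzerland", "20"]],
}

print(verify(table, ("Country", "Spain")))  # -> ENTAILED
print(verify(table, ("Titles", "30")))      # -> REFUTED
```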
TAPEX shows that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, obtained by automatically synthesizing executable SQL queries together with their execution outputs.
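The synthetic-corpus idea can be sketched with `sqlite3`: sample executable queries over a table and record each query's execution result as the pre-training target. The query templates below are illustrative, not TAPEX's actual generation grammar:

```python
import sqlite3

def synthesize_examples(rows):
    """Build (SQL query, execution output) pairs over an in-memory table.

    A TAPEX-style model is then pre-trained to map the flattened table plus
    the query to the recorded output, i.e., to mimic a SQL executor.
    """
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (player TEXT, titles INTEGER)")
    cur.executemany("INSERT INTO t VALUES (?, ?)", rows)
    templates = [  # hypothetical templates; real systems sample a grammar
        "SELECT COUNT(*) FROM t",
        "SELECT MAX(titles) FROM t",
        "SELECT player FROM t WHERE titles > 20",
    ]
    examples = []
    for sql in templates:
        cur.execute(sql)
        examples.append((sql, cur.fetchall()))
    conn.close()
    return examples

pairs = synthesize_examples([("Nadal", 22), ("Federer", 20)])
for sql, out in pairs:
    print(sql, "->", out)
```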
A large-scale dataset is constructed with 16k Wikipedia tables as evidence for 118k human-annotated natural language statements, each labeled as either ENTAILED or REFUTED, and two different models are designed: Table-BERT and the Latent Program Algorithm (LPA).
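Table-BERT feeds the table to a BERT-style classifier by linearizing it into a pseudo-natural-language string paired with the statement. A hedged sketch of such a linearization (the template is illustrative, not the paper's exact one):

```python
def linearize(header, rows):
    """Flatten a table into one string a BERT-style encoder can consume."""
    sentences = []
    for i, row in enumerate(rows, start=1):
        cells = ", ".join(f"the {h} is {v}" for h, v in zip(header, row))
        sentences.append(f"In row {i}, {cells}.")
    return " ".join(sentences)

header = ["Player", "Titles"]
rows = [["Nadal", "22"], ["Federer", "20"]]
print(linearize(header, rows))
# -> In row 1, the Player is Nadal, the Titles is 22. In row 2, the Player is Federer, the Titles is 20.
```

The linearized string and the statement would then form the usual sentence-pair input for entailment classification.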
This work adapts TAPAS (Herzig et al., 2020), a table-based BERT model, to recognize entailment, and creates a balanced dataset of millions of automatically generated training examples that are learned in an intermediate step prior to fine-tuning.
This paper describes the approach for Task 9 of SemEval 2021, Statement Verification and Evidence Finding with Tables, in which the TAPAS model is extended to handle the 'unknown' class of statements by fine-tuning it on an augmented version of the task data.
Inspired by counterfactual causality, this system identifies token-level salience in the statement with probing-based salience estimation and applies salience-aware data augmentation to generate a more diverse set of training instances by replacing non-salient terms.
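The augmentation step can be sketched as follows: given per-token salience scores (estimated with a probing model in the paper; supplied by hand here), only low-salience tokens are swapped for alternatives, so training statements diversify without disturbing the tokens that decide the label. All names and scores below are illustrative:

```python
import random

def augment(tokens, salience, substitutes, threshold=0.5, rng=None):
    """Replace tokens whose salience falls below the threshold."""
    rng = rng or random.Random(0)
    out = []
    for tok, score in zip(tokens, salience):
        if score < threshold and tok in substitutes:
            out.append(rng.choice(substitutes[tok]))  # non-salient: safe to swap
        else:
            out.append(tok)                           # salient: keep verbatim
    return out

tokens   = ["the", "player", "won", "22", "titles"]
salience = [0.1,   0.2,      0.4,   0.9,  0.8]  # "22"/"titles" decide the label
subs = {"the": ["this"], "player": ["athlete"], "won": ["captured"]}
print(" ".join(augment(tokens, salience, subs)))
# -> this athlete captured 22 titles
```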
This work frames the table-based fact verification task as an evidence retrieval and reasoning framework, proposing the Logic-level Evidence Retrieval and Graph-based Verification network (LERGV), and shows the effectiveness of the proposed approach on the large-scale benchmark TABFACT.
This paper proposes a program-guided approach to constructing a pseudo dataset for decomposition model training that achieves new state-of-the-art performance of 82.7% accuracy on the TabFact benchmark.
The UnifiedSKG framework is proposed, which unifies 21 SKG tasks into a text-to-text format, aiming to promote systematic SKG research, instead of being exclusive to a single task, domain, or dataset.
A mixture-of-experts neural network is developed to recognize and execute different types of reasoning, together with a self-adaptive method that teaches the management module to combine the results of different experts more efficiently without external knowledge.
Binder is a training-free neural-symbolic framework that maps the task input to a program, which allows binding a unified API of language model (LM) functionalities to a programming language (e.g., SQL, Python) to extend its grammar coverage and thus tackle more diverse questions.
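The binding idea can be sketched in `sqlite3` by registering a (mocked) LM call as a user-defined SQL function, so the generated program defers sub-questions plain SQL cannot express to the model. The question string, table, and mock answers are all hypothetical:

```python
import sqlite3

def mock_lm(question, value):
    """Stand-in for a language-model call; a real system would query an LM."""
    if question == "is this country in Europe?":
        return int(value in {"Spain", "Switzerland"})
    return 0

conn = sqlite3.connect(":memory:")
conn.create_function("LM", 2, mock_lm)  # extend SQL's grammar with an LM API
cur = conn.cursor()
cur.execute("CREATE TABLE t (player TEXT, country TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [("Nadal", "Spain"), ("Alcaraz", "Spain"), ("Kyrgios", "Australia")])

# The "program" binds the LM call inside ordinary SQL.
cur.execute("SELECT player FROM t WHERE LM('is this country in Europe?', country) = 1")
rows = cur.fetchall()
print(rows)
```

Here SQL supplies the symbolic scaffolding (scans, filters, aggregation) while the LM answers the open-ended per-cell question, which is the division of labor the Binder summary describes.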