3260 papers • 126 benchmarks • 313 datasets
Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it. Source: Making Neural Machine Reading Comprehension Faster
(Image credit: Papersgraph)
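To make the task concrete, here is a minimal extractive-QA sketch using the Hugging Face transformers question-answering pipeline; the checkpoint name and example passage are illustrative choices, not drawn from any paper listed below.

```python
# Minimal sketch of extractive machine reading comprehension with the
# Hugging Face `transformers` question-answering pipeline. The checkpoint
# is one common SQuAD-fine-tuned model, chosen only for illustration.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

passage = (
    "Machine reading comprehension systems read a text passage and answer "
    "questions about it. SQuAD is a widely used benchmark for this task."
)

# The pipeline returns the answer span, its character offsets, and a score.
result = qa(question="What is SQuAD used for?", context=passage)
print(result["answer"], result["score"])
```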
These leaderboards are used to track progress in Machine Reading Comprehension.
Use these libraries to find Machine Reading Comprehension models and implementations.
This new dataset aims to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering, and is the most comprehensive real-world dataset of its kind in both quantity and quality.
This paper proposes to formulate NER as a machine reading comprehension (MRC) task, which naturally tackles the entity-overlapping issue in nested NER: extracting two overlapping entities with different categories requires answering two independent questions, as sketched below.
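A hedged sketch of this reformulation, assuming illustrative query templates and a placeholder span extractor rather than the paper's actual prompts or model:

```python
# Illustrative NER-as-MRC reformulation: each entity category becomes a
# natural-language question, so two overlapping entities of different
# categories are extracted by answering two independent questions.
# QUERIES and extract_spans are hypothetical stand-ins, not the paper's
# exact prompts or architecture.
QUERIES = {
    "PER": "Which person entities are mentioned in the text?",
    "ORG": "Which organization entities are mentioned in the text?",
}

def extract_spans(question: str, passage: str) -> list[str]:
    # Placeholder: a real system would run an extractive MRC model here
    # and return the predicted answer spans.
    return []

def nested_ner(passage: str) -> dict[str, list[str]]:
    # One MRC pass per category; spans may overlap across categories
    # because each question is answered independently.
    return {label: extract_spans(q, passage) for label, q in QUERIES.items()}
```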
This work proposes a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension, achieving results competitive with the state of the art on the Stanford Question Answering Dataset, Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension (MS MARCO) dataset.
An innovative contextualized attention-based deep neural network, SDNet, fuses context into traditional MRC models, leveraging both inter-attention and self-attention to comprehend conversation context and extract relevant information from the passage.
A novel sample re-weighting scheme assigns sample-specific weights to the loss of a joint Machine Reading Comprehension (MRC) model and can be applied to a wide range of MRC tasks in different domains.
An extension of the Stochastic Answer Network (SAN), one of the state-of-the-art machine reading comprehension models, is presented that can judge whether a question is unanswerable.
This paper proposes to use dice loss in place of the standard cross-entropy objective for data-imbalanced NLP tasks; based on the Sørensen–Dice coefficient or Tversky index, it attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue.
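As a rough illustration of why Dice behaves this way, here is a minimal soft Dice loss in PyTorch; the smoothing constant and exact form are common defaults, not necessarily the paper's self-adjusting variant.

```python
# Minimal soft Dice loss sketch in PyTorch, following the Sørensen–Dice
# coefficient: Dice = 2|X ∩ Y| / (|X| + |Y|), and the loss is 1 - Dice.
import torch

def soft_dice_loss(probs: torch.Tensor, targets: torch.Tensor,
                   eps: float = 1.0) -> torch.Tensor:
    """probs: predicted probabilities in [0, 1]; targets: binary labels."""
    probs = probs.flatten()
    targets = targets.flatten().float()
    intersection = (probs * targets).sum()
    # The denominator counts predicted and true positives symmetrically,
    # so false positives and false negatives are penalized alike, which is
    # why Dice is more robust to class imbalance than cross-entropy.
    dice = (2 * intersection + eps) / (probs.sum() + targets.sum() + eps)
    return 1 - dice
```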
This work introduces Korean Language Understanding Evaluation (KLUE), a collection of 8 Korean natural language understanding (NLU) tasks, including Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking, and provides suitable evaluation metrics and fine-tuning recipes for pretrained language models for each task.
The Reinforced Mnemonic Reader for machine reading comprehension enhances previous attentive readers with a reattention mechanism that refines current attentions by directly accessing past attentions, which are temporally memorized in a multi-round alignment architecture.
This paper introduces DuReader, a new large-scale, open-domain Chinese machine reading comprehension (MRC) dataset, designed to address real-world MRC, and organizes a shared competition to encourage the exploration of more models.