3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Answer Generation.
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
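A minimal sketch of the text-to-text interface this study explores, assuming the Hugging Face transformers library and the public t5-small checkpoint; the prompt prefixes are illustrative and only show how one seq2seq model can serve question answering and summarization alike.

```python
# Minimal text-to-text sketch: the same seq2seq interface serves QA and
# summarization by changing only the input prefix.
# Assumes the Hugging Face `transformers` package and the public `t5-small`
# checkpoint; the prompts are illustrative, not tuned for best quality.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

prompts = [
    "question: What is the capital of France? context: Paris is the capital of France.",
    "summarize: The quick brown fox jumped over the lazy dog near the river bank.",
]
for text in prompts:
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```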
A system for the MIA Shared Task on Cross-lingual Open-Retrieval Question Answering (COQA) is introduced, showing that language and domain specialization as well as data augmentation help, especially for low-resource languages.
The VOGUE framework generates a verbalized answer with a hybrid approach trained in a multi-task learning paradigm, and it outperforms all current baselines in both BLEU and METEOR.
This work casts neural QA as a sequence labeling problem and proposes an end-to-end sequence labeling model, which significantly outperforms the baselines on WebQA.
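A minimal PyTorch sketch of the sequence-labeling view of QA, where each passage token is tagged as inside or outside the answer; the BiLSTM encoder, layer sizes, and binary tag set are illustrative assumptions, not the paper's exact model.

```python
# Sketch of QA-as-sequence-labeling: each passage token is tagged as part of
# the answer (1) or not (0), so the answer is read off the predicted labels.
# Pure PyTorch; the embedding/BiLSTM sizes and binary tag set are assumptions.
import torch
import torch.nn as nn

class SequenceLabelingQA(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embed(token_ids))
        return self.classifier(hidden)            # (batch, seq_len, num_tags)

model = SequenceLabelingQA()
tokens = torch.randint(0, 10000, (1, 12))         # toy passage of 12 token ids
gold = torch.zeros(1, 12, dtype=torch.long)
gold[0, 4:7] = 1                                   # tokens 4-6 form the answer span
logits = model(tokens)
loss = nn.functional.cross_entropy(logits.view(-1, 2), gold.view(-1))
loss.backward()
print(loss.item(), logits.argmax(-1))
```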
Answer-Clue-Style-aware Question Generation (ACS-QG) aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpora at scale by imitating the way a human asks questions; it dramatically outperforms state-of-the-art neural question generation models in generation quality.
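A small sketch of the answer-clue-style conditioning idea: the same sentence, paired with different (answer, clue, style) triples, yields different encoder inputs for a seq2seq question generator, which is how diversity is obtained. The tag format and helper function below are illustrative assumptions, not the paper's exact format.

```python
# Sketch of ACS-style conditioning: a question generator is fed the passage
# together with an explicit answer span, a clue phrase, and a question style,
# so different (answer, clue, style) triples yield diverse questions from the
# same sentence. The tag format is an illustrative assumption.
def build_acs_input(sentence, answer, clue, style):
    return (f"style: {style} answer: {answer} clue: {clue} "
            f"context: {sentence}")

sentence = "Marie Curie won the Nobel Prize in Physics in 1903."
examples = [
    ("Marie Curie", "won the Nobel Prize", "who"),
    ("1903", "Nobel Prize in Physics", "when"),
]
for answer, clue, style in examples:
    print(build_acs_input(sentence, answer, clue, style))
    # each line would be the encoder input to a fine-tuned seq2seq generator
```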
LRTA (Look, Read, Think, Answer) is proposed: a transparent neural-symbolic reasoning framework for visual question answering that solves the problem step by step, as a human would, and provides a human-readable justification at each step.
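A toy sketch of step-by-step reasoning with a human-readable trace: a question program is executed over a symbolic scene representation and each intermediate result is printed as the justification. The scene, program, and operations are invented for illustration and are not LRTA's actual modules.

```python
# Sketch of step-by-step neural-symbolic VQA: a question is parsed into a
# small program that is executed over a scene representation, and the
# intermediate result of each step is printed as a human-readable trace.
# The scene, program, and operations here are toy assumptions.
scene = [
    {"id": 0, "name": "cube",   "color": "red",  "left_of": [1]},
    {"id": 1, "name": "sphere", "color": "blue", "left_of": []},
]

def filter_color(objs, color):
    return [o for o in objs if o["color"] == color]

def relate_left_of(objs, scene):
    ids = {i for o in objs for i in o["left_of"]}
    return [o for o in scene if o["id"] in ids]

def query_name(objs):
    return objs[0]["name"] if objs else "none"

# Program for: "What is the object to the right of the red thing?"
program = [("filter_color", "red"), ("relate_left_of", None), ("query_name", None)]
state = scene
for op, arg in program:
    if op == "filter_color":
        state = filter_color(state, arg)
    elif op == "relate_left_of":
        state = relate_left_of(state, scene)
    elif op == "query_name":
        state = query_name(state)
    print(f"{op}({arg}) -> {state}")   # justification at each step
```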
An end-to-end differentiable training method is proposed for retrieval-augmented open-domain question answering systems that combine information from multiple retrieved documents when generating answers, demonstrating the feasibility of learning to retrieve in order to improve answer generation without explicit supervision of retrieval decisions.
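A minimal PyTorch sketch of the end-to-end differentiable idea: the answer likelihood is marginalized over retrieved documents, so the retriever's scores receive gradients from the answer loss alone. The encoders, scorers, and dimensions are toy assumptions rather than the paper's architecture.

```python
# Sketch of end-to-end differentiable retrieval-augmented answer generation:
# the answer likelihood is marginalized over retrieved documents,
#   p(y | x) = sum_k p(d_k | x) * p(y | x, d_k),
# so the retriever is trained by the answer loss, with no explicit supervision
# of which document to retrieve. Dot-product scorers and sizes are toy choices.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, num_docs, vocab = 16, 4, 100

question_vec = torch.randn(dim, requires_grad=True)         # question encoder output
doc_vecs = torch.randn(num_docs, dim, requires_grad=True)   # document encoder outputs
reader = torch.nn.Linear(2 * dim, vocab)                    # toy per-document answer scorer
gold_answer = torch.tensor(7)                                # index of the gold answer token

# p(d_k | x): softmax over retrieval scores (dot products)
retrieval_logp = F.log_softmax(doc_vecs @ question_vec, dim=0)

# p(y | x, d_k): reader conditioned on the question plus each document
joint = torch.cat([question_vec.expand(num_docs, dim), doc_vecs], dim=1)
answer_logp = F.log_softmax(reader(joint), dim=1)[:, gold_answer]

# Marginalize over documents, then backprop through retrieval and reading jointly.
loss = -torch.logsumexp(retrieval_logp + answer_logp, dim=0)
loss.backward()
print(loss.item(), doc_vecs.grad.abs().sum().item())  # retrieval receives gradient
```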
A new QA evaluation benchmark is presented with 1,384 questions over news articles that require cross-media grounding of objects in images onto text, and a novel multimedia data augmentation framework, based on cross-media knowledge extraction and synthetic question-answer generation, is introduced to automatically augment data that can provide weak supervision for this task.