These leaderboards are used to track progress in Query-focused Summarization. No benchmarks are currently available.
Use these libraries to find Query-focused Summarization models and implementations.
No datasets available.
No subtasks available.
This work introduces a new query-focused table summarization task, in which text generation models must perform human-like reasoning and analysis over a given table to generate a tailored summary, and proposes a new approach, named ReFactor, that retrieves and reasons over query-relevant information in tabular data to generate natural language facts.
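As a rough illustration of the retrieval step described above (not the authors' ReFactor implementation), the sketch below scores table cells by lexical overlap with the query and renders the top-scoring ones as simple natural-language facts; the table, query, and fact template are placeholders.

```python
# Minimal sketch of a ReFactor-style retrieval step (not the authors' code):
# score table cells by lexical overlap with the query and render the
# top-scoring ones as simple textual facts.

def tokenize(text: str) -> set[str]:
    return {tok.lower().strip(".,?") for tok in text.split()}

def retrieve_facts(query: str, header: list[str], rows: list[list[str]], k: int = 3) -> list[str]:
    q_tokens = tokenize(query)
    scored = []
    for row in rows:
        for col, cell in zip(header, row):
            # Lexical overlap between the query and the (column, cell) pair.
            overlap = len(q_tokens & (tokenize(col) | tokenize(cell)))
            # Render the cell with a placeholder fact template.
            fact = f"The {col} of {row[0]} is {cell}."
            scored.append((overlap, fact))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [fact for score, fact in scored[:k] if score > 0]

if __name__ == "__main__":
    header = ["Country", "Capital", "Population (millions)"]
    rows = [["France", "Paris", "67"], ["Japan", "Tokyo", "125"]]
    print(retrieve_facts("What is the population of Japan?", header, rows))
```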
This work introduces MaRGE, a Masked ROUGE Regression framework for evidence estimation and ranking that relies on a unified representation of summaries and queries, so that summaries in generic summarization data can be converted into proxy queries for learning a query model.
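The following is a minimal, illustrative reading of that idea rather than the authors' code: a generic reference summary is turned into a proxy query by masking its content words, and each source sentence receives a relevance target based on its overlap with the summary (a crude stand-in for ROUGE). A real system would train a regression model such as BERT on these targets.

```python
# Illustrative sketch, not the MaRGE implementation: build a proxy query by
# masking content words of a generic summary, and compute a relevance target
# for each sentence from its unigram recall against the summary.

import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "are", "was", "for"}

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def make_proxy_query(summary: str) -> str:
    # Mask content words so the summary looks like an underspecified query.
    return " ".join(t if t in STOPWORDS else "[MASK]" for t in tokens(summary))

def rouge1_recall(sentence: str, summary: str) -> float:
    # Rough unigram-recall approximation of ROUGE-1 used as a regression target.
    ref = tokens(summary)
    hyp = set(tokens(sentence))
    return sum(1 for t in ref if t in hyp) / max(len(ref), 1)

if __name__ == "__main__":
    summary = "The storm caused flooding in the coastal towns"
    sentences = [
        "Heavy rain led to flooding across several coastal towns.",
        "The mayor will hold a press conference on Tuesday.",
    ]
    print("proxy query:", make_proxy_query(summary))
    for s in sentences:
        print(round(rouge1_recall(s, summary), 2), s)
```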
This work proposes QFS-BART, a model that incorporates the explicit answer relevance of the source documents given the query, estimated with a question answering model, to generate coherent and answer-related summaries, and achieves new state-of-the-art performance.
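The paper injects QA-derived answer-relevance scores directly into the summarization model; the simplified pipeline below only approximates that idea by using a public question-answering model to rank sentences by answer relevance and then summarizing the top-ranked ones with BART. The checkpoint names are ordinary public models chosen for illustration, not the authors' checkpoints.

```python
# Hedged sketch of the QFS-BART idea, not the authors' implementation:
# rank sentences by a QA model's answer-relevance score, then feed the
# most relevant ones to a BART summarizer.

from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def query_focused_summary(query: str, sentences: list[str], top_k: int = 5) -> str:
    # Score each sentence by the QA model's confidence that it answers the query.
    scored = [(qa(question=query, context=s)["score"], s) for s in sentences]
    scored.sort(reverse=True)
    relevant = " ".join(s for _, s in scored[:top_k])
    return summarizer(relevant, max_length=60, min_length=10)[0]["summary_text"]
```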
This work proposes CLIP-It, a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query or an automatically generated dense video caption (for generic video summarization).
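CLIP-It itself is a language-guided multimodal transformer; the baseline below is only in the same spirit, scoring sampled video frames by CLIP image-text similarity to the user query and keeping the highest-scoring ones. How frames are sampled from the video is assumed to happen elsewhere.

```python
# Not the CLIP-It architecture; a minimal query-conditioned baseline that
# ranks video frames by CLIP image-text similarity to the user query.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_frames(query: str, frames: list[Image.Image], keep: int = 10) -> list[int]:
    """Return indices of the `keep` frames most similar to the query."""
    inputs = processor(text=[query], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image.squeeze(-1)  # one score per frame
    return sims.topk(min(keep, len(frames))).indices.tolist()
```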
This paper collects AspectNews, a dataset of realistic aspect-oriented summaries covering different subtopics of articles in news sub-domains, and compares several training schemes that differ in how strongly keywords are used and how oracle summaries are extracted.
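The exact oracle-extraction schemes compared in AspectNews are not spelled out here, so the sketch below shows one common recipe as an assumption: greedily add the sentence that most improves unigram coverage of the reference, optionally seeding the target vocabulary with aspect keywords.

```python
# Illustrative greedy oracle extraction (an assumption, not the paper's
# procedure): repeatedly add the sentence that most increases coverage of
# reference tokens, optionally augmented with aspect keywords.

import re

def toks(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def greedy_oracle(sentences, reference, keywords=(), max_sents=3):
    ref = set(toks(reference)) | set(keywords)
    chosen, covered = [], set()
    for _ in range(max_sents):
        gains = [(len((set(toks(s)) & ref) - covered), i)
                 for i, s in enumerate(sentences) if i not in chosen]
        if not gains:
            break
        gain, best = max(gains)
        if gain == 0:
            break
        chosen.append(best)
        covered |= set(toks(sentences[best])) & ref
    return [sentences[i] for i in chosen]
```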
This paper conducts a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models.
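A hedged sketch of the two classes, using an off-the-shelf BART checkpoint as the abstractive component: a two-stage pipeline that extracts query-relevant sentences before summarizing them, and an end-to-end-style baseline that simply conditions the generator on the query by prepending it to the document. Neither is the paper's exact setup; the checkpoint and input formatting are illustrative choices.

```python
# Sketch of the two classes of QFS methods compared in the paper (simplified):
# (1) two-stage extract-then-abstract, (2) end-to-end-style query conditioning.

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def two_stage(query: str, sentences: list[str], top_k: int = 5) -> str:
    # Stage 1: extract the sentences with the most query-term overlap.
    q = set(query.lower().split())
    ranked = sorted(sentences, key=lambda s: len(q & set(s.lower().split())), reverse=True)
    # Stage 2: abstractively summarize the extracted evidence.
    return summarizer(" ".join(ranked[:top_k]), max_length=60, min_length=10)[0]["summary_text"]

def end_to_end(query: str, document: str) -> str:
    # Condition the generator on the query by prepending it (naive formatting choice).
    return summarizer(f"query: {query} context: {document}", max_length=60, min_length=10)[0]["summary_text"]
```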
This work proposes the novel task of summarizing the reactions of different speakers, as expressed by their reported statements, to a given event, and creates a new multi-document summarization benchmark, SumREN, comprising 745 summaries of reported statements from various public figures obtained from 633 news articles discussing 132 events.
This work proposes leveraging a recently developed constrained generation method, NeuroLogic Decoding (NLD), as an alternative to current QFS regimes that rely on additional sub-architectures and training, and demonstrates the efficacy of this approach on two public QFS collections, achieving near parity with the state-of-the-art model at substantially reduced complexity.
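NeuroLogic Decoding is not part of mainstream libraries, so the sketch below substitutes Hugging Face's lexically constrained beam search (`force_words_ids`) to force query keywords into the generated summary; it conveys the flavour of constrained generation for QFS rather than the NLD algorithm itself.

```python
# Stand-in for NeuroLogic Decoding: lexically constrained beam search that
# requires query keywords to appear in the generated summary.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

def constrained_summary(document: str, keywords: list[str]) -> str:
    inputs = tokenizer(document, return_tensors="pt", truncation=True)
    # Each keyword becomes a hard lexical constraint on the beam search.
    force_words_ids = [tokenizer(kw, add_special_tokens=False).input_ids for kw in keywords]
    output = model.generate(
        **inputs,
        force_words_ids=force_words_ids,
        num_beams=8,
        max_length=80,
        no_repeat_ngram_size=3,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```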
This work proposes pre-training a generic multi-document model with a novel cross-document question answering objective, develops a multi-document QA formulation that directs the model to better recover cross-text informational relations, and introduces a natural augmentation that artificially increases the pre-training data.
The results show that QuOTeS offers a positive user experience and consistently produces query-focused summaries that are relevant, concise, and complete.