3260 papers • 126 benchmarks • 313 datasets
Extractive summarization is the task of creating a summary by selecting the most salient sentences or phrases directly from a source document, rather than generating new text (as in abstractive summarization).
These leaderboards are used to track progress in Extractive Summarization.
Use these libraries to find Extractive Summarization models and implementations.
BERTSUM, a simple variant of BERT for extractive summarization, is described; it achieves state-of-the-art results on the CNN/DailyMail dataset, outperforming the previous best-performing system by 1.65 on ROUGE-L.
This paper reports on the Lecture Summarization Service, a Python-based RESTful service that uses the BERT model for text embeddings and K-Means clustering to identify the sentences closest to the centroids for summary selection.
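A minimal sketch of the embed-cluster-select idea described above, assuming the sentence-transformers and scikit-learn libraries; the encoder name and the number of clusters are illustrative choices, not the service's exact configuration:

```python
# Sketch of centroid-based extractive summarization: embed sentences,
# cluster with K-Means, and keep the sentence nearest each centroid.
# Encoder name and k are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def centroid_summary(sentences, k=3):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
    embeddings = model.encode(sentences)             # shape: (n_sentences, dim)
    kmeans = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    summary_idx = []
    for centroid in kmeans.cluster_centers_:
        distances = np.linalg.norm(embeddings - centroid, axis=1)
        summary_idx.append(int(distances.argmin()))  # sentence closest to this centroid
    return [sentences[i] for i in sorted(set(summary_idx))]  # keep original order

print(centroid_summary([
    "The lecture introduces gradient descent.",
    "Gradient descent minimizes a loss function iteratively.",
    "The speaker then tells an unrelated anecdote.",
    "Learning rates control the step size of each update.",
], k=2))
```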
SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents, is presented and shown to achieve performance better than or comparable to the state of the art.
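The sequence-labelling view of extractive summarization can be sketched as follows; this is a simplified stand-in assuming precomputed sentence embeddings and a bidirectional GRU scorer, not SummaRuNNer's full architecture (which also models content, salience, novelty, and position explicitly):

```python
# Simplified sketch of RNN-based extractive summarization as sequence labelling:
# a bidirectional GRU reads sentence embeddings and emits a keep/skip probability
# for each sentence. Illustrative stand-in, not SummaRuNNer itself.
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    def __init__(self, emb_dim=384, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.scorer = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):           # sent_embs: (batch, n_sentences, emb_dim)
        states, _ = self.rnn(sent_embs)     # (batch, n_sentences, 2 * hidden)
        return torch.sigmoid(self.scorer(states)).squeeze(-1)  # keep probability

# Usage: score 10 sentences of a document and pick the top 3 (random embeddings
# stand in for real sentence representations).
model = SentenceExtractor()
doc = torch.randn(1, 10, 384)
scores = model(doc)
top3 = scores[0].topk(3).indices.sort().values
print(top3)
```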
It is shown that generating English Wikipedia articles can be approached as multi-document summarization of source documents, and a neural abstractive model is introduced which can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles.
Two adaptive learning models are presented: AREDSUM-SEQ, which jointly considers salience and novelty during sentence selection; and the two-step AREDSUM-CTX, which scores salience first and then learns to balance salience and redundancy, enabling the impact of each aspect to be measured.
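The salience-versus-redundancy balance that AREDSUM-CTX learns can be illustrated with a simpler, non-learned greedy selection in the style of maximal marginal relevance; the lambda weight, cosine similarity, and mean-pooled document embedding below are illustrative assumptions, not the paper's learned model:

```python
# Greedy salience/redundancy trade-off in the style of MMR: score salience first
# (similarity to the document), then penalize candidates that overlap with what
# has already been selected. Lambda and cosine similarity are illustrative choices.
import numpy as np

def greedy_select(sent_embs, doc_emb, k=3, lam=0.7):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    salience = [cos(e, doc_emb) for e in sent_embs]   # step 1: relevance to document
    selected = []
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i, e in enumerate(sent_embs):
            if i in selected:
                continue
            redundancy = max((cos(e, sent_embs[j]) for j in selected), default=0.0)
            score = lam * salience[i] - (1 - lam) * redundancy  # step 2: balance both
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

emb = np.random.rand(8, 384)                 # stand-in sentence embeddings
print(greedy_select(emb, emb.mean(axis=0)))  # document embedding = mean of sentences
```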
This work proposes a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: a query attention model, which learns to focus on different portions of the query at different time steps, and a new diversity-based attention model, which aims to alleviate the problem of repeated phrases in the summary.
This work introduces Multi-News, the first large-scale multi-document summarization (MDS) news dataset, and proposes an end-to-end model which combines a traditional extractive summarization model with a standard single-document summarization (SDS) model, achieving competitive results on MDS datasets.
This paper formulates the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries are matched in a semantic space, and proposes a semantic matching framework accordingly.
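A rough sketch of the summary-level matching idea: embed the source document and each candidate summary with the same encoder and rank candidates by similarity. The encoder and the brute-force candidate generation below are illustrative assumptions; the paper itself matches documents and candidates with a fine-tuned Siamese-BERT model over candidates pruned by a sentence-level extractor:

```python
# Sketch of summary-level semantic matching: embed the document and every
# candidate summary, then pick the candidate whose embedding is closest to the
# document's. Encoder choice and candidate enumeration are illustrative.
from itertools import combinations
import numpy as np
from sentence_transformers import SentenceTransformer

def best_candidate(document_sentences, n_select=2):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_emb = model.encode(" ".join(document_sentences))
    candidates = [" ".join(c) for c in combinations(document_sentences, n_select)]
    cand_embs = model.encode(candidates)
    sims = cand_embs @ doc_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(doc_emb) + 1e-8
    )
    return candidates[int(sims.argmax())]

print(best_candidate([
    "The committee approved the new budget.",
    "Debate lasted for three hours.",
    "The budget increases school funding by 5 percent.",
    "A reporter asked about the vote count.",
]))
```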
It is proposed to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models, formalizing narrative structure in terms of key narrative events (turning points) and treating it as latent in order to summarize screenplays.