Generating a summary of a given sentence.
These leaderboards are used to track progress in Sentence Summarization.
No benchmarks available.
Use these libraries to find Sentence Summarization models and implementations.
No datasets available.
This work proposes several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.
This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
This paper proposes a new framework for sparse and structured attention, building upon a smoothed max operator, and shows that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism.
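The sparse attention mapping described above can be illustrated with a minimal sketch, assuming it corresponds to sparsemax (the Euclidean projection of a score vector onto the probability simplex, which is the gradient of one such smoothed max operator); this is a simplified stand-in for the paper's general framework:

```python
import numpy as np

def sparsemax(z):
    """Project scores z onto the probability simplex (sparsemax).

    A minimal sketch: sparsemax(z) = argmin_p ||p - z||^2 subject to p
    lying on the simplex. Unlike softmax, it can assign exactly zero
    probability to low-scoring inputs, giving sparse attention weights.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # Support size: largest k with 1 + k * z_(k) > sum of the top-k scores
    support = k * z_sorted > cumsum - 1
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_max   # threshold shared by the support
    return np.maximum(z - tau, 0.0)
```

For a peaked input such as `[3.0, 1.0, 0.1]`, the output places all mass on the first element, whereas softmax would still leak probability to the others.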
This work generates two novel multi-sentence summarization datasets from scientific articles and tests the suitability of a wide range of existing extractive and abstractive neural network-based summarization approaches, demonstrating that scientific papers are suitable for data-driven text summarization.
This work proposes an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time and validates the approach on sentence summarization, machine translation, and online speech recognition problems.
This work establishes a new state of the art for unsupervised sentence summarization according to ROUGE scores, and demonstrates that the commonly reported ROUGE F1 metric is sensitive to summary length.
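That length sensitivity can be seen with a toy computation; the sketch below is a simplified unigram-overlap ROUGE-1 F1 (not the official implementation, which adds stemming and other options):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Minimal ROUGE-1 F1: unigram overlap between candidate and reference.

    A simplified illustration. Precision rewards short candidates and
    recall rewards long ones, so F1 shifts with summary length.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Against the reference "the cat sat on the mat", the two-word candidate "the cat" already scores 0.5 purely from perfect precision, showing how truncation alone moves the metric.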
A novel copying framework, named Sequential Copying Networks (SeqCopyNet), which not only learns to copy single words but also copies sequences from the input sentence, leveraging pointer networks to explicitly select a sub-span from the source side for the target side.
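SeqCopyNet's exact span-copying architecture is described in the paper; as background, a generic pointer-style copy mixture (a common simplification over single tokens, not the paper's model) combines a vocabulary distribution with attention over source positions:

```python
import numpy as np

def copy_mixture(p_vocab, attention, src_ids, p_gen):
    """Mix generation and copying distributions (pointer-network style).

    p_vocab   : (V,) probability distribution over the output vocabulary
    attention : (S,) attention weights over source positions
    src_ids   : (S,) vocabulary id of each source token
    p_gen     : scalar in [0, 1], probability of generating vs. copying

    Copying routes attention mass to each source token's id, so rare or
    unseen source words can still be emitted in the summary.
    """
    p_final = p_gen * np.asarray(p_vocab, dtype=float)
    for pos, tok in enumerate(src_ids):
        # Add this position's copy mass to its token's final probability
        p_final[tok] += (1.0 - p_gen) * attention[pos]
    return p_final
```

Because both components are normalized, the mixture remains a valid distribution for any `p_gen` in [0, 1].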
An unsupervised approach to summarizing sentences abstractively using a Variational Autoencoder, showing that shorter sentences cannot beat a simple baseline but yield higher ROUGE scores than trying to reconstruct the whole sentence.