These leaderboards are used to track progress in Unsupervised Extractive Summarization.
This work proposes the first model for abstractive summarization of single, longer-form documents (e.g., research papers), consisting of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary.
A fully unsupervised, extractive text summarization system that leverages a submodularity framework that allows summaries to be generated in a greedy way while preserving near-optimal performance guarantees is presented.
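The greedy-with-guarantees idea above can be sketched as follows. This is a minimal illustration of greedy submodular maximization, not the paper's exact objective: the `coverage` function and the toy similarity matrix are assumptions, chosen only to show why marginal-gain greedy selection of a monotone submodular objective carries a (1 - 1/e) near-optimality guarantee.

```python
def coverage(selected, sim):
    # Monotone submodular objective: each sentence is "covered" as well as
    # its closest selected sentence covers it.
    if not selected:
        return 0.0
    return sum(max(sim[i][j] for j in selected) for i in range(len(sim)))

def greedy_summarize(sim, budget):
    """Greedily pick `budget` sentences; sim[i][j] is a similarity
    score between sentences i and j."""
    selected = []
    while len(selected) < budget:
        remaining = [i for i in range(len(sim)) if i not in selected]
        if not remaining:
            break
        # Pick the sentence with the largest marginal coverage gain.
        best = max(remaining,
                   key=lambda i: coverage(selected + [i], sim) - coverage(selected, sim))
        selected.append(best)
    return sorted(selected)

# Toy similarity matrix for 4 sentences (illustrative values only):
# sentences 0/1 are near-duplicates, as are 2/3, so a budget of 2
# should pick one from each cluster.
sim = [
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
]
print(greedy_summarize(sim, 2))  # → [1, 2]
```

Because the objective is submodular, a sentence's marginal gain shrinks once a similar sentence is already selected, which is what pushes the greedy picks toward covering both clusters.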
This work proposes the task of summarizing such legal documents in plain English, which would enable users to better understand the terms they are accepting, and issues a call for resource and technique development for simplification and style transfer of legal language.
This work develops an unsupervised approach, arguing that it is unrealistic to expect large-scale, high-quality training data to be available or created for every type of summary, domain, or language.
This work finds that transformer attention can be used to rank sentences for unsupervised extractive summarization: it first pre-trains a hierarchical transformer model using unlabeled documents only, then proposes a method to rank sentences using sentence-level self-attention and the pre-training objectives.
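The attention-ranking idea can be sketched in a few lines. This is an abstraction of the approach, not the paper's exact procedure: it assumes a square matrix of sentence-level self-attention weights (as a hierarchical encoder would produce) and scores each sentence by how much attention mass it receives from the others.

```python
def rank_by_attention(attn):
    """Rank sentences by received attention.
    attn[i][j]: attention weight sentence i pays to sentence j
    (each row is assumed to sum to 1, as softmax outputs would)."""
    n = len(attn)
    # A sentence's salience score is the total attention it receives
    # from the other sentences (diagonal self-attention excluded).
    scores = [sum(attn[i][j] for i in range(n) if i != j) for j in range(n)]
    return sorted(range(n), key=lambda j: scores[j], reverse=True)

# Toy attention matrix for 3 sentences (illustrative values only):
# sentence 0 receives the most attention, so it ranks first.
attn = [
    [0.2, 0.5, 0.3],
    [0.6, 0.2, 0.2],
    [0.5, 0.4, 0.1],
]
print(rank_by_attention(attn))  # → [0, 1, 2]
```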
This work proposes a graph-based unsupervised abstractive summarization system in the single-document setting for Bengali text documents, which requires only a Part-Of-Speech (POS) tagger and a pre-trained language model trained on Bengali texts.
This work proposes new metrics of relevance and redundancy using pointwise mutual information (PMI) between sentences, which can be easily computed by a pre-trained language model and outperforms similarity-based methods on datasets in a range of domains.
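The PMI scoring above can be sketched as follows. This is a simplified illustration under stated assumptions: the paper scores sentence pairs with a pre-trained language model, while here the two log-probabilities per sentence (conditional on the document context, and marginal) are supplied as toy numbers standing in for LM scores.

```python
def pmi(logp_conditional, logp_marginal):
    # PMI = log p(s | context) - log p(s).
    # PMI > 0 means the context makes the sentence more likely,
    # i.e. the sentence is relevant to the document.
    return logp_conditional - logp_marginal

def rank_sentences(sentences, logp_cond, logp_marg):
    """Rank sentences by PMI-style relevance.
    logp_cond[i]: log p(sentence_i | rest of document), assumed from an LM;
    logp_marg[i]: unconditional log p(sentence_i), assumed from the same LM."""
    scores = [pmi(c, m) for c, m in zip(logp_cond, logp_marg)]
    order = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in order]

# Toy log-probabilities: sentence "s0" gains the most likelihood from
# the document context, so it is ranked most relevant.
sents = ["s0", "s1", "s2"]
print(rank_sentences(sents,
                     logp_cond=[-2.0, -5.0, -4.0],
                     logp_marg=[-6.0, -5.5, -4.5]))  # → ['s0', 's1', 's2']
```

The same PMI quantity computed between candidate summary sentences, rather than between a sentence and the document, gives the redundancy side of the metric: a high-PMI pair of selected sentences is likely saying the same thing.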
Faceted summarization will spur further advances in summarization research and foster the development of NLP systems that can leverage the structured information in both long texts and summaries, according to this study.
This work pioneers ESCOFILT, the first extractive summarization-based collaborative filtering model; it argues that this approach enhances both rating-prediction accuracy and user/item explainability, and proposes a comprehensive set of criteria for assessing the real-life explainability of explanations.