3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Scientific Document Summarization
Use these libraries to find Scientific Document Summarization models and implementations
The first large-scale manually annotated corpus for scientific papers is developed and released, enabled by a faster annotation process, and hybrid summarization methods are proposed that integrate the authors' original highlights with the article's actual impact on the community to create comprehensive summaries.
This overview describes the participation and the official results of the CL-SciSumm 2019 Shared Task, organized as part of the 42nd Annual Conference of the Special Interest Group on Information Retrieval (SIGIR), held in Paris, France in July 2019.
This overview describes the official results of the CL-SciSumm Shared Task 2018 -- the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain and compares the participating systems in terms of two evaluation metrics.
Two novel lay summarisation datasets are presented, PLOS (large-scale) and eLife (medium-scale), each of which contains biomedical journal articles alongside expert-written lay summaries, highlighting differing levels of readability and abstractiveness between datasets that can be leveraged to support the needs of different applications.
This paper proposes HAESum, a novel approach utilizing graph neural networks to locally and globally model documents based on their hierarchical discourse structure, and introduces a novel hypergraph self-attention layer to enhance the characterization of high-order inter-sentence relations.
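HAESum's actual hypergraph self-attention layer is more elaborate than can be shown here, but the core idea of letting sentences attend to the hyperedges (e.g. sections or discourse units) they belong to can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes, not the paper's implementation; the mean-pooled hyperedge features and scaled dot-product scoring are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hypergraph_self_attention(X, H):
    """One illustrative hypergraph attention pass (not HAESum's exact layer).

    X: (n, d) sentence embeddings.
    H: (n, m) incidence matrix, H[i, j] = 1 if sentence i belongs to
       hyperedge j (a hyperedge might be a section or discourse unit).
    Returns updated (n, d) sentence features.
    """
    # Hyperedge features: mean of member sentence embeddings.
    deg = H.sum(axis=0, keepdims=True)            # (1, m) hyperedge sizes
    E = (H.T @ X) / np.maximum(deg.T, 1.0)        # (m, d)
    # Scaled dot-product scores between sentences and hyperedges,
    # masked so a sentence only attends to hyperedges it belongs to.
    scores = (X @ E.T) / np.sqrt(X.shape[1])      # (n, m)
    scores = np.where(H > 0, scores, -1e9)
    A = softmax(scores, axis=1)
    return A @ E                                  # (n, d)
```

Masking by the incidence matrix is what distinguishes this from ordinary self-attention: high-order inter-sentence relations flow only through shared hyperedges rather than all sentence pairs.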