3260 papers • 126 benchmarks • 313 datasets
Summarizing a technical or scientific document in simple, non-technical language that is comprehensible to a lay person (non-expert).
These leaderboards are used to track progress in scientific document summarization.
Two novel lay summarisation datasets are presented: PLOS (large-scale) and eLife (medium-scale). Each contains biomedical journal articles paired with expert-written lay summaries. The two datasets differ in readability and abstractiveness, and these differences can be leveraged to support the needs of different applications.
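Abstractiveness of a lay summary relative to its source article is commonly measured as the fraction of summary n-grams that never appear in the article. A minimal sketch of that measure follows; the whitespace tokenizer and the example texts are illustrative, not taken from the PLOS/eLife release:

```python
def ngrams(tokens, n):
    # All contiguous n-grams of a token list, as a set.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(article, summary, n=2):
    # Fraction of summary n-grams absent from the article.
    # Higher values indicate a more abstractive summary.
    art = ngrams(article.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    if not summ:
        return 0.0
    return len(summ - art) / len(summ)

article = "the protein binds the receptor and triggers the signalling cascade"
summary = "the protein attaches to a receptor and starts a chain reaction"
print(novel_ngram_ratio(article, summary))  # → 0.8
```

Varying `n` trades sensitivity for strictness: unigram novelty captures new vocabulary, while longer n-grams capture rephrasing of whole spans.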
This work develops three text generation techniques for controlling readability: instruction-based readability control; reinforcement learning that minimizes the gap between the requested and observed readability; and a decoding approach that uses lookahead to estimate the readability of upcoming decoding steps.
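The third technique can be sketched without a neural model: score each candidate continuation by the readability of the hypothetical completed text and keep the one closest to the requested target. The sketch below uses the classic Flesch reading ease formula with a heuristic vowel-group syllable counter; the function names, candidate list, and target value are illustrative assumptions, not the paper's implementation:

```python
import re

def count_syllables(word):
    # Heuristic: one syllable per run of consecutive vowels, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def lookahead_pick(prefix, candidates, target):
    # Keep the continuation whose projected readability is closest to the target.
    return min(candidates, key=lambda c: abs(flesch_reading_ease(prefix + " " + c) - target))

prefix = "The drug lowers blood pressure"
candidates = [
    "by antagonising adrenergic receptor signalling pathways.",
    "by blocking signals that make vessels tighten.",
]
print(lookahead_pick(prefix, candidates, target=80.0))
```

With a high readability target, the plainer continuation wins; lowering the target would favour the technical one. In a real decoder, the candidates would be the top-k continuations proposed by the language model at each step.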