3260 papers • 126 benchmarks • 313 datasets
Meeting summarization is the task of generating a summary from meeting transcriptions. A survey of this task: Abstractive Meeting Summarization: A Survey
These leaderboards are used to track progress in meeting summarization.
Use these libraries to find meeting summarization models and implementations.
No subtasks available.
A novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised, relies on no annotations, takes external semantic knowledge into account, and uses custom diversity and informativeness measures.
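The core idea behind unsupervised graph-based summarization can be sketched as follows: build a similarity graph over utterances and rank them by centrality. This is a minimal TextRank-style illustration of the general approach, not the specific framework described above; the tokenization, similarity measure, and example transcript are all simplifying assumptions.

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rank_utterances(utterances):
    # Build a similarity graph over utterances and score each node by its
    # total edge weight (degree centrality) -- a TextRank-style
    # unsupervised extractive ranking.
    bows = [Counter(u.lower().split()) for u in utterances]
    n = len(bows)
    scores = [sum(cosine(bows[i], bows[j]) for j in range(n) if j != i)
              for i in range(n)]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

meeting = [
    "we should ship the release on friday",
    "the release needs one more round of testing",
    "lunch options are pizza or salad",
]
order = rank_utterances(meeting)  # off-topic utterance ranks last
```

In a full pipeline, the top-ranked utterances (or paths through the graph) would then be compressed or fused into an abstractive summary.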
A novel abstractive summary network that adapts to the meeting scenario is proposed, with a hierarchical structure to accommodate long meeting transcripts and a role vector to capture the differences among speakers.
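Hierarchical models of this kind first split a long transcript into segments that fit the encoder's input window, then combine segment-level representations. A minimal sketch of that windowing step, assuming whitespace tokenization and an illustrative helper name:

```python
def chunk_transcript(utterances, max_tokens=512):
    # Greedily pack consecutive utterances into segments of at most
    # max_tokens words; a hierarchical summarizer encodes each segment
    # separately, then aggregates the segment representations.
    chunks, current, count = [], [], 0
    for u in utterances:
        n = len(u.split())
        if current and count + n > max_tokens:
            chunks.append(current)
            current, count = [], 0
        current.append(u)
        count += n
    if current:
        chunks.append(current)
    return chunks

chunks = chunk_transcript(["a b c", "d e", "f g h i"], max_tokens=5)
```

Keeping utterances intact at segment boundaries (rather than splitting mid-utterance) preserves speaker turns, which is what role vectors are attached to.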
It is suggested that considering a wider variety of tasks would improve the field in terms of generalization and robustness, and it is shown that automatic alignment is relevant for data annotation, since it leads to large improvements of almost +4 on all ROUGE scores for the summarization task.
Summ^N is the first multi-stage split-then-summarize framework for long input summarization; it outperforms previous state-of-the-art methods, improving ROUGE scores on three long meeting summarization datasets: AMI, ICSI, and QMSum.
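ROUGE, the metric used throughout these results, measures n-gram overlap between a candidate summary and a reference. A minimal sketch of ROUGE-1 F1 in pure Python, assuming simple whitespace tokenization (the official scorer also handles stemming and other n-gram orders):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    # ROUGE-1 F1: clipped unigram overlap between reference and
    # candidate, combined as the harmonic mean of precision and recall.
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    if not ref or not cand:
        return 0.0
    overlap = sum((ref & cand).values())  # clipped unigram matches
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the team agreed to ship on friday",
                 "the team will ship friday")
```

A "+4 ROUGE" gain, as reported above, means roughly four points of this score on a 0–100 scale.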
An overview of the challenges raised by the task of abstractive meeting summarization and of the data sets, models, and evaluation metrics that have been used to tackle the problems is provided.
This paper creates gold-standard annotations for domain terminology (jargon terms) on a sizable meeting corpus and reveals that domain terminology can have a substantial impact on summarization performance.
A Dialogue Discourse-Aware Meeting Summarizer (DDAMS) is proposed to explicitly model the interaction between utterances in a meeting by modeling different discourse relations, using a relational graph encoder in which utterances and discourse relations interact in a graph structure.
This work defines a new query-based multi-domain meeting summarization task, where models have to select and summarize relevant spans of meetings in response to a query, and introduces QMSum, a new benchmark for this task.
Improved performance on query-based meeting summarization is achieved by adding query embeddings to the model input, by using BART as an alternative language model, and by using clustering methods to extract key information at the utterance level before feeding the text into the summarization model.
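The pre-selection step mentioned above, choosing relevant utterances before running an abstractive model, can be sketched with simple lexical overlap scoring. This is an illustrative baseline, not the clustering method the paper uses; the helper name, scoring rule, and example transcript are all assumptions.

```python
def select_relevant(utterances, query, k=2):
    # Rank utterances by word overlap with the query and keep the
    # top-k in their original order, as a cheap pre-selection step
    # before handing the reduced text to an abstractive summarizer.
    q = set(query.lower().split())
    scored = [(len(q & set(u.lower().split())), i)
              for i, u in enumerate(utterances)]
    top = sorted(scored, reverse=True)[:k]
    keep = sorted(i for _, i in top)
    return [utterances[i] for i in keep]

meeting = [
    "budget review is moved to next week",
    "the new hire starts monday",
    "we are over budget on the cloud bill",
]
relevant = select_relevant(meeting, "what was said about the budget", k=2)
```

Stronger systems replace the overlap score with embedding similarity or utterance clustering, but the shape of the pipeline (filter, then summarize) is the same.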