3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Discourse Parsing
Use these libraries to find Discourse Parsing models and implementations
An RST segmentation and parsing system that adapts models and feature sets from previous work and can process short documents, such as news articles or essays, in under a second.
Non-news datasets are found to be slightly easier to transfer to than news datasets when the training and test sets differ substantially, and a statistic from the theoretical domain-adaptation literature is proposed that can be tied directly to the error gap.
This work proposes a new task of decomposing each complex sentence into simple sentences derived from the tensed clauses in the source, and a novel formulation as a graph-edit task that learns to Accept, Break, Copy, or Drop elements of a graph combining word adjacency and grammatical dependencies.
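The Accept/Break/Copy/Drop vocabulary can be illustrated with a toy sketch. Note this simplification operates on a flat token sequence rather than the adjacency-and-dependency graph the work describes, and the operation semantics shown here are my own assumption for illustration:

```python
from enum import Enum

class EditOp(Enum):
    ACCEPT = "accept"  # keep the element as-is
    BREAK = "break"    # start a new simple sentence at this element
    COPY = "copy"      # duplicate the element into later sentences (e.g. a shared subject)
    DROP = "drop"      # delete the element (e.g. a conjunction)

def apply_ops(tokens, ops):
    """Apply one predicted edit operation per token, yielding simple sentences."""
    sentences, cur, copied = [], [], []
    for tok, op in zip(tokens, ops):
        if op is EditOp.BREAK and cur:
            sentences.append(" ".join(cur))
            cur = list(copied)  # copied elements are repeated in the new sentence
        if op is EditOp.DROP:
            continue
        if op is EditOp.COPY:
            copied.append(tok)
        cur.append(tok)
    if cur:
        sentences.append(" ".join(cur))
    return sentences

# "She left and stayed" -> copy the subject, drop the conjunction, break before "stayed"
simple = apply_ops(
    ["She", "left", "and", "stayed"],
    [EditOp.COPY, EditOp.ACCEPT, EditOp.DROP, EditOp.BREAK],
)
# -> ["She left", "She stayed"]
```

In the actual task these operations are predicted by a learned model over graph elements; the oracle labels above are hand-picked for the example.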
An efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory, comprising a discourse segmenter that identifies the elementary discourse units (EDUs) in a text and a discourse parser that constructs a discourse tree in a top-down fashion.
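The two-stage segment-then-parse pipeline can be sketched minimally. The heuristics below are stand-ins of my own (a connective-based splitter and a midpoint split rule), not the cited neural system, which learns both steps:

```python
import re
from typing import List, Union

def segment_edus(text: str) -> List[str]:
    """Naive EDU segmenter: split at commas that precede a common
    discourse connective (a stand-in for a learned segmenter)."""
    parts = re.split(r",\s+(?=(?:but|because|although|while)\b)", text)
    return [p.strip() for p in parts if p.strip()]

Tree = Union[str, tuple]

def parse_top_down(edus: List[str]) -> Tree:
    """Top-down tree construction: recursively split the EDU span.
    A real parser scores candidate split points; here we take the midpoint."""
    if len(edus) == 1:
        return edus[0]
    mid = len(edus) // 2
    return (parse_top_down(edus[:mid]), parse_top_down(edus[mid:]))

edus = segment_edus("He stayed home, because it rained, but he was happy.")
tree = parse_top_down(edus)
# edus -> ["He stayed home", "because it rained", "but he was happy."]
```

A learned parser would also label each internal node with a rhetorical relation and nuclearity, which this sketch omits.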
A representation learning approach, in which surface features are transformed into a latent space that facilitates RST discourse parsing, which obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank.
The task definition, the training and test sets, and the evaluation protocol and metric used during the CoNLL-2016 Shared Task are presented, which will serve as a benchmark for future research on shallow discourse parsing.
It is shown that a simple LSTM sequential discourse parser takes advantage of this multi-view and multi-task framework, achieving 12-15% error reductions over the authors' baseline and results that rival more complex state-of-the-art parsers.
A new discourse parser that is simpler than, yet competitive with, the state of the art for English (significantly better on two of three metrics), together with a harmonization of discourse treebanks across languages, enabling the first experiments on cross-lingual discourse parsing.
This paper describes a submission to the CoNLL-2015 shared task on shallow discourse parsing, how the UIMA framework was used to develop the parser, and how machine learning functionality was added to UIMA.
This paper proposes the first end-to-end discourse parser that jointly parses at both the syntax and discourse levels, as well as the first syntacto-discourse treebank, built by integrating the Penn Treebank and the RST Treebank.