3260 papers • 126 benchmarks • 313 datasets
Parsing a text into a set of discourse relations that hold between two adjacent or non-adjacent discourse units in the absence of explicit connectives, such as 'but' or 'however', and classifying those relations. (Source: Adapted from https://www.cs.brandeis.edu/~clp/conll15st/intro.html)
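The task's input/output shape can be made concrete with a minimal sketch. Everything below is illustrative: the example argument pairs, the subset of PDTB-style top-level senses, and the majority-class baseline are assumptions for demonstration, not part of any benchmark or dataset.

```python
from dataclasses import dataclass
from collections import Counter

# PDTB-style top-level senses (illustrative subset).
SENSES = ["Comparison", "Contingency", "Expansion", "Temporal"]

@dataclass
class ImplicitRelation:
    arg1: str   # first discourse unit
    arg2: str   # second, adjacent or non-adjacent discourse unit
    sense: str  # gold relation label

# Toy instances: note that no explicit connective joins the two arguments.
data = [
    ImplicitRelation("The market fell sharply.",
                     "Investors had expected stronger earnings.",
                     "Contingency"),
    ImplicitRelation("It rained all morning.",
                     "The match was called off.",
                     "Contingency"),
    ImplicitRelation("She studied all night.",
                     "She failed the exam.",
                     "Comparison"),
    ImplicitRelation("He bought vegetables.",
                     "He picked up tomatoes and onions.",
                     "Expansion"),
]

def majority_baseline(train):
    """Return a classifier predicting the most frequent training sense."""
    label, _ = Counter(r.sense for r in train).most_common(1)[0]
    return lambda arg1, arg2: label

classify = majority_baseline(data)
print(classify("The talks collapsed.", "No agreement was signed."))
```

Real systems replace the baseline with a model over the argument pair, but the interface — two text spans in, one sense label out — stays the same.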
A method is proposed to automatically extract implicit discourse relation argument pairs and labels from a dataset of dialogic turns, resulting in a novel corpus of discourse relation pairs, the first of its kind to attempt to identify the discourse relations connecting dialogic turns in open-domain discourse.
A novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences that outperforms state-of-the-art alternatives on two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus.
This work proposes and investigates transfer-learning and active-learning solutions to the rare-class problem of dissonance detection, utilizing models trained on closely related tasks and evaluating acquisition strategies, including a proposed probability-of-rare-class (PRC) approach.
This work presents a new system using zero-shot transfer learning for implicit discourse relation classification, where the only resource used for the target language is unannotated parallel text.
An end-to-end neural model is designed to explicitly generate discourse connectives for the task, inspired by the annotation process of PDTB; it significantly outperforms various baselines on three datasets, demonstrating its superiority for the task.
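The PDTB annotation process the summary above alludes to has two steps: annotators first insert a connective that could plausibly join the two arguments, then label the relation sense that connective expresses. The second step can be sketched as a lookup; the connective-to-sense table below is a hypothetical illustrative subset, not the actual PDTB sense mapping or the model from the paper.

```python
# Hypothetical mapping from an inserted (or generated) connective to a
# PDTB-style top-level sense; the real PDTB sense hierarchy is richer
# and many connectives are ambiguous between senses.
CONNECTIVE_TO_SENSE = {
    "but": "Comparison",
    "however": "Comparison",
    "because": "Contingency",
    "so": "Contingency",
    "for example": "Expansion",
    "in other words": "Expansion",
    "then": "Temporal",
    "meanwhile": "Temporal",
}

def sense_from_connective(connective: str) -> str:
    """Map a generated connective to a relation sense.

    Falls back to 'Expansion' (an assumed default for this sketch)
    when the connective is not in the table.
    """
    return CONNECTIVE_TO_SENSE.get(connective.lower().strip(), "Expansion")

print(sense_from_connective("Because"))  # maps to Contingency
```

A generation-based classifier effectively learns the first step (which connective fits the argument pair) and derives the label from it, rather than predicting the sense directly.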
This work argues that a powerful contextualized representation module, a bilateral multi-perspective matching module, and a global information fusion module are all important to implicit discourse analysis, and proposes a novel model that combines these modules.