Dialog Relation Extraction is the task of predicting the relation type between pairs of entities (arguments) mentioned in a dialogue. Unlike sentence-level RE, the evidence for a relation is often spread across multiple speaker turns, so models must aggregate information from the whole conversation. The most popular benchmark for this task is the DialogRE dataset, and models are typically evaluated with the F1 score in both the standard and the conversational setting.
(Image credit: Papersgraph)
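For concreteness, the sketch below shows one way to compute a micro-averaged F1 over predicted (argument pair, relation) labels. It is a minimal illustration under the assumption of multi-label evaluation, not the official DialogRE scorer; the conversational variant (F1c) additionally restricts, roughly, how much of the dialogue the model may condition on, but the F1 arithmetic is the same.

```python
def micro_f1(gold, pred):
    """Micro-averaged precision/recall/F1 over (pair_id, relation) tuples.

    Minimal sketch of multi-label relation-extraction scoring; the official
    DialogRE evaluation script may differ in detail.
    """
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy usage with hypothetical pair ids and relation labels:
gold = [("pair_1", "per:friends"), ("pair_2", "per:title")]
pred = [("pair_1", "per:friends"), ("pair_2", "per:spouse")]
print(micro_f1(gold, pred))  # (0.5, 0.5, 0.5)
```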
These leaderboards are used to track progress in Dialog Relation Extraction.
Use these libraries to find Dialog Relation Extraction models and implementations.
Based on an analysis of the similarities and differences between dialogue-based and traditional RE tasks, the authors argue that speaker-related information plays a critical role in the proposed task, and design a new metric to evaluate the performance of RE methods in a conversational setting.
This paper proposes GDPNet, which constructs a latent multi-view graph to capture various possible relationships among tokens and refines this graph to select important words for relation prediction; GDPNet achieves the best performance on dialogue-level RE and comparable performance with the state of the art on sentence-level RE.
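As a rough illustration of the latent multi-view graph idea only (this is not GDPNet, whose graph refinement and pooling are more involved), the hypothetical PyTorch sketch below induces several soft adjacency matrices over token representations, one per view, from per-view attention scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMultiViewGraph(nn.Module):
    """Induce one soft adjacency matrix per 'view' from token embeddings."""
    def __init__(self, hidden: int, views: int):
        super().__init__()
        self.hidden = hidden
        self.views = views
        self.q = nn.Linear(hidden, hidden * views)
        self.k = nn.Linear(hidden, hidden * views)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, hidden) -> (batch, views, seq_len, seq_len)
        b, n, _ = tokens.shape
        q = self.q(tokens).view(b, n, self.views, self.hidden).transpose(1, 2)
        k = self.k(tokens).view(b, n, self.views, self.hidden).transpose(1, 2)
        scores = q @ k.transpose(-1, -2) / self.hidden ** 0.5
        # Each row is a soft distribution over possible edges for that token.
        return F.softmax(scores, dim=-1)

adjacency = LatentMultiViewGraph(hidden=64, views=3)(torch.randn(2, 10, 64))
print(adjacency.shape)  # torch.Size([2, 3, 10, 10])
```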
The authors crawl movie scripts from IMSDb, annotate the relation label of each session according to 13 pre-defined relationships, and construct session-level and pair-level relation classification tasks with widely accepted baselines.
An attention-based heterogeneous graph network is presented to handle the dialogue relation extraction task in an inductive manner, showing superior performance on the benchmark dataset DialogRE.
A simple yet effective model named SimpleRE is proposed for the RE task; it captures the interrelations among multiple relations in a dialogue through a novel input format named the BERT Relation Token Sequence.
A Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt) is proposed, which injects the latent knowledge contained in relation labels into prompt construction through learnable virtual type words and answer words.
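To give a sense of the general mechanism (learnable virtual tokens scored at a masked position), the hypothetical sketch below adds virtual marker and relation tokens to a masked language model's vocabulary and reads relation scores off the [MASK] logits. The template, the token names ([sub], [obj], [REL0]..[REL2]), and the roberta-base checkpoint are assumptions for illustration; KnowPrompt's actual prompt construction and its synergistic optimization differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "roberta-base"  # illustrative choice, not prescribed by the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

# Virtual marker tokens around the entities and one virtual "answer word"
# per relation label; their new embedding rows are randomly initialized
# and would be trained in practice.
virtual_tokens = ["[sub]", "[obj]"] + [f"[REL{i}]" for i in range(3)]
tokenizer.add_special_tokens({"additional_special_tokens": virtual_tokens})
model.resize_token_embeddings(len(tokenizer))

dialogue = "Speaker 1: Hey Monica! Speaker 2: Hi Ross, my brother."
subj, obj = "Speaker 2", "Speaker 1"
prompt = f"{dialogue} [sub] {subj} [obj] {obj} {tokenizer.mask_token}"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Score each relation by the logit of its virtual answer word at [MASK].
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
rel_ids = tokenizer.convert_tokens_to_ids([f"[REL{i}]" for i in range(3)])
scores = logits[0, mask_pos, rel_ids]
print(scores)  # untrained virtual embeddings, so these scores are meaningless
```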
To the authors' knowledge, this work is the first to leverage a formal semantic representation in neural dialogue modeling; compared with textual input alone, AMR explicitly provides core semantic knowledge and reduces data sparsity.
This paper proposes the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN), modeled on the way people understand dialogues, together with a novel approach that treats emotion recognition in conversations (ERC) as a dialogue-based RE task.
This work proposes D-REX, a model-agnostic framework and policy-guided semi-supervised algorithm that optimizes for explanation quality and relation extraction simultaneously; it frames relation extraction as a re-ranking task and includes relation- and entity-specific explanations as an intermediate step of the inference process.
This work introduces SOLS, a novel model that explicitly induces speaker-oriented latent structures for better dialogue RE; it learns latent structures that capture relationships among tokens beyond utterance boundaries, alleviating the entangled-logic issue.