3260 papers • 126 benchmarks • 313 datasets
Joint Entity and Relation Extraction is the task of extracting entity mentions and semantic relations between entities from unstructured text with a single model.
These leaderboards are used to track progress in Joint Entity and Relation Extraction.
Use these libraries to find Joint Entity and Relation Extraction models and implementations.
The multi-task setup reduces cascading errors between tasks, leverages cross-sentence relations through coreference links, and supports the construction of a scientific knowledge graph, which is used to analyze information in the scientific literature.
This framework significantly outperforms the state of the art on multiple information extraction tasks across datasets from different domains, and is particularly effective at detecting nested entity spans, with a significant F1 improvement on the ACE dataset.
This work examines the capabilities of DyGIE++, a unified multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction, and achieves state-of-the-art results across all three.
A novel domain-independent framework that jointly embeds entity mentions, relation mentions, text features, and type labels into two low-dimensional spaces; it adopts a partial-label loss function for noisily labeled data and introduces an object "translation" function to capture the cross-constraints that entities and relations impose on each other.
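To make the "translation" idea concrete, below is a minimal sketch of a TransE-style translation score, where a relation embedding maps the head entity embedding onto the tail entity embedding; the vectors here are random stand-ins, not the learned embeddings of the paper's model.

```python
import numpy as np

def translation_score(head_vec, rel_vec, tail_vec):
    """TransE-style translation score: a relation embedding should
    'translate' the head embedding onto the tail embedding, so a lower
    L2 distance means the triple fits the learned geometry better."""
    return np.linalg.norm(head_vec + rel_vec - tail_vec)

# Toy 4-dimensional embeddings; random stand-ins for learned vectors.
rng = np.random.default_rng(0)
head, rel, tail = rng.normal(size=(3, 4))
print(translation_score(head, rel, tail))
```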
A novel tagging scheme is proposed that can convert the joint extraction task to a tagging problem, and different end-to-end models are studied to extract entities and their relations directly, without identifying entities and relations separately.
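To illustrate how such a scheme folds both entities and relations into per-token tags, here is a hedged decoding sketch; the tag format {B,I,E,S}-{RelationType}-{1|2} follows the general idea, but the relation type CP ("Country-President") and the exact label inventory are illustrative assumptions.

```python
def decode_tags(tokens, tags):
    """Group role-tagged spans into (arg1, relation, arg2) triples.
    Assumed tag format: '{B,I,E,S}-{RelationType}-{1|2}' or 'O'."""
    spans = {}  # (relation type, argument role) -> list of tokens
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            continue
        _, rel, role = tag.split("-")
        spans.setdefault((rel, role), []).append(tok)
    triples = []
    for (rel, role), words in spans.items():
        if role == "1" and (rel, "2") in spans:
            triples.append((" ".join(words), rel, " ".join(spans[(rel, "2")])))
    return triples

tokens = ["Trump", "is", "president", "of", "the", "United", "States"]
tags   = ["S-CP-2", "O", "O", "O", "O", "B-CP-1", "E-CP-1"]
print(decode_tags(tokens, tags))
# [('United States', 'CP', 'Trump')]
```

Note that this simplistic decoder pairs at most one argument-1 span with one argument-2 span per relation type, mirroring a known limitation of pure tagging schemes on overlapping relations.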
SpERT, an attention model for span-based joint entity and relation extraction, is introduced, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation.
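A minimal sketch of the span-based recipe follows, with the neural span and pair classifiers replaced by toy stand-ins; classify_span, classify_pair, and max_width are illustrative names, not SpERT's API.

```python
from itertools import combinations

def extract(tokens, classify_span, classify_pair, max_width=5):
    """Span-based joint extraction (hedged sketch):
    1) enumerate candidate spans up to max_width tokens,
    2) keep spans the entity classifier accepts (filtering),
    3) classify pairs of surviving spans for relations.
    A real model scores spans and ordered pairs with a neural encoder;
    unordered pairs are used here for brevity."""
    spans = [(i, j) for i in range(len(tokens))
             for j in range(i + 1, min(i + max_width, len(tokens)) + 1)]
    entities = {s: label for s in spans
                if (label := classify_span(tokens, s)) != "none"}
    relations = [(a, rel, b) for a, b in combinations(entities, 2)
                 if (rel := classify_pair(tokens, a, b)) != "none"]
    return entities, relations

# Toy stand-in classifiers keyed on surface strings.
def classify_span(tokens, span):
    return {"Alice": "PER", "Acme": "ORG"}.get(" ".join(tokens[span[0]:span[1]]), "none")

def classify_pair(tokens, a, b):
    types = (classify_span(tokens, a), classify_span(tokens, b))
    return "works_for" if types == ("PER", "ORG") else "none"

print(extract(["Alice", "joined", "Acme"], classify_span, classify_pair))
# ({(0, 1): 'PER', (2, 3): 'ORG'}, [((0, 1), 'works_for', (2, 3))])
```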
It is argued that it can be beneficial to design two distinct encoders to capture these two different types of information during learning, and the novel table-sequence encoder is proposed, in which two different encoders, a table encoder and a sequence encoder, help each other in the representation learning process.
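A much-simplified PyTorch sketch of that mutual interaction is shown below, assuming a single round of sequence-to-table and table-to-sequence updates; the paper's actual architecture (multi-layer, with richer table-cell recurrences and attention) is considerably more elaborate, so this is only an illustration of the data flow.

```python
import torch
import torch.nn as nn

class TableSequenceSketch(nn.Module):
    """Hedged sketch: a sequence encoder feeds a table (one cell per word
    pair), and the table feeds back into the sequence via row pooling.
    Dimensions and update rules are illustrative, not the paper's."""
    def __init__(self, d):
        super().__init__()
        self.seq = nn.GRU(d, d, batch_first=True)
        self.cell = nn.Linear(2 * d, d)   # builds table cell from (s_i, s_j)
        self.fuse = nn.Linear(2 * d, d)   # folds table rows back into sequence

    def forward(self, x):                 # x: (batch, n, d)
        s, _ = self.seq(x)                        # sequence encoder
        n = s.size(1)
        si = s.unsqueeze(2).expand(-1, n, n, -1)  # (batch, n, n, d)
        sj = s.unsqueeze(1).expand(-1, n, n, -1)
        table = torch.tanh(self.cell(torch.cat([si, sj], dim=-1)))  # table encoder
        row = table.mean(dim=2)                   # row-pooled table summary
        s2 = torch.tanh(self.fuse(torch.cat([s, row], dim=-1)))     # refined sequence
        return s2, table   # s2 -> NER tagging, table cells -> relation labels

model = TableSequenceSketch(d=16)
seq_repr, table_repr = model(torch.randn(2, 5, 16))
print(seq_repr.shape, table_repr.shape)
# torch.Size([2, 5, 16]) torch.Size([2, 5, 5, 16])
```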
This work presents a simple pipelined approach for entity and relation extraction, and establishes the new state-of-the-art on standard benchmarks, obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders.
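The relation step of such a pipeline can be sketched as inserting typed markers around the predicted entity spans before re-encoding the sentence for relation classification; the marker strings below are illustrative, not the exact tokens used in the paper.

```python
def insert_typed_markers(tokens, subj, obj):
    """Wrap subject/object spans in typed markers so a relation classifier
    sees both the span boundaries and the predicted entity types.
    Spans are (start, end, type) with inclusive token indices."""
    (ss, se, st), (os_, oe, ot) = subj, obj
    out = []
    for i, tok in enumerate(tokens):
        if i == ss: out.append(f"<S:{st}>")
        if i == os_: out.append(f"<O:{ot}>")
        out.append(tok)
        if i == se: out.append(f"</S:{st}>")
        if i == oe: out.append(f"</O:{ot}>")
    return out

print(insert_typed_markers(["Alice", "joined", "Acme"],
                           subj=(0, 0, "PER"), obj=(2, 2, "ORG")))
# ['<S:PER>', 'Alice', '</S:PER>', 'joined', '<O:ORG>', 'Acme', '</O:ORG>']
```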
This work proposes a novel span representation approach, named Packed Levitated Markers (PL-Marker), which models the interrelation between span pairs by strategically packing the markers in the encoder, and proposes a neighborhood-oriented packing strategy that considers neighboring spans jointly to better model entity boundary information.
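A hedged sketch of the packing idea: rather than inserting marker tokens inline (which costs one encoder pass per span), "levitated" marker pairs are appended after the sentence, each sharing the position id of the span boundary it stands for, so many candidate spans can share one encoder pass. The attention masking that ties each marker pair together is omitted here, and the marker token names are illustrative.

```python
def pack_levitated_markers(tokens, spans):
    """Append one (start, end) marker pair per candidate span after the
    sentence; each marker reuses the position id of the span boundary it
    represents, so the encoder treats it as 'floating' over that position."""
    input_tokens = list(tokens)
    position_ids = list(range(len(tokens)))
    for start, end in spans:               # inclusive span indices
        input_tokens += ["[M_START]", "[M_END]"]
        position_ids += [start, end]       # markers reuse span positions
    return input_tokens, position_ids

toks, pos = pack_levitated_markers(["Alice", "joined", "Acme"],
                                   spans=[(0, 0), (2, 2)])
print(toks)  # ['Alice', 'joined', 'Acme', '[M_START]', '[M_END]', '[M_START]', '[M_END]']
print(pos)   # [0, 1, 2, 0, 0, 2, 2]
```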
This paper develops a sequence-to-sequence approach, seq2rel, that can learn the subtasks of document-level relation extraction (DocRE) end-to-end, replacing a pipeline of task-specific components, and demonstrates that this end-to-end approach outperforms a pipeline-based approach.
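To show what an end-to-end sequence-to-sequence target looks like, here is a hedged linearization sketch in which each relation becomes a short string the decoder must generate; the special-token schema (@TYPE@ and @REL@ markers) is illustrative and not necessarily seq2rel's exact format.

```python
def linearize(triples):
    """Turn (head, head_type, tail, tail_type, relation) tuples into one
    target string for a seq2seq model; decoding the string back out
    recovers the extracted relations."""
    parts = []
    for head, head_type, tail, tail_type, rel in triples:
        parts.append(f"{head} @{head_type}@ {tail} @{tail_type}@ @{rel}@")
    return " ".join(parts)

print(linearize([("fenoprofen", "DRUG", "red cell aplasia", "DISEASE", "ADE")]))
# fenoprofen @DRUG@ red cell aplasia @DISEASE@ @ADE@
```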