3260 papers • 126 benchmarks • 313 datasets
Scores are reported from systems that jointly extract entities and relations.
These leaderboards are used to track progress in Joint Entity and Relation Extraction.
Use these libraries to find Joint Entity and Relation Extraction models and implementations.
The multi-task setup reduces cascading errors between tasks, leverages cross-sentence relations through coreference links, and supports construction of a scientific knowledge graph that can be used to analyze information in scientific literature.
This framework significantly outperforms the state of the art on multiple information extraction tasks across datasets from different domains, and is particularly effective at detecting nested entity spans, with a significant F1 improvement on the ACE dataset.
A novel domain-independent framework jointly embeds entity mentions, relation mentions, text features, and type labels into two low-dimensional spaces; it adopts a partial-label loss function for noisily labeled data and introduces an object "translation" function to capture the cross-constraints that entities and relations place on each other.
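As a rough sketch of that translation idea (not CoType's full objective; the names and dimensions below are illustrative), the relation embedding acts as a vector offset between its two argument embeddings:

```python
import numpy as np

# TransE-style translation scoring: a relation embedding acts as a
# vector offset between its argument embeddings, so a plausible
# (head, relation, tail) triple has head + rel close to tail.
def translation_score(head, rel, tail):
    return np.linalg.norm(head + rel - tail)

rng = np.random.default_rng(0)
head, rel, tail = (rng.normal(size=50) for _ in range(3))
print(translation_score(head, rel, tail))   # large value = implausible triple
```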
A novel tagging scheme is proposed that converts the joint extraction task into a tagging problem, and several end-to-end models are studied that extract entities and their relations directly, without identifying them separately.
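For intuition, a minimal decoder for such a unified tag scheme might look like the following sketch. The `POSITION-RELTYPE-ROLE` tag format follows that scheme, while the tag names and the simple cartesian pairing of roles are assumptions for the example (the original paper pairs entities with a nearest-match heuristic):

```python
# Hypothetical decoder for a unified tag sequence. Each non-O tag has
# the form POSITION-RELTYPE-ROLE, e.g. "B-CF-1" = beginning of the
# head entity of a "CF" (Company-Founder) relation.

def decode_tags(tokens, tags):
    """Group tagged tokens into entity spans, then pair them into triples."""
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "O":
            current = []
            continue
        pos, rel, role = tag.split("-")
        if pos in ("B", "S"):          # a new span starts here
            current = [token]
        else:                          # "I" or "E": continue the span
            current.append(token)
        if pos in ("E", "S"):          # the span ends here
            entities.append((" ".join(current), rel, role))
            current = []
    triples = []
    for head, r1, role1 in entities:
        for tail, r2, role2 in entities:
            if role1 == "1" and role2 == "2" and r1 == r2:
                triples.append((head, r1, tail))
    return triples

tokens = ["Musk", "founded", "SpaceX", "."]
tags = ["S-CF-1", "O", "S-CF-2", "O"]
print(decode_tags(tokens, tags))       # [('Musk', 'CF', 'SpaceX')]
```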
This work examines the capabilities of DyGIE++, a unified multi-task framework for three information extraction tasks (named entity recognition, relation extraction, and event extraction), and achieves state-of-the-art results across all of them.
SpERT, an attention model for span-based joint entity and relation extraction, is introduced; it features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation.
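The span-based recipe can be summarized in a few lines. In this sketch, `classify_span` and `classify_pair` are placeholders standing in for SpERT's learned classifiers, not its actual API:

```python
def extract(tokens, classify_span, classify_pair, max_width=10):
    # 1. Enumerate every candidate span up to max_width tokens.
    spans = [(i, j) for i in range(len(tokens))
                    for j in range(i + 1, min(i + max_width, len(tokens)) + 1)]
    # 2. Entity filtering: keep spans classified as a real entity type.
    entities = {}
    for span in spans:
        label = classify_span(tokens, span)
        if label != "none":
            entities[span] = label
    # 3. Relation classification over ordered pairs of kept spans.
    relations = []
    for head in entities:
        for tail in entities:
            if head == tail:
                continue
            rel = classify_pair(tokens, head, tail)
            if rel != "none":
                relations.append((head, rel, tail))
    return entities, relations

# Toy usage with hard-coded stand-in classifiers:
ents, rels = extract(
    ["Musk", "founded", "SpaceX"],
    classify_span=lambda toks, s: {(0, 1): "PER", (2, 3): "ORG"}.get(s, "none"),
    classify_pair=lambda toks, h, t: "founder_of" if (h, t) == ((0, 1), (2, 3)) else "none",
)
print(ents, rels)   # {(0, 1): 'PER', (2, 3): 'ORG'} [((0, 1), 'founder_of', (2, 3))]
```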
It is argued that it can be beneficial to design two distinct encoders to capture these two different types of information during learning, and a novel table-sequence encoder is proposed in which two encoders, a table encoder and a sequence encoder, help each other in the representation learning process.
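A skeletal PyTorch rendering of that interaction is below; the GRU, mean pooling, and concatenation are simplifying assumptions, not the paper's actual encoder designs:

```python
import torch

class TableSequenceLayer(torch.nn.Module):
    """One interaction layer: sequence states and table cells update each other."""
    def __init__(self, dim):
        super().__init__()
        self.seq = torch.nn.GRU(dim, dim, batch_first=True)
        self.cell = torch.nn.Linear(3 * dim, dim)

    def forward(self, s, t):
        # s: (1, n, dim) token states; t: (n, n, dim) table cells
        n, d = s.shape[1], s.shape[2]
        # The sequence encoder also sees a row summary of the table.
        s2, _ = self.seq(s + t.mean(dim=1).unsqueeze(0))
        # Each table cell sees the pair of token states it relates.
        si = s2.squeeze(0).unsqueeze(1).expand(n, n, d)
        sj = s2.squeeze(0).unsqueeze(0).expand(n, n, d)
        t2 = torch.relu(self.cell(torch.cat([si, sj, t], dim=-1)))
        return s2, t2

layer = TableSequenceLayer(8)
s, t = torch.randn(1, 5, 8), torch.randn(5, 5, 8)
s, t = layer(s, t)
print(s.shape, t.shape)   # torch.Size([1, 5, 8]) torch.Size([5, 5, 8])
```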
A Table Filling Multi-Task Recurrent Neural Network model reduces the entity recognition and relation classification tasks to a table-filling problem and models their interdependencies; a simple approach of piggybacking candidate entities to model the label dependencies from relations to entities is shown to improve performance.
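Illustratively, the table can be laid out as follows, with entity tags on the diagonal and relation labels off the diagonal (the helper function and labels are made up for the example):

```python
# Table-filling layout for an n-token sentence: cell (i, i) holds the
# entity tag of token i, and cell (i, j) holds the relation between
# tokens i and j ("none" when no relation holds).

def build_table(n, entity_tags, relation_cells):
    table = [["none"] * n for _ in range(n)]
    for i, tag in enumerate(entity_tags):
        table[i][i] = tag                      # diagonal: entity labels
    for (i, j), rel in relation_cells.items():
        table[i][j] = rel                      # off-diagonal: relations
    return table

table = build_table(3, ["B-PER", "O", "B-ORG"], {(0, 2): "founder_of"})
for row in table:
    print(row)
```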
Adversarial training is demonstrated to be an effective regularization method, improving state-of-the-art results on several datasets from different contexts (news, biomedical, and real estate data) and for different languages (English and Dutch).
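A common way to realize this kind of adversarial regularization is a fast-gradient perturbation of the input embeddings. The PyTorch sketch below assumes a `model` that consumes embeddings directly; it is a generic recipe, not the paper's exact procedure:

```python
import torch

def adversarial_step(model, embeddings, labels, loss_fn, epsilon=1e-2):
    """One FGM-style step: clean loss plus loss on perturbed embeddings."""
    embeddings = embeddings.detach().requires_grad_(True)
    loss = loss_fn(model(embeddings), labels)
    # Gradient of the loss w.r.t. the embeddings (keep graph for reuse).
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    # Worst-case perturbation of norm epsilon (fast gradient method).
    perturbed = embeddings.detach() + epsilon * grad / (grad.norm() + 1e-12)
    adv_loss = loss_fn(model(perturbed), labels)
    return loss + adv_loss   # backprop this combined objective
```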
GraphRel, an end-to-end relation extraction model that uses graph convolutional networks (GCNs) to jointly learn named entities and relations, outperforms previous work by 3.2% and 5.8% on the NYT and WebNLG datasets and achieves a new state of the art for relation extraction.
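For reference, the core propagation step of a GCN layer of the kind such models stack looks roughly like this (a generic layer, not GraphRel's exact two-phase architecture):

```python
import torch

class GCNLayer(torch.nn.Module):
    """One graph convolution: aggregate neighbors, transform, activate."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: normalized n x n adjacency matrix; h: n x in_dim features
        return torch.relu(self.linear(adj @ h))

h = torch.randn(5, 16)                 # 5 tokens, 16-dim features
adj = torch.eye(5)                     # identity graph for the demo
out = GCNLayer(16, 16)(h, adj)
print(out.shape)                       # torch.Size([5, 16])
```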