3260 papers • 126 benchmarks • 313 datasets
Relation Extraction is the task of predicting attributes and relations for entities in a sentence. For example, given the sentence "Barack Obama was born in Honolulu, Hawaii.", a relation classifier aims at predicting the relation "bornInCity". Relation Extraction is a key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.

Source: Deep Residual Learning for Weakly-Supervised Relation Extraction
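To make the input/output structure of the task concrete, here is a minimal, hypothetical sketch in Python; the field names and character offsets are illustrative, not taken from any of the papers below.

```python
# Illustrative sketch (not from the cited paper): the typical input/output
# structure of sentence-level relation extraction.

from dataclasses import dataclass

@dataclass
class RelationExample:
    sentence: str   # raw text
    head: tuple     # (surface form, char start, char end) of the subject entity
    tail: tuple     # (surface form, char start, char end) of the object entity
    relation: str   # gold or predicted relation label

example = RelationExample(
    sentence="Barack Obama was born in Honolulu, Hawaii.",
    head=("Barack Obama", 0, 12),
    tail=("Honolulu", 25, 33),
    relation="bornInCity",
)

# A relation classifier maps (sentence, head span, tail span) -> relation label;
# the models below differ mainly in how they encode the sentence and the two spans.
print(example)
```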
These leaderboards are used to track progress in Relation Extraction.
Use these libraries to find Relation Extraction models and implementations.
This article introduces BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), a domain-specific language representation model pre-trained on large-scale biomedical corpora that largely outperforms BERT and previous state-of-the-art models on a variety of biomedical text mining tasks.
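As a rough illustration, the snippet below encodes a biomedical sentence with a BioBERT checkpoint through the Hugging Face transformers library; the checkpoint name `dmis-lab/biobert-base-cased-v1.1` and the example sentence are assumptions, not taken from the article.

```python
# Minimal sketch: encoding biomedical text with a BioBERT checkpoint.
# Assumes the Hugging Face `transformers` library and the community-hosted
# `dmis-lab/biobert-base-cased-v1.1` weights (checkpoint name is an assumption).

from transformers import AutoModel, AutoTokenizer

model_name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentence = "The BRCA1 mutation increases the risk of breast cancer."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)

# outputs.last_hidden_state holds contextual embeddings on which a task-specific
# head (e.g. a relation classifier over entity spans) would be trained.
print(outputs.last_hidden_state.shape)
```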
LayoutLM is proposed to jointly model interactions between text and layout information across scanned document images, which benefits many real-world document image understanding tasks such as information extraction from scanned documents.
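A hedged sketch of how text and layout can be fed jointly to LayoutLM, assuming the Hugging Face transformers implementation and the `microsoft/layoutlm-base-uncased` checkpoint; the words and bounding boxes are invented for illustration (coordinates are expected on a 0-1000 normalized page grid).

```python
# Minimal sketch: feeding words plus their page-layout bounding boxes to LayoutLM.

import torch
from transformers import LayoutLMModel, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "Date:", "2020-01-01"]                    # made-up example
boxes = [[60, 40, 200, 70], [60, 90, 150, 120], [160, 90, 320, 120]]

# Tokenize word by word so each sub-token inherits its word's bounding box.
token_ids, token_boxes = [], []
for word, box in zip(words, boxes):
    ids = tokenizer.encode(word, add_special_tokens=False)
    token_ids.extend(ids)
    token_boxes.extend([box] * len(ids))

# Add [CLS]/[SEP] with the conventional dummy boxes.
input_ids = [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id]
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

outputs = model(
    input_ids=torch.tensor([input_ids]),
    bbox=torch.tensor([token_boxes]),
)
print(outputs.last_hidden_state.shape)  # contextualized text+layout features
```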
This paper builds on extensions of Harris' distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task-agnostic relation representations solely from entity-linked text.
New pretrained contextualized representations of words and entities, based on the bidirectional transformer, are proposed, together with an entity-aware self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores.
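The sketch below is a simplified, single-head rendition of what such entity-aware self-attention could look like, with a separate query projection per (attending type, attended type) pair while keys and values are shared; it is an illustration under these assumptions, not the authors' implementation.

```python
# Illustrative sketch of entity-aware self-attention: the query projection
# depends on whether the attending and attended tokens are words or entities.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityAwareSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # One query matrix per pair: word->word, word->entity, entity->word, entity->entity.
        self.query = nn.ModuleDict({
            "ww": nn.Linear(dim, dim), "we": nn.Linear(dim, dim),
            "ew": nn.Linear(dim, dim), "ee": nn.Linear(dim, dim),
        })
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, hidden, is_entity):
        # hidden: (seq, dim); is_entity: (seq,) bool marking entity tokens.
        k, v = self.key(hidden), self.value(hidden)
        seq = hidden.size(0)
        scores = torch.empty(seq, seq)
        for i in range(seq):
            for j in range(seq):
                kind = ("e" if is_entity[i] else "w") + ("e" if is_entity[j] else "w")
                q = self.query[kind](hidden[i])
                scores[i, j] = (q @ k[j]) * self.scale
        return F.softmax(scores, dim=-1) @ v

attn = EntityAwareSelfAttention(dim=16)
hidden = torch.randn(5, 16)                      # 4 word tokens + 1 entity token
out = attn(hidden, torch.tensor([0, 0, 0, 0, 1], dtype=torch.bool))
print(out.shape)
```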
This paper successively removes nonlinearities and collapses the weight matrices between consecutive layers, then theoretically analyzes the resulting linear model and shows that it corresponds to a fixed low-pass filter followed by a linear classifier.
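A small worked sketch of this simplified graph convolution: propagate node features with the K-th power of the normalized adjacency (the fixed low-pass filter) and fit an ordinary linear classifier on top. The toy graph, features, and labels below are made up.

```python
# Illustrative sketch of the simplified (linear) graph convolution described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

def sgc_features(adj, feats, k=2):
    """S^K X with S = D^-1/2 (A + I) D^-1/2 (no nonlinearities, no learned propagation)."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    s = d_inv_sqrt @ a_hat @ d_inv_sqrt
    out = feats
    for _ in range(k):
        out = s @ out
    return out

# Toy graph: 4 nodes in a ring, random features, binary labels.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
feats = np.random.randn(4, 8)
labels = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(sgc_features(adj, feats), labels)
print(clf.predict(sgc_features(adj, feats)))
```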
The proposed joint neural model outperforms the previous neural models that use automatically extracted features, while it performs within a reasonable margin of feature-based neural models, or even beats them.
This paper proposes a model that leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task, achieving significant improvement over the state-of-the-art method on the SemEval-2010 Task 8 relational dataset.
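One common way to set up such entity-aware relation classification is sketched below: wrap the two target entities in marker characters, encode with a generic BERT checkpoint, pool each entity span, and classify the concatenated [CLS] and entity vectors. The marker characters, checkpoint, and label count are assumptions, and the classification head is untrained, so this is not the paper's released code.

```python
# Illustrative sketch: BERT encoding with entity markers and span pooling.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
num_relations = 19  # e.g. a SemEval-2010 Task 8 sized label set (assumption)
classifier = nn.Linear(3 * encoder.config.hidden_size, num_relations)

# Marker characters around the two target entities.
text = "$ Barack Obama $ was born in # Honolulu #, Hawaii."
enc = tokenizer(text, return_tensors="pt")
hidden = encoder(**enc).last_hidden_state[0]          # (seq_len, hidden)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
e1 = [i for i, t in enumerate(tokens) if t == "$"]    # span between the two "$"
e2 = [i for i, t in enumerate(tokens) if t == "#"]    # span between the two "#"

pooled = torch.cat([
    hidden[0],                                        # [CLS]
    hidden[e1[0] + 1:e1[1]].mean(dim=0),              # entity 1 average
    hidden[e2[0] + 1:e2[1]].mean(dim=0),              # entity 2 average
])
logits = classifier(pooled)                           # untrained head, shown for shape only
print(logits.shape)
```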
The approach extends BERT by masking contiguous random spans, rather than random tokens, and training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it.
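A minimal sketch of span masking as opposed to individual-token masking follows; span lengths here are drawn uniformly up to a small maximum for simplicity, which is an assumption rather than the paper's exact sampling scheme.

```python
# Illustrative sketch: mask contiguous random spans instead of random tokens.

import random

def mask_spans(tokens, mask_ratio=0.15, max_span=5, mask_token="[MASK]"):
    tokens = list(tokens)
    budget = max(1, int(len(tokens) * mask_ratio))
    masked = set()
    while len(masked) < budget:
        length = min(random.randint(1, max_span), budget - len(masked))
        start = random.randrange(0, len(tokens) - length + 1)
        span = range(start, start + length)
        if any(i in masked for i in span):
            continue
        masked.update(span)
    return [mask_token if i in masked else t for i, t in enumerate(tokens)], sorted(masked)

random.seed(0)
sentence = "Barack Obama was born in Honolulu Hawaii in 1961".split()
masked_tokens, positions = mask_spans(sentence)
print(masked_tokens)
# During pre-training, the span-boundary representations (the tokens just outside
# each masked span) are trained to reconstruct every token inside the span.
```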
Two novel word attention models for distantly-supervised relation extraction are proposed: a Bi-directional Gated Recurrent Unit (Bi-GRU) based word attention model, and a combination model that combines multiple complementary models using a weighted voting method for improved relation extraction.
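A hedged sketch of a bidirectional-GRU word-attention encoder of this general shape: attention weights over the GRU states produce a weighted sentence vector that feeds a relation classifier. All dimensions and names are illustrative, not the paper's configuration.

```python
# Illustrative sketch: Bi-GRU encoder with word-level attention for relation classification.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiGRUWordAttention(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, hidden=64, num_relations=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)          # scores each word
        self.out = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids):
        states, _ = self.gru(self.embed(token_ids))   # (batch, seq, 2*hidden)
        weights = F.softmax(self.attn(states), dim=1) # word-attention weights
        sentence = (weights * states).sum(dim=1)      # weighted sentence vector
        return self.out(sentence)

model = BiGRUWordAttention(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 12)))       # 2 toy sentences of 12 tokens
print(logits.shape)                                   # (2, num_relations)
```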