3260 papers • 126 benchmarks • 313 datasets
Relation Classification is the task of identifying the semantic relation holding between two nominal entities in text. Source: Structure Regularized Neural Network for Entity Relation Classification for Chinese Literature Text
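The task definition above can be made concrete with a toy sketch. The sentence, entity spans, and relation label below are invented examples; the `<e1>`/`<e2>` entity-marker convention follows the SemEval-2010 Task 8 data format, which many of the models listed here consume.

```python
# Minimal illustration of the relation classification task: given a sentence
# and two marked entity spans, predict a semantic relation label.

def mark_entities(tokens, e1_span, e2_span):
    """Wrap the two entity spans with marker tokens, as many neural
    relation classifiers do before encoding the sentence."""
    out = []
    for i, tok in enumerate(tokens):
        if i == e1_span[0]:
            out.append("<e1>")
        if i == e2_span[0]:
            out.append("<e2>")
        out.append(tok)
        if i == e1_span[1]:
            out.append("</e1>")
        if i == e2_span[1]:
            out.append("</e2>")
    return out

tokens = "The machine pours the liquid into the container".split()
marked = mark_entities(tokens, e1_span=(4, 4), e2_span=(7, 7))
print(" ".join(marked))
# A classifier would map this marked sentence to a label such as
# Entity-Destination(e1,e2).
```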
These leaderboards are used to track progress in Relation Classification
Use these libraries to find Relation Classification models and implementations
This paper builds on extensions of Harris’ distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text.
New pretrained contextualized representations of words and entities based on the bidirectional transformer, and an entity-aware self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores are proposed.
This paper proposes a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task and achieves significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.
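The combination step this summary describes (pooled entity representations joined with the sentence-level vector before classification) can be sketched as follows. The shapes, the 19-way label space, and the random "hidden states" are stand-ins, not the paper's actual code; a real model would take these vectors from a pre-trained BERT encoder.

```python
import numpy as np

# Sketch of combining BERT's sentence vector with target-entity
# representations for relation classification (assumed shapes):
# average the hidden states over each entity span, concatenate with
# the [CLS] vector, then apply a linear classifier.

rng = np.random.default_rng(0)
hidden = rng.standard_normal((10, 8))   # (seq_len, hidden_dim), stand-in for BERT output
cls_vec = hidden[0]                     # [CLS] sentence representation
e1 = hidden[2:4].mean(axis=0)           # entity 1 spans tokens 2-3
e2 = hidden[6:8].mean(axis=0)           # entity 2 spans tokens 6-7

features = np.concatenate([cls_vec, e1, e2])      # (3 * hidden_dim,)
W = rng.standard_normal((19, features.shape[0]))  # 19 relation labels, SemEval-style
logits = W @ features
pred = int(np.argmax(logits))
print(pred)
```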
This work proposes a novel end-to-end recurrent neural model which incorporates an entity-aware attention mechanism with a latent entity typing (LET) method and demonstrates that the model outperforms existing state-of-the-art models without any high-level features.
This work uses gaze and EEG features to augment models of named entity recognition, relation classification, and sentiment analysis and shows the potential and current limitations of employing human language processing data for NLP.
This work uses various pretrained language models (i.e., BERT, XLNet, RoBERTa, SciBERT, and ALBERT) to solve each of the three subtasks of the DeftEval competition, and explores a multi-task architecture that was trained to jointly predict the outputs for the second and the third subtasks.
This paper exploits a convolutional deep neural network (DNN) to extract lexical and sentence level features from the output of pre-existing natural language processing systems and significantly outperforms the state-of-the-art methods.
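One ingredient of CNN-based relation classifiers of this kind is a position feature: each token's relative distance to the two entities, later embedded alongside the word vector. A minimal sketch, with invented indices:

```python
# Relative-position features for a CNN relation classifier: each token
# gets its signed distance to entity 1 and entity 2.

def relative_positions(seq_len, e1_idx, e2_idx):
    return [(i - e1_idx, i - e2_idx) for i in range(seq_len)]

print(relative_positions(6, e1_idx=1, e2_idx=4))
# i=0 gives (-1, -4); i=5 gives (4, 1); each pair would be mapped to
# learned position embeddings and concatenated with the word embedding.
```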
This work proposes a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes, shows that it is more effective than a CNN followed by a softmax classifier, and finds that using only word embeddings as input features is enough to achieve state-of-the-art results.
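A pairwise ranking loss of this family can be sketched as below. The margins and scaling factor are hypothetical values, not necessarily the paper's settings: the loss pushes the gold-class score up and the best competing score down, and for an artificial class (e.g. "Other") only the negative term would apply.

```python
import math

# Sketch of a pairwise ranking loss over per-class scores s(x, c).
# m_pos / m_neg are margins and gamma a scaling factor (assumed values).

def ranking_loss(scores, gold, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Penalize a low gold-class score and a high score for the best
    competing class."""
    s_pos = scores[gold]
    s_neg = max(s for c, s in enumerate(scores) if c != gold)
    return (math.log(1 + math.exp(gamma * (m_pos - s_pos)))
            + math.log(1 + math.exp(gamma * (m_neg + s_neg))))

# A higher gold-class score yields a lower loss:
print(ranking_loss([3.0, -1.0, 0.2], gold=0))
print(ranking_loss([1.0, -1.0, 0.2], gold=0))
```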
A novel end-to-end neural model extracts entities and the relations between them, and compares favorably to the state-of-the-art CNN-based model (in F1 score) on nominal relation classification (SemEval-2010 Task 8).
This work presents a systematic large-scale analysis of neural relation classification architectures on six benchmark datasets with widely varying characteristics, and proposes a novel multi-channel LSTM model combined with a CNN that takes advantage of all currently popular linguistic and architectural features.