Triple classification aims to judge whether a given triple (h, r, t) is correct or not with respect to the knowledge graph.
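A common way to perform triple classification is to score a candidate triple with a knowledge graph embedding model and accept it if the score beats a threshold tuned on validation data. The sketch below uses a TransE-style score; the toy embeddings and threshold are purely illustrative, not from any published model.

```python
import numpy as np

# Toy entity and relation embeddings (illustrative values only).
emb_e = {"Paris": np.array([0.9, 0.1]), "France": np.array([1.0, 1.0])}
emb_r = {"capital_of": np.array([0.1, 0.9])}

def score(h, r, t):
    # TransE assumption: a correct triple should satisfy h + r ≈ t,
    # so a smaller distance means a more plausible triple.
    return np.linalg.norm(emb_e[h] + emb_r[r] - emb_e[t])

def classify(h, r, t, threshold=0.5):
    # Triple classification: accept (h, r, t) when its distance is
    # below a (typically relation-specific) validation-tuned threshold.
    return score(h, r, t) < threshold

print(classify("Paris", "capital_of", "France"))  # True for these toy vectors
```

In practice, the threshold is chosen per relation to maximize validation accuracy, which is the standard protocol on WN11 and FB13.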
3260 papers • 126 benchmarks • 313 datasets

These leaderboards are used to track progress in knowledge graph completion.
Use these libraries to find knowledge graph completion models and implementations.
TransR is proposed to build entity and relation embeddings in separate entity and relation spaces, first projecting entities from the entity space into the corresponding relation space and then building translations between the projected entities.
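The TransR idea described above can be sketched as follows: entities live in an entity space, each relation has its own space plus a projection matrix M_r, and the translation h_r + r ≈ t_r is checked after projection. The dimensions and random values here are illustrative, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r = 4, 3                     # entity-space and relation-space dimensions
h = rng.normal(size=d_e)            # head entity embedding (entity space)
t = rng.normal(size=d_e)            # tail entity embedding (entity space)
r = rng.normal(size=d_r)            # relation embedding (relation space)
M_r = rng.normal(size=(d_r, d_e))   # relation-specific projection matrix

def transr_score(h, r, t, M_r):
    # Project both entities into the relation space, then measure how
    # well the translation h_r + r ≈ t_r holds (lower is better).
    h_r, t_r = M_r @ h, M_r @ t
    return np.linalg.norm(h_r + r - t_r)

print(transr_score(h, r, t, M_r))
```

Using a separate space per relation lets one entity sit close to different entities under different relations, which a single shared space cannot express.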
This work treats triples in knowledge graphs as textual sequences and proposes a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples.
This article conducts an extensive quantitative comparison and analysis of several typical KRL methods on three evaluation tasks of knowledge acquisition: knowledge graph completion, triple classification, and relation extraction.
The focus of this work is to create an explainable method that maintains a competitive predictive accuracy, and it is found that the method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet.
CoDEx is distinguished from the popular FB15K-237 knowledge graph completion dataset by showing that CoDEx covers more diverse and interpretable content, and is a more difficult link prediction benchmark.
Experimental results demonstrate that the proposed Image-embodied Knowledge Representation Learning models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of the models in learning knowledge representations with images.
A novel confidence-aware knowledge representation learning framework (CKRL) detects possible noises in KGs while simultaneously learning knowledge representations with confidence, and proposes three kinds of triple confidences considering both local and global structural information.
A novel embedding model is introduced that explores a relational memory network to encode potential dependencies in relationship triples and obtains state-of-the-art results on SEARCH17 for the search personalization task, and on WN11 and FB13 for the triple classification task.
A novel knowledge graph embedding model named TransC is proposed that differentiates concepts and instances by encoding each concept in a knowledge graph as a sphere and each instance as a vector in the same semantic space.
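The sphere-based encoding behind TransC can be sketched as a simple membership test: a concept is a sphere with center p and radius m, an instance is a vector in the same space, and instanceOf holds when the instance falls inside the sphere. The vectors and radius below are illustrative only.

```python
import numpy as np

concept_center = np.array([0.0, 0.0])  # sphere center p for a concept, e.g. "city"
concept_radius = 1.0                   # sphere radius m
instance = np.array([0.3, 0.4])        # instance vector, e.g. "Paris"

def instance_of(i, p, m):
    # instanceOf(i, c) is satisfied when ||i - p|| <= m,
    # i.e. the instance vector lies inside the concept sphere.
    return np.linalg.norm(i - p) <= m

print(instance_of(instance, concept_center, concept_radius))  # True: ||(0.3, 0.4)|| = 0.5 <= 1.0
```

Encoding subClassOf as sphere containment and instanceOf as point-in-sphere membership is what lets the model treat concepts and instances differently in one space.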
This work introduces conceptualization to CKG construction methods, i.e., viewing entities mentioned in text as instances of specific concepts or vice versa, and builds synthetic triples by conceptualization.
Adding a benchmark result helps the community track progress.