3260 papers • 126 benchmarks • 313 datasets
Entity Alignment is the task of finding entities in two knowledge bases that refer to the same real-world object. It plays a vital role in automatically integrating multiple knowledge bases.

Note: results that incorporate machine-translated entity names (introduced in the RDGCN paper) or pre-alignment name embeddings are considered to have used extra training labels (both are marked with "Extra Training Data" in the leaderboard) and do not adhere to a setting comparable with methods that follow the original benchmark setting.

Source: Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding

Entity alignment is related to entity resolution, which focuses on matching structured entity descriptions in different contexts.
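Most embedding-based approaches to this task reduce it to nearest-neighbor search: entities from both knowledge bases are embedded into a shared vector space, and each entity in one KG is matched to its most similar counterpart in the other. A minimal sketch, with hypothetical toy embeddings (the function name and all values are illustrative, not from any specific paper):

```python
import numpy as np

def align_entities(emb_a, emb_b):
    """Match each entity in KG A to its nearest neighbor in KG B
    by cosine similarity over a shared embedding space."""
    # Normalize rows so that the dot product equals cosine similarity.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T               # (|A|, |B|) similarity matrix
    return sim.argmax(axis=1)   # best match in B for each entity in A

# Toy example: three entities per KG in a shared 2-d space.
emb_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
emb_b = np.array([[0.9, 0.1], [1.1, 0.9], [0.1, 1.0]])
print(align_entities(emb_a, emb_b))   # [0 2 1]
```

Real systems differ in how the embeddings are learned (translation models, GNNs, multi-modal encoders) and in how the similarity matrix is decoded into alignments, which is what the papers below vary.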
These leaderboards are used to track progress in Entity Alignment.
Use these libraries to find Entity Alignment models and implementations.
MTransE, a translation-based model for multilingual knowledge graph embeddings, is proposed as a simple and automated solution for cross-lingual knowledge alignment; the work also explores how MTransE preserves the key properties of its monolingual counterpart.
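Translation-based models score a triple (h, r, t) by how close h + r lands to t; for the cross-lingual case, one MTransE variant additionally learns a linear transform mapping the source embedding space onto the target space. A minimal sketch of both scoring functions, with hypothetical 2-d values:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE energy within one KG: low when h + r is close to t."""
    return float(np.linalg.norm(h + r - t))

def cross_lingual_score(e_src, e_tgt, M):
    """Cross-lingual score in the style of MTransE's linear-transform
    variant: a learned matrix M maps the source embedding space onto
    the target space, so aligned entity pairs score low."""
    return float(np.linalg.norm(M @ e_src - e_tgt))

# Toy 2-d example (all values hypothetical).
h, r, t = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
print(transe_score(h, r, t))                 # 0.0: the triple (h, r, t) holds exactly
print(cross_lingual_score(t, t, np.eye(2)))  # 0.0: identical embeddings under identity map
```

In training, both scores are minimized jointly over monolingual triples and seed alignment pairs, so the transform generalizes to unseen entities.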

This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs that scales well to large, real-world inputs while still being able to recover global correspondences consistently.
This paper abstracts existing entity alignment methods into a unified framework, Shape-Builder & Alignment, which not only successfully explains the above phenomena but also derives two key criteria for an ideal transformation operation.
This work shows that images are particularly useful to align long-tail KG entities, which inherently lack the structural contexts necessary for capturing the correspondences, and provides EVA, a completely unsupervised solution to this problem.
This paper proposes a novel framework based on Relation-aware Graph Attention Networks to capture the interactions between entities and relations and proposes a global alignment algorithm to make one-to-one entity alignments with a fine-grained similarity matrix.
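Decoding a similarity matrix greedily (each entity picks its own best match) can assign two entities to the same target; a global one-to-one algorithm avoids this by maximizing total similarity over a full assignment. The paper's own algorithm is not reproduced here; this is a generic brute-force sketch of the idea on a tiny hypothetical matrix (practical systems use the Hungarian algorithm or a relaxation):

```python
from itertools import permutations

def one_to_one_align(sim):
    """Exhaustively pick the one-to-one assignment that maximizes total
    similarity. Brute force is only feasible for tiny matrices; it is
    shown here purely to illustrate global vs. greedy matching."""
    n = len(sim)
    best = max(permutations(range(n)),
               key=lambda p: sum(sim[i][p[i]] for i in range(n)))
    return list(enumerate(best))

# Rows 0 and 1 both prefer column 0 under greedy per-row matching;
# the global one-to-one assignment resolves the conflict.
sim = [[0.9, 0.6, 0.1],
       [0.8, 0.7, 0.2],
       [0.1, 0.3, 0.9]]
print(one_to_one_align(sim))   # [(0, 0), (1, 1), (2, 2)]
```

Here the identity assignment totals 2.5, beating the greedy-conflicting alternative (0, 1), (1, 0), (2, 2), which totals 2.3.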
ClusterEA is presented, a general framework for scaling up EA models and enhancing their results by applying normalization methods to mini-batches with a high entity-equivalence rate; it contains three components for aligning entities between large-scale KGs: stochastic training, ClusterSampler, and SparseFusion.
It is argued that existing complex EA methods inevitably inherit the inborn defects of their neural-network lineage, poor interpretability and weak scalability, and a neural-free EA framework, LightEA, is proposed, consisting of three efficient components: Random Orthogonal Label Generation, Three-view Label Propagation, and Sparse Sinkhorn Operation.
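A Sinkhorn operation turns a raw similarity matrix into a soft one-to-one alignment by alternately normalizing rows and columns until the matrix is approximately doubly stochastic. LightEA's component is a sparse variant; the dense textbook version sketched below (with a hypothetical similarity matrix) only illustrates the idea:

```python
import numpy as np

def sinkhorn(sim, n_iters=50):
    """Alternate row and column normalization drives exp(sim) toward a
    doubly-stochastic matrix, i.e. a soft one-to-one alignment."""
    p = np.exp(sim)                         # positive entries
    for _ in range(n_iters):
        p /= p.sum(axis=1, keepdims=True)   # each row sums to 1
        p /= p.sum(axis=0, keepdims=True)   # each column sums to 1
    return p

# Hypothetical 3x3 similarity matrix; strong diagonal signal.
sim = np.array([[2.0, 0.1, 0.0],
                [0.2, 1.5, 0.1],
                [0.0, 0.3, 2.2]])
p = sinkhorn(sim)
```

After convergence, each row and column of `p` sums to (approximately) 1, so the matrix can be read as a soft assignment and hardened by taking the arg-max per row.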
This paper presents the first framework that can generate explanations for understanding and repairing embedding-based EA results by constructing an alignment dependency graph and resolving three types of alignment conflicts based on dependency graphs.
The experimental results on real-world datasets show that this approach significantly outperforms the state-of-the-art embedding approaches for cross-lingual entity alignment and could be complemented with methods based on machine translation.
This paper presents a novel approach for entity alignment via joint knowledge embeddings that jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities.