3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
These leaderboards are used to track progress in Link Property Prediction.
Use these libraries to find Link Property Prediction models and implementations.
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
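GraphSAGE's core idea is to build a node's embedding inductively from its own features plus a sampled set of neighbor features, so unseen nodes can be embedded without retraining. A minimal sketch of one mean-aggregation layer, omitting the learned weight matrices and nonlinearity of the real method:

```python
import random

def sage_mean_layer(features, neighbors, num_samples=2, seed=0):
    """One GraphSAGE-style layer: each node's new embedding is the
    mean of its own features and a sampled subset of neighbor features.
    (Illustrative only; real GraphSAGE also applies trained weight
    matrices and an activation function.)"""
    rng = random.Random(seed)
    updated = {}
    for node, feat in features.items():
        sampled = neighbors[node]
        if len(sampled) > num_samples:
            sampled = rng.sample(sampled, num_samples)
        pooled = [features[n] for n in sampled] + [feat]
        dim = len(feat)
        updated[node] = [sum(v[i] for v in pooled) / len(pooled) for i in range(dim)]
    return updated

# Toy graph: per-node feature vectors and adjacency lists.
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
adj = {0: [1, 2], 1: [0], 2: [0]}
out = sage_mean_layer(feats, adj)
```

Stacking several such layers lets each node's embedding reflect a multi-hop neighborhood.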
Results on link prediction and triplet classification show that the scoring functions (SFs) found by AutoSF are KG-dependent, new to the literature, and outperform state-of-the-art SFs designed by humans.
It is proved that, with the labeling trick, a sufficiently expressive GNN learns the most expressive node-set representations and thus in principle solves any joint learning task over node sets.
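The labeling trick works by tagging every node in the subgraph with its structural relation to the target node pair before the GNN runs, so the GNN can tell the target pair apart from other pairs. A minimal sketch using distances to the two endpoints (in the spirit of SEAL-style distance labels):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path hop distances from `source` via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def label_nodes(adj, u, v):
    """Labeling trick: tag every node with its distances to the two
    target nodes (u, v). These labels are fed to the GNN as extra
    features so that representations become pair-specific."""
    du, dv = bfs_distances(adj, u), bfs_distances(adj, v)
    inf = float("inf")
    return {n: (du.get(n, inf), dv.get(n, inf)) for n in adj}

# Path graph 0 - 1 - 2 - 3; label with respect to target pair (0, 3).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = label_nodes(adj, 0, 3)
```

Without such labels, a plain GNN gives every automorphic node the same representation and cannot distinguish different target pairs.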
The framework treats link prediction as a pairwise learning-to-rank problem and consists of four main components: a neighborhood encoder, a link predictor, a negative sampler, and an objective function. The framework is flexible in that any generic graph neural convolution or link-prediction-specific neural architecture can serve as the neighborhood encoder.
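In a pairwise learning-to-rank setup, the objective pushes a true link's score above a sampled negative link's score. A minimal sketch of a BPR-style pairwise loss over dot-product scores (the embeddings and the choice of BPR here are illustrative assumptions, not the paper's exact formulation):

```python
import math

def bpr_loss(score_pos, score_neg):
    """Pairwise ranking loss: -log sigmoid(s_pos - s_neg).
    Small when the positive link outscores the negative one."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_pos - score_neg))))

def dot(a, b):
    """Dot-product link scorer over node embeddings."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical node embeddings; a negative sampler would pick a non-edge,
# here (0, 2), to contrast with the observed edge (0, 1).
emb = {0: [0.9, 0.1], 1: [0.8, 0.2], 2: [0.0, 1.0]}
pos = bpr_loss(dot(emb[0], emb[1]), dot(emb[0], emb[2]))
```

The neighborhood encoder's job is to produce the embeddings; the loss only cares about the relative ordering of the two scores.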
An anchor-based distance is proposed that brings significant improvement for link prediction with only a few additional parameters, achieving state-of-the-art results on the drug-drug-interaction and protein-protein-association tasks of OGB.
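The idea behind anchor-based features is to describe each node by its distances to a small fixed set of anchor nodes, giving a cheap positional signal that a link predictor can compare across node pairs. A minimal sketch, assuming unweighted BFS hop distances (the exact distance and anchor-selection scheme in the paper may differ):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest-path hop distances from `source` via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def anchor_features(adj, anchors):
    """Anchor-based positional features: each node is represented by
    its vector of distances to the anchor nodes. Only len(anchors)
    extra dimensions are added, keeping the parameter cost small."""
    dists = [bfs_distances(adj, a) for a in anchors]
    inf = float("inf")
    return {n: [d.get(n, inf) for d in dists] for n in adj}

# Path graph 0 - 1 - 2 - 3 with anchors at both ends.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
feats = anchor_features(adj, anchors=[0, 3])
```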
This work proposes a novel KGE method named Graph Feature Attentive Neural Network (GFA-NN) that computes graphical features of entities and achieves on-par or better results than state-of-the-art KGE solutions.
The Neural Bellman-Ford Network (NBFNet) is proposed, a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm and outperforms existing methods by a large margin in both transductive and inductive settings.
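The generalized Bellman-Ford view is an iteration that repeatedly aggregates messages sent along edges; with (min, +) as the aggregate and message operators it recovers shortest paths, and NBFNet replaces these with learned operators. A minimal sketch of the classical (min, +) instance:

```python
def bellman_ford_paths(edges, nodes, source, num_iters):
    """Generalized Bellman-Ford iteration with (min, +) operators:
    each round, every edge (u, v, w) sends the message dist[u] + w,
    and node v aggregates messages with min. NBFNet substitutes
    learned message and aggregation functions for (+, min)."""
    inf = float("inf")
    dist = {n: (0.0 if n == source else inf) for n in nodes}
    for _ in range(num_iters):
        new = dict(dist)
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
        dist = new
    return dist

# Triangle where the two-hop path 0 -> 1 -> 2 beats the direct edge.
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
dist = bellman_ford_paths(edges, nodes=[0, 1, 2], source=0, num_iters=2)
```

Running the iteration for k rounds propagates path information up to k hops, which is why the formulation handles inductive settings: it depends on paths, not on node identities.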
A new self-supervised training objective for multi-relational graph representation learning is proposed: simply incorporating relation prediction into the commonly used 1vsAll objective. It is especially effective on highly multi-relational datasets, i.e., datasets with a large number of predicates.
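The combined objective adds a relation-classification term to the usual tail-prediction term. A minimal sketch with softmax cross-entropy over toy candidate scores; the `alpha` weighting and score values are illustrative assumptions, not the paper's exact setup:

```python
import math

def softmax_nll(scores, target):
    """Negative log-likelihood of `target` under a softmax over scores."""
    m = max(scores)
    z = sum(math.exp(s - m) for s in scores)
    return -(scores[target] - m - math.log(z))

def combined_loss(tail_scores, true_tail, rel_scores, true_rel, alpha=0.5):
    """1vsAll tail prediction plus an auxiliary relation-prediction
    term: given (h, r, ?) score all candidate tails, and additionally
    predict r from (h, t). `alpha` is a hypothetical mixing weight."""
    return softmax_nll(tail_scores, true_tail) + alpha * softmax_nll(rel_scores, true_rel)

# Toy scores: three candidate tails, two candidate relations.
loss = combined_loss([2.0, 0.1, -1.0], true_tail=0,
                     rel_scores=[1.5, 0.5], true_rel=0)
```

The auxiliary term is cheap to add because the same entity embeddings are reused; it mainly matters when there are many relations to discriminate.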
VQ-GNN, a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance, is proposed and it is shown that such a compact low-rank version of the gigantic convolution matrix is sufficient both theoretically and experimentally.
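Vector quantization compresses a large set of vectors into a small codebook plus integer codes: each vector is replaced by the index of its nearest codebook entry. A minimal sketch of the assignment step (the codebook here is fixed by hand; VQ-GNN learns it during training):

```python
def quantize(features, codebook):
    """Vector quantization: map each feature vector to the index of its
    nearest codebook entry by squared Euclidean distance. A gigantic
    matrix can then be stored as a small codebook plus integer codes."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sqdist(f, codebook[k]))
            for f in features]

# Two-entry codebook; each input snaps to its nearest centroid.
codebook = [[0.0, 0.0], [1.0, 1.0]]
codes = quantize([[0.1, 0.2], [0.9, 1.1], [0.0, 0.1]], codebook)
```

This is the sense in which the convolution matrix becomes "compact low-rank": message passing operates on the few codebook vectors instead of all node vectors.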
The resulting formalism, Digraph Hyperbolic Networks (D-HYPR), albeit conceptually simple, generalizes to digraphs where cycles and non-transitive relations are common, and is applicable to multiple downstream tasks including node classification, link presence prediction, and link property prediction.
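Hyperbolic models like D-HYPR embed nodes in a space where distances grow rapidly toward the boundary, which suits hierarchy-like structure. A minimal sketch of the standard Poincaré-ball distance (the standard formula for this geometry; D-HYPR's full operations, such as its handling of edge direction, go well beyond this):

```python
import math

def poincare_distance(x, y):
    """Geodesic distance between two points in the Poincare ball:
    d(x, y) = arccosh(1 + 2*|x-y|^2 / ((1-|x|^2) * (1-|y|^2))).
    Distances blow up as points approach the unit-ball boundary."""
    def sq_norm(v):
        return sum(c * c for c in v)
    diff = sq_norm([a - b for a, b in zip(x, y)])
    denom = (1.0 - sq_norm(x)) * (1.0 - sq_norm(y))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

Near the origin this behaves like Euclidean distance, but the same Euclidean displacement near the boundary corresponds to a much larger hyperbolic distance, giving exponentially more "room" for tree-like branching.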