Training GNNs or generating graph embeddings requires graph samples.
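As a minimal illustration of the task (a generic sketch, not the method of any specific paper below), a uniform node sampler that extracts an induced subgraph from an adjacency list might look like:

```python
import random

def sample_induced_subgraph(adj, k, seed=None):
    """Uniformly sample k nodes and return the subgraph they induce.

    adj: dict mapping node -> set of neighbor nodes (undirected).
    Returns a dict in the same format, restricted to the sampled nodes.
    """
    rng = random.Random(seed)
    nodes = rng.sample(sorted(adj), k)   # k distinct nodes, uniformly
    keep = set(nodes)
    # Keep only edges whose both endpoints were sampled.
    return {u: adj[u] & keep for u in nodes}

# Toy 5-node path graph: 0-1-2-3-4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sub = sample_induced_subgraph(adj, 3, seed=0)
```

Real pipelines then run GNN training on `sub` instead of the full graph; the samplers surveyed below differ mainly in how they pick the node set.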
GraphSAINT, a graph-sampling-based inductive learning method, is proposed; it improves training efficiency in a fundamentally different way by decoupling the sampling process from the forward and backward propagation of training, and it is further extended with other graph samplers and GCN variants.
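GraphSAINT constructs each training minibatch from a sampled subgraph; one of its samplers draws edges and takes their endpoints as the node set. A simplified sketch of such an edge sampler (the paper's actual implementation also weights edges and computes normalization coefficients to keep the loss estimator unbiased, which is omitted here) could be:

```python
import random

def edge_sample_subgraph(edges, m, seed=None):
    """Sample m edges uniformly and return the induced training subgraph.

    edges: list of (u, v) tuples.
    Returns (node_set, sampled_edges); the node set is the union of
    the sampled edges' endpoints. Simplified: no edge weighting or
    loss normalization as in the full GraphSAINT method.
    """
    rng = random.Random(seed)
    chosen = rng.sample(edges, m)          # m distinct edges, uniformly
    nodes = {u for e in chosen for u in e}  # endpoints of sampled edges
    return nodes, chosen

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
nodes, sub_edges = edge_sample_subgraph(edges, 3, seed=42)
```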
The proposed HGT model consistently outperforms all state-of-the-art GNN baselines by 9–21% on various downstream tasks; the paper also introduces a heterogeneous mini-batch graph sampling algorithm, HGSampling, for efficient and scalable training.
This paper proposes a new, efficient and scalable graph deep learning architecture which sidesteps the need for graph sampling by using graph convolutional filters of different size that are amenable to efficient precomputation, allowing extremely fast training and inference.
This work finds that reversible connections in combination with deep network architectures enable the training of overparameterized GNNs that significantly outperform existing methods on multiple datasets.
This paper proposes novel parallelization techniques for graph-sampling-based GCNs that achieve superior scalable performance on very large graphs without compromising accuracy, and demonstrates that the parallel graph embedding outperforms state-of-the-art methods in scalability, efficiency and accuracy on several large datasets.
The proposed training simultaneously outperforms the state-of-the-art in scalability, efficiency and accuracy, and enables fast training of deeper GNNs, as demonstrated by orders-of-magnitude speedup compared to the TensorFlow implementation.
The best-performing methods are those based on random walks and "forest fire"; they accurately match both static and evolutionary graph patterns, with sample sizes down to about 15% of the original graph.
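The random-walk family of samplers mentioned above can be sketched in a few lines: start from a seed node, repeatedly hop to a random neighbor, and keep the set of visited nodes as the sample (restarts, "forest fire" burning probabilities, and other refinements are omitted):

```python
import random

def random_walk_sample(adj, start, walk_len, seed=None):
    """Return the set of nodes visited by a simple random walk.

    adj: dict mapping node -> set of neighbor nodes.
    A bare-bones sketch of random-walk-based graph sampling; the
    surveyed methods add restarts and stopping rules on top of this.
    """
    rng = random.Random(seed)
    node, visited = start, {start}
    for _ in range(walk_len):
        nbrs = sorted(adj[node])
        if not nbrs:          # dead end: stop the walk
            break
        node = rng.choice(nbrs)
        visited.add(node)
    return visited

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
nodes = random_walk_sample(adj, start=0, walk_len=10, seed=1)
```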
This work defines an empirical risk for relational data and obtains stochastic gradients for this empirical risk that are automatically unbiased by integrating fast implementations of graph sampling schemes with standard automatic differentiation tools, providing an efficient turnkey solver for the risk minimization problem.
This work proposes a reinforcement learning framework to discover effective network sampling heuristics by leveraging automatically learnt node and graph representations that encode important structural properties of the network.
Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge, estimates neighborhood similarity well even in more challenging cases and significantly outperforms 12 previous state-of-the-art methods.