3260 papers • 126 benchmarks • 313 datasets
The goal of Relational Reasoning is to infer the relationships among different entities, such as image pixels, words or sentences, human skeletons, or interacting moving agents. Source: Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network
These leaderboards are used to track progress in Relational Reasoning.
Use these libraries to find Relational Reasoning models and implementations.
No subtasks available.
It is argued that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective.
A new approach for reasoning globally, in which a set of features is first aggregated over the coordinate space and then projected into an interaction space where relational reasoning can be computed efficiently.
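As a rough sketch of this aggregate–reason–redistribute idea (not the paper's implementation), features can be projected into a small interaction space, propagated over a relation graph, and projected back. The sizes, the random projection, and the uniform adjacency below are all placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

L, C, N = 64, 16, 8              # locations, channels, interaction-space nodes
X = rng.standard_normal((L, C))  # coordinate-space features
B = rng.standard_normal((N, L))  # projection (learned in practice, random here)

V = B @ X                # aggregate: N node features in the interaction space
A = np.ones((N, N)) / N  # toy fully-connected relation graph
V_reasoned = A @ V       # one step of relational propagation over the graph
Y = B.T @ V_reasoned     # redistribute back to the coordinate space

assert Y.shape == X.shape
```

Because N is much smaller than L, the pairwise reasoning step costs O(N²) rather than O(L²) over raw locations, which is the efficiency argument in the summary above.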
This work shows how a deep learning architecture equipped with a Relation Network (RN) module can implicitly discover and learn to reason about entities and their relations.
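The RN module computes a function of the form RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)), i.e. a shared function over all object pairs whose outputs are summed and then post-processed. A minimal sketch, with single random linear-plus-ReLU layers standing in for the learned MLPs g_θ and f_φ:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, h = 5, 4, 8                      # objects, object dim, hidden dim
objects = rng.standard_normal((n, d))

W_g = rng.standard_normal((2 * d, h))  # stand-in for g_theta (one layer)
W_f = rng.standard_normal((h, 1))      # stand-in for f_phi

def g_theta(oi, oj):
    # Shared pairwise function applied to a concatenated object pair.
    return np.maximum(np.concatenate([oi, oj]) @ W_g, 0.0)

# Sum g over all ordered object pairs, then apply f to the aggregate.
pair_sum = sum(g_theta(objects[i], objects[j])
               for i in range(n) for j in range(n))
rn_output = pair_sum @ W_f

assert rn_output.shape == (1,)
```

The summation over all pairs is what makes the module order-invariant with respect to the object set.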
A graph neural network based relation prediction framework, GraIL, that reasons over local subgraph structures and has a strong inductive bias to learn entity-independent relational semantics is proposed.
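A toy illustration of the subgraph-centric idea, assuming a hand-made adjacency list (GraIL's full pipeline also labels subgraph nodes and scores the link with a GNN): the enclosing subgraph for a candidate link can be taken as the intersection of the two endpoints' k-hop neighborhoods.

```python
from collections import deque

# Toy knowledge graph as an adjacency list (treated as undirected here).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def k_hop(node, k):
    """Return all nodes within k hops of `node` via BFS."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, depth = frontier.popleft()
        if depth == k:
            continue
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, depth + 1))
    return seen

# Enclosing subgraph for candidate link (0, 3): intersection of the two
# endpoints' 2-hop neighborhoods; GraIL reasons over this local structure.
enclosing = k_hop(0, 2) & k_hop(3, 2)
```

Because the score depends only on local structure, not on learned per-entity embeddings, the model can generalize inductively to entities unseen at training time.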
This work makes use of complex-valued embeddings to solve the link prediction problem through latent factorization, using the Hermitian dot product, the complex counterpart of the standard dot product between real vectors.
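A hand-checkable sketch of the scoring function, score(s, r, o) = Re(⟨e_s, w_r, conj(e_o)⟩), using made-up one-dimensional toy embeddings:

```python
import numpy as np

# Toy complex embeddings (dimension 1, so the result is easy to verify by hand).
e_s = np.array([1 + 1j])   # subject
w_r = np.array([0 + 1j])   # relation
e_o = np.array([1 + 0j])   # object

def score(s, r, o):
    # Hermitian dot product: conjugate the object side, keep the real part.
    return float(np.real(np.sum(s * r * np.conj(o))))

# Because w_r has a nonzero imaginary part, the score is antisymmetric here:
print(score(e_s, w_r, e_o))   # -1.0
print(score(e_o, w_r, e_s))   #  1.0
```

This asymmetry under swapping subject and object is the key advantage over purely real dot-product models, which can only represent symmetric relations.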
The recurrent relational network is introduced, a general purpose module that operates on a graph representation of objects that can augment any neural network model with the capacity to do many-step relational reasoning.
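A minimal message-passing sketch of the idea, with a toy graph and random single-layer maps standing in for the learned message and node-update functions; repeating the propagation step is what gives the module its many-step reasoning capacity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 6                               # nodes, state dimension
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # toy directed graph
edges += [(j, i) for i, j in edges]       # make message passing bidirectional

W_msg = rng.standard_normal((2 * d, d))   # stand-in for the message MLP
W_node = rng.standard_normal((2 * d, d))  # stand-in for the node-update MLP
h = rng.standard_normal((n, d))           # initial node states

# Several rounds of message passing = many-step relational reasoning.
for _ in range(3):
    msgs = np.zeros((n, d))
    for i, j in edges:
        # Message from node i to node j, computed from both endpoint states.
        msgs[j] += np.tanh(np.concatenate([h[i], h[j]]) @ W_msg)
    # Update each node state from its previous state and incoming messages.
    h = np.tanh(np.concatenate([h, msgs], axis=1) @ W_node)

assert h.shape == (n, d)
```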
This paper introduces an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales.
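The d-frame relation module sums a learned function over time-ordered d-frame tuples, and the multi-scale TRN combines such modules at several scales. A sketch with random linear-plus-ReLU layers as stand-ins for the learned functions (all sizes are illustrative):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, d, h = 6, 4, 8                        # frames, feature dim, hidden dim
frames = rng.standard_normal((T, d))     # per-frame features (e.g. CNN outputs)

def relation(scale, W):
    """Sum a stand-in g_theta over all time-ordered `scale`-frame tuples."""
    total = np.zeros(W.shape[1])
    # combinations() emits indices in ascending order, preserving frame order.
    for idx in itertools.combinations(range(T), scale):
        tup = np.concatenate([frames[i] for i in idx])
        total += np.maximum(tup @ W, 0.0)
    return total

# Multi-scale TRN: one relation module per scale, outputs combined by summing.
out = sum(relation(s, rng.standard_normal((s * d, h))) for s in (2, 3, 4))
assert out.shape == (h,)
```

Using ordered tuples (rather than unordered sets) is what lets the module capture temporal direction, e.g. distinguishing opening from closing.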
A diagnostic benchmark suite, named CLUTRR, is introduced to clarify some key issues related to the robustness and systematicity of NLU systems, and it highlights substantial performance gaps in state-of-the-art NLU models.
Holographic embeddings (HolE) are proposed to learn compositional vector space representations of entire knowledge graphs; HolE outperforms state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.
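HolE composes head and tail entity embeddings with circular correlation and scores the result against the relation embedding. A sketch using the FFT identity for circular correlation, with random toy embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
h, r, t = (rng.standard_normal(d) for _ in range(3))  # head, relation, tail

def circular_correlation(a, b):
    # [a * b]_k = sum_i a_i * b_{(i+k) mod d}, computed in O(d log d) via FFT.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

score = r @ circular_correlation(h, t)

# Sanity check against the direct O(d^2) definition.
direct = np.array([sum(h[i] * t[(i + k) % d] for i in range(d))
                   for k in range(d)])
assert np.allclose(circular_correlation(h, t), direct)
```

Unlike circular convolution, circular correlation is not commutative, so the composition can model asymmetric relations while keeping the embedding dimension fixed.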