Additional experimental details

We generate the dataset in an online manner, providing the model with an infinite source of training data, to avoid overreliance on any particular fixed-size dataset (unlike the original, fixed CLRS-30 dataset). For a given algorithm A and problem size n, a training example is sampled as follows:

• Generate an input, represented as a graph with n nodes, and with input node, edge and graph features sampled to match the algorithm's spec.
• If the task is a graph algorithm, choose a connection probability p; then, for every pair of nodes (u, v), decide whether to connect them with an edge by sampling e_uv ∼ Bernoulli(p).
• If the task is a string algorithm, choose a pattern length 1 ≤ m ≤ ⌊n/2⌋ at random, then use the first n − m nodes to represent the string to be searched (the haystack) and the remaining m nodes to represent the pattern (the needle). We vary the needle length m to avoid overreliance on specific needle/haystack boundaries in string matching, in contrast with the original CLRS-30 dataset.
• Execute A on the resulting input, recording intermediate states, to obtain the training trajectory.

We use an embedding size h = 128 across all experiments. We train in batches of size 32 using an Adam optimizer.
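The input-sampling steps above can be sketched in plain Python. This is a minimal illustration, not the actual CLRS codebase API: the function names, the edge-set representation, and the two-letter alphabet are all assumptions made for the example.

```python
import random

def sample_er_graph(n, p, rng):
    """Sample Erdos-Renyi edges: connect each pair (u, v) with probability p."""
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:  # e_uv ~ Bernoulli(p)
                edges.add((u, v))
    return edges

def sample_string_instance(n, alphabet, rng):
    """Sample a string-matching instance with a varied needle length m."""
    m = rng.randint(1, n // 2)                               # 1 <= m <= floor(n/2)
    haystack = [rng.choice(alphabet) for _ in range(n - m)]  # first n - m nodes
    needle = [rng.choice(alphabet) for _ in range(m)]        # remaining m nodes
    return haystack, needle

rng = random.Random(0)
edges = sample_er_graph(16, 0.5, rng)           # graph-algorithm input
haystack, needle = sample_string_instance(16, "ab", rng)  # string-algorithm input
```

Executing the target algorithm on such an input while recording its intermediate states would then yield one training trajectory.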
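The optimizer used for training is Adam. As a self-contained reminder of what one Adam update does, here is a minimal scalar sketch; the learning rate and moment decay rates below are the algorithm's common defaults, chosen for illustration (the text does not state the values used).

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    m and v are running estimates of the gradient's first and second
    moments; t is the 1-based step count, used for bias correction.
    """
    m = b1 * m + (1 - b1) * grad             # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because the per-coordinate step is normalized by the second-moment estimate, Adam moves roughly lr per step while the gradient sign is consistent, which is why it is a robust default for training these models.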