1
Exploratory play, rational action, and efficient search
2
Deep Sets for Generalization in RL
3
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
4
Extending Machine Language Models toward Human-Level Language Understanding
5
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
6
Contextual Imagined Goals for Self-Supervised Robotic Learning
7
Emergent Systematic Generalization in a Situated Agent
8
Automated curricula through setter-solver interactions
9
Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following
10
Self-supervised Learning of Distance Functions for Goal-Conditioned Reinforcement Learning
11
Language as an Abstraction for Hierarchical Deep Reinforcement Learning
12
A Survey of Reinforcement Learning Informed by Natural Language
13
Good-Enough Compositional Data Augmentation
14
A Hitchhiker's Guide to Statistical Comparisons of Reinforcement Learning Algorithms
15
Skew-Fit: State-Covering Self-Supervised Reinforcement Learning
16
Using Natural Language for Reward Shaping in Reinforcement Learning
17
Multi-Object Representation Learning with Iterative Variational Inference
18
From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following
19
ACTRCE: Augmenting Experience via Teacher's Advice For Multi-Goal Reinforcement Learning
20
Go-Explore: a New Approach for Hard-Exploration Problems
21
On the Limitations of Representing Functions on Sets
22
MONet: Unsupervised Scene Decomposition and Representation
23
Vision-Based Navigation With Language-Based Assistance via Imitation Learning With Indirect Intervention
24
Off-Policy Deep Reinforcement Learning without Exploration
25
BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop
26
CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning
27
Guiding Policies with Language via Meta-Learning
28
Visual Reinforcement Learning with Imagined Goals
29
Curiosity Driven Exploration of Learned Disentangled Goal Spaces
30
Learning to Understand Goal Specifications by Modelling Reward
31
Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research
32
Unicorn: Continual Learning with a Universal, Off-policy Agent
33
What Is an Object File?
34
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning
35
Hindsight Experience Replay
36
Gated-Attention Architectures for Task-Oriented Language Grounding
37
Grounded Language Learning in a Simulated 3D World
38
Automatic Goal Generation for Reinforcement Learning Agents
39
Curiosity-Driven Exploration by Self-Supervised Prediction
41
Modular active curiosity-driven discovery of tool use
42
Unifying Count-Based Exploration and Intrinsic Motivation
43
The Psychology and Neuroscience of Curiosity
44
Continuous control with deep reinforcement learning
45
Universal Value Function Approximators
46
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
47
Developmental Robotics: From Babies to Robots
48
Adam: A Method for Stochastic Optimization
49
Recurrent Neural Network Regularization
50
Intrinsically Motivated Learning in Natural and Artificial Systems
51
Real-Time Parallel Processing of Grammatical Structure in the Fronto-Striatal System: A Recurrent Network Simulation Study Using Reservoir Computing
52
Active learning of inverse models with intrinsically motivated goal exploration in robots
53
Grounded Models of Semantic Representation
54
Learning to Interpret Natural Language Navigation Instructions from Observations
55
Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
56
Reading between the Lines: Learning to Map High-Level Instructions to Commands
57
A cognitive neuroscience perspective on embodied language for human–robot cooperation
58
Cognitive Developmental Robotics: A Survey
59
In Search of the Neural Circuits of Intrinsic Motivation
60
Semiotic Dynamics for Embodied Agents
61
Emergence of grammatical constructions: evidence from simulation and grounded agent experiments
62
Constructing a Language
63
Intrinsically Motivated Reinforcement Learning
64
Frequent frames as a cue for grammatical categories in child directed speech
65
Development of object concepts in infancy: Evidence for early learning in an eye-tracking paradigm
66
Constructions: a new theoretical approach to language
67
Grounding language in action
68
The item-based nature of children’s early syntactic development
71
Twenty-Three-Month-Old Children Have a Grammatical Category of Noun.
73
Maternal responsiveness to infants in three societies: the United States, France, and Japan.
74
The Narrative Construction of Reality
75
The Language and Thought of the Child
76
Language Models are Unsupervised Multitask Learners
77
Systematic generalization: What is required and can it be learned? In ICLR
78
Tool and symbol in child development
79
Embodied Sentence Comprehension
80
The scientist in the crib : minds, brains, and how children learn
81
Vygotsky. Tool and Symbol in Child Development. In Mind in Society, chapter Tool and Symbol in Child Development, pages 19–30
82
The British Journal for the Philosophy of Science
84
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION 1 Intrinsic Motivation Systems for Autonomous Mental Development
85
Lifeways in the Great Basin
86
Playground : a procedurally generated environment designed to study several types of generalization (across predicates, attributes, object types and categories)
87
The state-action trajectories are stored in mem (Π)
88
Modular policy and reward function architectures combined with attention mechanisms