3260 papers • 126 benchmarks • 313 datasets
Continual Learning (also known as Incremental Learning or Lifelong Learning) is the problem of learning a model on a large number of tasks sequentially, without forgetting knowledge obtained from the preceding tasks, when data from the old tasks is no longer available while training on new ones. Unless stated otherwise, the benchmarks here are Task-CL (task-incremental), where the task-id is provided at evaluation. Sources: Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation; Three scenarios for continual learning; Lifelong Machine Learning; Continual lifelong learning with neural networks: A review.
(Image credit: Papersgraph)
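The sequential, data-restricted protocol defined above can be sketched with a toy learner (a hypothetical nearest-class-mean classifier, not any specific published method): tasks arrive one at a time, each task's data is discarded after training, and in the Task-CL setting the task-id selects the right output head at evaluation.

```python
# Minimal sketch of the Task-CL protocol (toy learner; names are illustrative).
import numpy as np

rng = np.random.default_rng(0)

class NearestMeanLearner:
    """Toy per-task classifier: stores one class-mean prototype per class, per task."""
    def __init__(self):
        self.heads = {}  # task_id -> {label: prototype vector}

    def train_task(self, task_id, X, y):
        # Only the current task's data is available here (continual setting).
        self.heads[task_id] = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(self, task_id, x):
        # Task-CL: the task-id picks the matching head at test time.
        protos = self.heads[task_id]
        return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))

# Two toy 2-D tasks with well-separated class means.
tasks = {
    0: (np.vstack([rng.normal(0, .1, (20, 2)), rng.normal(3, .1, (20, 2))]),
        np.array([0] * 20 + [1] * 20)),
    1: (np.vstack([rng.normal(-3, .1, (20, 2)), rng.normal(6, .1, (20, 2))]),
        np.array([0] * 20 + [1] * 20)),
}

model = NearestMeanLearner()
for tid, (X, y) in tasks.items():
    model.train_task(tid, X, y)   # earlier tasks' data is never revisited

print(model.predict(0, np.array([3.1, 2.9])))    # → 1 (near task 0's class-1 mean)
print(model.predict(1, np.array([-2.9, -3.1])))  # → 0 (near task 1's class-0 mean)
```

A trivial learner like this does not forget because its per-task heads never interact; the methods below address forgetting in shared-parameter neural networks, where new-task training does overwrite old knowledge.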
These leaderboards are used to track progress in Continual Learning.
Use these libraries to find Continual Learning models and implementations.
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
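The "selective slowing" in this line (elastic weight consolidation) is usually realized as a quadratic penalty that anchors each weight to its old-task value, scaled by a per-weight importance estimate such as the diagonal Fisher information. The sketch below assumes a precomputed importance vector; the names and values are illustrative, not the paper's code.

```python
# Sketch of the quadratic anchoring penalty behind selective slowing (EWC-style).
# fisher_diag is assumed precomputed on the previous task; values are illustrative.
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """(lam / 2) * sum_i F_i * (theta_i - theta*_i)^2, added to the new task's loss."""
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])   # weights learned on the old task
fisher     = np.array([10.0, 0.1, 1.0])   # importance: the first weight matters most
theta      = np.array([1.5, -1.0, 0.5])   # candidate weights while training the new task

penalty = ewc_penalty(theta, theta_star, fisher, lam=2.0)
print(penalty)
```

Because the first weight has high importance, moving it by 0.5 costs far more penalty than moving the second weight by 1.0, so gradient descent on the total loss preferentially adjusts unimportant weights.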
This work evaluates the progressive networks architecture extensively, showing that it outperforms common baselines based on pretraining and finetuning, and demonstrates that transfer occurs at both the low-level sensory and high-level control layers of the learned policy.
This work proposes the Learning without Forgetting method, which uses only new-task data to train the network while preserving its original capabilities, and which performs favorably compared to commonly used feature-extraction and fine-tuning adaptation techniques.
Variational continual learning is developed: a simple but general framework that fuses online variational inference with recent advances in Monte Carlo VI for neural networks, and that outperforms state-of-the-art continual learning methods.
This work empirically analyzes the effectiveness of a very small episodic memory in a CL setup where each training example is seen only once, and finds that repeated training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it.
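A tiny episodic memory over a stream where each example is seen once is commonly maintained with reservoir sampling, which keeps a uniform random sample of everything seen so far in fixed space. This is a generic sketch of that mechanism, not the paper's implementation; class and method names are hypothetical.

```python
# Tiny episodic memory via reservoir sampling: after n examples, each one has
# probability capacity/n of being in the buffer. Replay draws mini-batches
# from this buffer alongside new-task data.
import random

class EpisodicMemory:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.n_seen)  # uniform over all examples seen
            if j < self.capacity:
                self.buffer[j] = example         # evict a random stored example

    def sample(self, k):
        # Mini-batch of past examples to replay during new-task updates.
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

mem = EpisodicMemory(capacity=5)
for i in range(1000):   # a long stream, each example seen exactly once
    mem.add(i)
print(len(mem.buffer), mem.sample(3))
```

The buffer stays at 5 entries no matter how long the stream runs, matching the "very small episodic memory" regime the paper studies.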
Three continual learning scenarios are described based on whether task identity is provided at test time and, if it is not, whether it must be inferred; it is found that regularization-based approaches fail in the hardest scenario and that replaying representations of previous experiences seems required to solve it.
It is shown that it is possible to learn naturally sparse representations that are more effective for online updating, and that a basic online updating strategy on representations learned by OML is competitive with rehearsal-based methods for continual learning.
This study introduces intelligent synapses that bring some of the biological complexity of real synapses into artificial neural networks, and shows that they dramatically reduce forgetting while maintaining computational efficiency.
Insight is provided into the structure of low-dimensional task-embedding spaces (the input space of the hypernetwork), and it is shown that task-conditioned hypernetworks exhibit transfer learning.
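The task-conditioned hypernetwork mentioned here maps a learned low-dimensional task embedding to the weights of a main network, so only one embedding per task needs protecting from forgetting. The sketch below uses a linear hypernetwork with NumPy; the shapes and names are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a task-conditioned hypernetwork: a linear map turns a
# low-dimensional task embedding into the weight matrix of a main network.
import numpy as np

rng = np.random.default_rng(0)

emb_dim, in_dim, out_dim = 4, 8, 3
task_embeddings = rng.normal(size=(2, emb_dim))          # one learned embedding per task
H = rng.normal(size=(emb_dim, in_dim * out_dim)) * 0.1   # shared hypernetwork weights

def main_net_weights(task_id):
    # The hypernetwork generates the main net's weights from the task embedding.
    return (task_embeddings[task_id] @ H).reshape(in_dim, out_dim)

def forward(task_id, x):
    # A one-layer "main network" whose weights are produced per task.
    return x @ main_net_weights(task_id)

x = rng.normal(size=(5, in_dim))
print(forward(0, x).shape)   # (5, 3); different task_ids yield different weights
```

Because the hypernetwork `H` is shared, nearby task embeddings produce similar main-network weights, which is one way transfer between tasks can arise in this setup.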