Incremental learning of a sequence of tasks when the task-ID is not available at test time.
These leaderboards are used to track progress in Class-Incremental Learning.
Use these libraries to find Class-Incremental Learning models and implementations.
It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
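A minimal sketch of that penalty (in the spirit of elastic weight consolidation) is shown below; `old_params` and `fisher` are assumed dictionaries holding parameter snapshots and diagonal Fisher estimates saved after the previous task, and the strength `lam` is illustrative.

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """Quadratic penalty that selectively slows learning on weights
    deemed important (high Fisher information) for previous tasks."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```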
A novel training methodology is proposed that consistently outperforms cross-entropy on supervised learning tasks across different architectures and data augmentations; it modifies the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting.
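As a rough illustration of such a batch contrastive loss, here is a simplified single-view sketch (not the paper's exact formulation); `features` are assumed to be L2-normalized embeddings of shape (N, D) and the temperature is illustrative.

```python
import torch

def sup_con_loss(features, labels, temperature=0.07):
    """Simplified supervised contrastive loss: pull together embeddings
    that share a label, push apart the rest (single view per sample)."""
    logits = features @ features.t() / temperature          # (N, N)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=logits.device)
    logits = logits.masked_fill(self_mask, float('-inf'))   # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)         # avoid -inf * 0
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # average log-probability over each anchor's positives
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```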
iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail, which distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures.
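The classifier at the heart of this approach is a nearest-mean-of-exemplars rule; a minimal sketch, assuming an `embed` feature extractor and a dict `exemplars` mapping class ids to stored exemplar batches (both illustrative):

```python
import torch
import torch.nn.functional as F

def classify(x, embed, exemplars):
    """Assign each input to the class whose exemplar-mean prototype
    is nearest in (normalized) feature space."""
    classes = list(exemplars.keys())
    protos = torch.stack([F.normalize(embed(ex).mean(0), dim=0)
                          for ex in exemplars.values()])    # (C, D)
    feats = F.normalize(embed(x), dim=1)                    # (N, D)
    nearest = (feats @ protos.t()).argmax(1)                # cosine similarity
    return torch.tensor([classes[i] for i in nearest.tolist()])
```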
This work proposes the Learning without Forgetting method, which uses only new-task data to train the network while preserving its original capabilities, and which performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
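A minimal sketch of such an objective, assuming the old-task outputs occupy the first columns of `new_logits` and `old_logits` come from a frozen copy of the network recorded before training on the new task; temperature and weighting are illustrative:

```python
import torch.nn.functional as F

def lwf_loss(new_logits, old_logits, targets, T=2.0, alpha=1.0):
    """Cross-entropy on the new task plus a distillation term that keeps
    responses on the old-task outputs close to the original network's."""
    k = old_logits.size(1)                        # number of old outputs
    ce = F.cross_entropy(new_logits, targets)     # new-task supervision
    distill = F.kl_div(F.log_softmax(new_logits[:, :k] / T, dim=1),
                       F.softmax(old_logits / T, dim=1),
                       reduction='batchmean') * (T * T)
    return ce + alpha * distill
```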
Deep Generative Replay is proposed, a novel framework with a cooperative dual-model architecture consisting of a deep generative model ("generator") and a task-solving model ("solver"); with only these two models, training data for previous tasks can easily be sampled and interleaved with data for a new task.
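One training step under this scheme might be sketched as follows; `generator.sample` and the frozen `old_solver` are assumed interfaces, and the replay ratio is illustrative:

```python
import torch
import torch.nn.functional as F

def replay_step(solver, old_solver, generator, x_new, y_new, optimizer,
                replay_ratio=0.5):
    """Interleave real new-task data with generated pseudo-data
    labelled by the previous solver."""
    with torch.no_grad():
        x_old = generator.sample(int(replay_ratio * len(x_new)))
        y_old = old_solver(x_old).argmax(1)       # pseudo-labels for past tasks
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])
    loss = F.cross_entropy(solver(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```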
Three continual learning scenarios are described based on whether task identity is provided at test time and, if it is not, whether it must be inferred; it is found that regularization-based approaches fail in the class-incremental scenario and that replaying representations of previous experiences seems required for solving it.
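The practical difference shows up at prediction time; a sketch contrasting the task-incremental and class-incremental cases (domain-incremental, with its fixed output space, is omitted for brevity), where `task_classes` is an assumed list of per-task class-index tensors:

```python
import torch

def predict(logits, task_classes, task_id=None):
    """Task-IL: task identity is given, so restrict the output space.
    Class-IL: identity is withheld, so choose among all classes seen so
    far -- the hardest scenario, where regularization-based methods fail."""
    if task_id is not None:                       # Task-IL
        allowed = task_classes[task_id]
        return allowed[logits[:, allowed].argmax(1)]
    return logits.argmax(1)                       # Class-IL
```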
This work empirically analyzes the effectiveness of a very small episodic memory in a continual learning setup where each training example is seen only once, and finds that repeated training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it.
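Such a tiny memory is commonly filled by reservoir sampling, so that each once-seen example is retained with equal probability; a minimal, illustrative sketch:

```python
import random

class TinyMemory:
    """Fixed-capacity episodic memory filled by reservoir sampling."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)   # algorithm R
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```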
A novel continual learning protocol based on the CORe50 benchmark is introduced, and two rehearsal-free continual learning techniques, CWR* and AR1*, are proposed that can learn effectively even in the challenging case of nearly 400 small non-i.i.d. incremental batches.
This paper proposes a simple yet effective method for detecting any abnormal samples, applicable to any pre-trained softmax neural classifier; it obtains class-conditional Gaussian distributions over the (low- and upper-level) features of the deep model under Gaussian discriminant analysis.
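The resulting confidence score is based on the Mahalanobis distance to the nearest class mean; a sketch, assuming per-class feature means `mu` of shape (C, D) and a shared `precision` (inverse covariance) matrix fit on training features:

```python
import torch

def mahalanobis_score(feats, mu, precision):
    """Negative squared Mahalanobis distance to the nearest class mean;
    low scores flag abnormal (out-of-distribution) samples."""
    scores = []
    for c in range(mu.size(0)):
        d = feats - mu[c]                         # (N, D)
        scores.append(-(d @ precision * d).sum(1))
    return torch.stack(scores, dim=1).max(1).values
```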
This work formulates sample selection as a constraint reduction problem based on the constrained optimization view of continual learning, and shows that it is equivalent to maximizing the diversity of samples in the replay buffer with the parameter gradients as the feature.
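A simplified greedy heuristic for that diversity objective (not the exact published algorithm) might look as follows, where `g` is assumed to be the flattened loss gradient of the incoming example:

```python
import torch
import torch.nn.functional as F

def maybe_store(buffer, buffer_grads, example, g, capacity):
    """Keep replay samples whose gradients are maximally diverse:
    swap out the most redundant stored item when the newcomer's
    gradient is less similar to the buffer than that item is."""
    g = F.normalize(g, dim=0)
    if len(buffer) < capacity:
        buffer.append(example); buffer_grads.append(g)
        return
    G = torch.stack(buffer_grads)                 # (M, D)
    sims = G @ G.t()
    sims.fill_diagonal_(-1.0)
    redundancy = sims.max(1).values               # similarity to nearest neighbour
    worst = int(redundancy.argmax())
    if (G @ g).max() < redundancy[worst]:         # newcomer adds diversity
        buffer[worst] = example
        buffer_grads[worst] = g
```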