These leaderboards are used to track progress in overlapped-15-1-1, a continual semantic segmentation setting.
This work proposes the Learning without Forgetting method, which trains the network using only new-task data while preserving its original capabilities, and performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
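Conceptually, Learning without Forgetting adds a knowledge-distillation term: the old model's softened predictions on new-task inputs serve as targets for the updated network. A minimal plain-Python sketch of that distillation loss (function names and the temperature value are illustrative, not the paper's code):

```python
import math

def softened_softmax(logits, T=2.0):
    # Temperature-scaled softmax; T > 1 spreads probability mass,
    # exposing more of the old model's "dark knowledge".
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    # Cross-entropy between the frozen old model's softened outputs
    # (targets) and the new model's softened outputs on the same input.
    # Minimizing this keeps the old tasks' responses stable while the
    # network is trained on new-task data only.
    targets = softened_softmax(old_logits, T)
    probs = softened_softmax(new_logits, T)
    return -sum(t * math.log(p) for t, p in zip(targets, probs))
```

When the new model reproduces the old model's logits the loss reduces to the entropy of the targets, its minimum; any drift on old-task behaviour increases it.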
This work formally introduces the incremental learning problem for semantic segmentation, where a pixel-wise labeling must be produced, and proposes several approaches operating both on the output logits and on intermediate features.
Local POD is proposed, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at the feature level and significantly outperforms state-of-the-art methods in existing CSS scenarios, as well as in newly proposed challenging benchmarks.
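The idea behind Local POD can be sketched as pooling an activation map along its width and its height, doing so over sub-regions at several scales, and penalizing the distance between the old and new model's pooled statistics. A toy plain-Python version over 2D feature maps, assuming map sizes divisible by each scale (names and scale choices are illustrative):

```python
def pod_pooling(feat):
    # feat: 2D activation map (H x W). POD collapses each spatial axis:
    # per-row sums (width pooled) and per-column sums (height pooled).
    h_pool = [sum(row) for row in feat]
    w_pool = [sum(col) for col in zip(*feat)]
    return h_pool + w_pool

def local_pod_distance(feat_old, feat_new, scales=(1, 2)):
    # Multi-scale: split the map into s x s regions and compare pooled
    # statistics region by region. Scale 1 captures long-range relations;
    # larger scales preserve short-range, local ones.
    total = 0.0
    h, w = len(feat_old), len(feat_old[0])
    for s in scales:
        rh, rw = h // s, w // s
        for i in range(s):
            for j in range(s):
                sub_old = [row[j*rw:(j+1)*rw] for row in feat_old[i*rh:(i+1)*rh]]
                sub_new = [row[j*rw:(j+1)*rw] for row in feat_new[i*rh:(i+1)*rh]]
                po, pn = pod_pooling(sub_old), pod_pooling(sub_new)
                total += sum((a - b) ** 2 for a, b in zip(po, pn)) ** 0.5
    return total
```

The distance is zero when old and new features match everywhere and grows with local discrepancies, which is what makes it usable as a distillation penalty.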
This paper proposes a new method, dubbed SSUL-M (Semantic Segmentation with Unknown Label with Memory), which carefully combines techniques tailored for semantic segmentation and, for the first time in CISS, utilizes a tiny exemplar memory to improve both plasticity and stability.
This work proposes a structural re-parameterization mechanism, the representation compensation (RC) module, to decouple the representation learning of old and new knowledge, and achieves state-of-the-art performance.
Extensive evaluations on multiple public benchmarks show that the proposed self-attention transfer method effectively alleviates catastrophic forgetting, and that its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
This work proposes to address the background shift with a novel classifier initialization method which employs gradient-based attribution to identify the most relevant weights for new classes from the classifier’s weights for the previous background and transfers these weights to the new classifier.
The proposed background-class separation framework for CISS encourages separation between the background and new classes through a novel orthogonal objective along with label-guided output distillation; state-of-the-art results validate the effectiveness of these methods.
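One common way to express such an orthogonal objective is to penalize the cosine similarity between the background representation and each new-class feature, pushing them toward orthogonality. A minimal sketch under that assumption, using nonzero feature vectors (function names are illustrative, not the paper's code):

```python
def cosine(u, v):
    # Cosine similarity; assumes u and v are nonzero vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def orthogonality_loss(bg_feat, new_class_feats):
    # Squared cosine similarity between the background feature and each
    # new-class feature; zero exactly when every pair is orthogonal,
    # encouraging background/new-class separation in feature space.
    return sum(cosine(bg_feat, f) ** 2 for f in new_class_feats)
```

Squaring makes the penalty sign-agnostic, so anti-aligned features are discouraged just as strongly as aligned ones.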