3260 papers • 126 benchmarks • 313 datasets
Continual learning in semantic segmentation: extending a segmentation model to new classes or domains over time while retaining previously learned knowledge.
These leaderboards are used to track progress in Continual Semantic Segmentation.
Use these libraries to find Continual Semantic Segmentation models and implementations.
Local POD is proposed: a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at the feature level, and significantly outperforms state-of-the-art methods in existing CSS scenarios as well as in newly proposed challenging benchmarks.
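To make the mechanism concrete, here is a minimal PyTorch sketch of multi-scale pooling distillation between a frozen old model's features and the new model's features. The function names, window scales, and use of plain MSE are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pod_statistics(feat):
    # Pool the feature map along width and height separately and
    # concatenate the two profiles (a simple POD-style embedding).
    w_pool = feat.mean(dim=3)  # (B, C, H)
    h_pool = feat.mean(dim=2)  # (B, C, W)
    return torch.cat([w_pool, h_pool], dim=2)  # (B, C, H + W)

def local_pod_loss(feat_old, feat_new, scales=(1, 2, 4)):
    # Multi-scale pooling distillation: compare pooled statistics of the
    # frozen old model and the new model inside local windows at several
    # scales. Scale 1 covers the whole map (long-range relations); finer
    # scales constrain short-range spatial layout.
    loss = 0.0
    for s in scales:
        B, C, H, W = feat_new.shape
        h, w = H // s, W // s
        for i in range(s):
            for j in range(s):
                old_win = feat_old[:, :, i*h:(i+1)*h, j*w:(j+1)*w]
                new_win = feat_new[:, :, i*h:(i+1)*h, j*w:(j+1)*w]
                loss = loss + F.mse_loss(pod_statistics(old_win.detach()),
                                         pod_statistics(new_win))
    return loss / len(scales)
```

In practice this term would be added to the segmentation loss on new-task data, so the new model fits new classes while its intermediate features stay spatially consistent with the old model's.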
An algorithm is developed for adapting a semantic segmentation model trained on a labeled source domain so that it generalizes well to an unlabeled target domain, achieving performance competitive even with joint UDA approaches.
This paper proposes a new method, dubbed SSUL-M (Semantic Segmentation with Unknown Label with Memory), which carefully combines techniques tailored for semantic segmentation and, for the first time in CISS, utilizes a tiny exemplar memory to improve both plasticity and stability.
This work proposes an architecture that leverages the simultaneous availability of two or more datasets to learn a disentanglement between content and domain in an adversarial fashion, taking inspiration from domain adaptation and combining it with continual learning for hippocampal segmentation in brain MRI.
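One common way to realize this kind of adversarial content/domain disentanglement is a domain classifier trained through a gradient-reversal layer, sketched below in PyTorch. The head architecture and names are hypothetical illustrations of the pattern, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Gradient reversal layer: identity in the forward pass, negated
    # gradient in the backward pass -- the standard trick that turns a
    # domain classifier into an adversary of the shared encoder.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

class DomainAdversarialHead(nn.Module):
    # Illustrative sketch: the domain classifier learns to identify the
    # source dataset of a feature vector, while the reversed gradient
    # pushes the encoder towards domain-invariant (content-only) features.
    def __init__(self, feat_dim, num_domains):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_domains))

    def forward(self, feat):
        return self.classifier(GradReverse.apply(feat))
```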
This work proposes a structural re-parameterization mechanism, named the representation compensation (RC) module, to decouple the representation learning of old and new knowledge; it outperforms state-of-the-art methods. A minimal sketch of the idea follows.
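The sketch below shows the structural re-parameterization idea in PyTorch: two parallel convolution branches (here, one frozen to hold old knowledge and one trainable for new classes) whose linearity lets them be merged into a single convolution afterwards. The branch roles and the simple additive merge rule are simplifying assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class RCModule(nn.Module):
    # Two parallel 3x3 convolutions over the same input: the frozen
    # branch preserves old-task representations, the trainable branch
    # compensates with new-task representations.
    def __init__(self, channels):
        super().__init__()
        self.frozen = nn.Conv2d(channels, channels, 3, padding=1)
        self.trainable = nn.Conv2d(channels, channels, 3, padding=1)
        for p in self.frozen.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.frozen(x) + self.trainable(x)

    @torch.no_grad()
    def reparameterize(self):
        # Because convolution is linear, the two branches collapse into
        # one kernel: W = W_frozen + W_trainable, b = b_f + b_t, so
        # inference pays no extra cost.
        merged = nn.Conv2d(self.frozen.in_channels,
                           self.frozen.out_channels, 3, padding=1)
        merged.weight.copy_(self.frozen.weight + self.trainable.weight)
        merged.bias.copy_(self.frozen.bias + self.trainable.bias)
        return merged
```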
Extensive evaluations on multiple public benchmarks show that the proposed self-attention transfer method further alleviates catastrophic forgetting, and that its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
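As a hedged illustration of what transferring self-attention can look like, the sketch below distills pairwise spatial affinity maps, rather than raw activations, from the old model to the new one. The affinity construction and MSE objective are assumptions made for the example, not necessarily the paper's formulation.

```python
import torch
import torch.nn.functional as F

def self_attention_map(feat):
    # Build a pairwise affinity (self-attention) map from a feature map:
    # flatten spatial positions and compare them with normalized dot
    # products, giving a (B, HW, HW) relation matrix. For large maps one
    # would downsample first to keep this affordable.
    B, C, H, W = feat.shape
    f = feat.flatten(2).transpose(1, 2)     # (B, HW, C)
    f = F.normalize(f, dim=2)
    return torch.bmm(f, f.transpose(1, 2))  # (B, HW, HW)

def attention_transfer_loss(feat_old, feat_new):
    # Penalize drift between the old and new models' spatial relation
    # structure; matching relations instead of activations leaves the
    # new model freer to reshape features for new classes.
    return F.mse_loss(self_attention_map(feat_old.detach()),
                      self_attention_map(feat_new))
```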
The findings suggest that, in a class-incremental setting, it is critical to achieve a uniform distribution over the different classes in the buffer to avoid a bias towards newly learned classes, and that effective sampling methods significantly decrease the representation shift in early layers, a major cause of forgetting in domain-incremental learning.
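A small sketch of a replay buffer that enforces such a uniform class distribution is shown below, using per-class slots with reservoir-style replacement. The capacities and the API are illustrative assumptions, not a specific paper's implementation.

```python
import random
from collections import defaultdict

class ClassBalancedBuffer:
    # Replay buffer with a uniform class distribution: each class gets
    # an equal slot budget, so newly learned classes cannot crowd out
    # older ones.
    def __init__(self, total_capacity, num_classes):
        self.per_class = max(1, total_capacity // num_classes)
        self.slots = defaultdict(list)
        self.seen = defaultdict(int)  # per-class stream counters

    def add(self, sample, class_id):
        self.seen[class_id] += 1
        slot = self.slots[class_id]
        if len(slot) < self.per_class:
            slot.append(sample)
        else:
            # Reservoir sampling keeps every sample from the stream
            # equally likely to be stored in its class slot.
            idx = random.randrange(self.seen[class_id])
            if idx < self.per_class:
                slot[idx] = sample

    def sample(self, k):
        pool = [s for slot in self.slots.values() for s in slot]
        return random.sample(pool, min(k, len(pool)))
```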
Continual semantic segmentation (CSS) aims to extend an existing model to tackle unseen tasks while retaining its old knowledge. Naively fine-tuning the old model on new data leads to catastrophic forgetting. A common solution is knowledge distillation (KD), where the output distribution of the new model is regularized to be similar to that of the old model. However, in CSS this is challenging because of the background shift issue. Existing KD-based CSS methods continue to suffer from confusion between the background and novel classes, since they fail to establish a reliable class correspondence for distillation. To address this issue, we propose a new label-guided knowledge distillation (LGKD) loss, where the old model output is expanded and transplanted (with the guidance of the ground-truth label) to form a semantically appropriate class correspondence with the new model output. Consequently, the useful knowledge from the old model can be effectively distilled into the new model without causing confusion. We conduct extensive experiments on two prevailing CSS benchmarks, Pascal-VOC and ADE20K, where our LGKD significantly boosts the performance of three competing methods, especially on novel mIoU by up to +76%, setting a new state of the art. Finally, to further demonstrate its generalization ability, we introduce the first CSS benchmark for 3D point clouds, based on ScanNet, along with several re-implemented baselines for comparison. Experiments show that LGKD is versatile in both 2D and 3D modalities without requiring ad hoc design. Code is available at https://github.com/Ze-Yang/LGKD.
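A simplified sketch of the label-guided correspondence idea follows: the old model's background probability is transplanted onto the novel class indicated by the ground truth before distillation, so old knowledge is not distilled against the wrong class. The channel layout (background = channel 0) and the soft cross-entropy form are assumptions for illustration; the official implementation is at the repository linked above.

```python
import torch
import torch.nn.functional as F

def lgkd_loss(logits_old, logits_new, labels, num_old):
    # logits_old: (B, num_old, H, W) from the frozen old model
    # logits_new: (B, K_new, H, W) from the new model, K_new > num_old
    # labels:     (B, H, W) ground-truth class ids for the new task
    B, K_new, H, W = logits_new.shape
    p_old = F.softmax(logits_old.detach(), dim=1)

    # Expand the old output to the new class space.
    target = logits_new.new_zeros(B, K_new, H, W)
    target[:, :num_old] = p_old

    # Where the GT label is a novel class, move the old model's
    # background mass onto that class, establishing a semantically
    # correct correspondence for distillation.
    novel = (labels >= num_old) & (labels < K_new)
    if novel.any():
        b, y, x = novel.nonzero(as_tuple=True)
        cls = labels[b, y, x]
        target[b, cls, y, x] = target[b, 0, y, x]
        target[b, 0, y, x] = 0.0

    # Soft cross-entropy between the transplanted target and the new
    # model's prediction.
    log_p_new = F.log_softmax(logits_new, dim=1)
    return -(target * log_p_new).sum(dim=1).mean()
```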