Semantic segmentation with continuous increments of classes.
These leaderboards are used to track progress in Class-Incremental Semantic Segmentation.
No benchmarks available.
Use these libraries to find Class-Incremental Semantic Segmentation models and implementations.
No datasets available.
This paper proposes a new method, dubbed SSUL-M (Semantic Segmentation with Unknown Label with Memory), which carefully combines techniques tailored for semantic segmentation and, for the first time in CISS, utilizes a tiny exemplar memory to improve both plasticity and stability.
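A minimal sketch of the exemplar-memory idea in PyTorch. The class name `ExemplarMemory` and the reservoir-sampling policy are illustrative assumptions, not SSUL-M's actual sampling or unknown-label handling; the point is only that a tiny buffer of past (image, mask) pairs can be replayed alongside new-step data.

```python
import random

import torch


class ExemplarMemory:
    """Tiny fixed-size buffer of (image, mask) pairs from past steps."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, image, mask):
        # Reservoir sampling keeps an approximately uniform sample of past data.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((image, mask))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (image, mask)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        images, masks = zip(*batch)
        return torch.stack(images), torch.stack(masks)
```

During each incremental step, a small batch drawn with `sample()` would be mixed into training to stabilize previously learned classes.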
This work demonstrates that the semantic shift of the background class and a bias towards new classes are the major causes of forgetting in CISS, and shows that both causes mostly manifest themselves in the deeper classification layers of the network, while the early layers of the model are not affected.
A new decomposed knowledge distillation (DKD) technique is proposed that improves the rigidity of a model and addresses the forgetting problem more effectively, and a novel initialization method for training classifiers on novel classes is introduced.
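For context, a minimal sketch of the standard (undecomposed) knowledge-distillation term that such methods build on: the current model's logits for old classes are matched against a frozen copy of the previous model. This is a generic baseline, not the paper's DKD decomposition; the function name and shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def old_class_distillation(new_logits, old_logits, temperature=2.0):
    """Standard KD on the logits of previously seen classes.

    new_logits: (B, C_old + C_new, H, W) from the current model
    old_logits: (B, C_old, H, W) from the frozen previous model
    """
    c_old = old_logits.shape[1]
    # Compare only the old-class slice of the new model's output.
    student = F.log_softmax(new_logits[:, :c_old] / temperature, dim=1)
    teacher = F.softmax(old_logits / temperature, dim=1)
    # Pointwise KL, summed and divided by batch size, rescaled by T^2.
    return F.kl_div(student, teacher, reduction="batchmean") * temperature**2
```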
Despite the significant recent progress made on 3D point cloud semantic segmentation, the current methods require training data for all classes at once and are not suitable for real-life scenarios where new categories are continuously discovered. Substantial memory storage and expensive re-training are required to update the model to sequentially arriving data for new concepts. In this paper, to continually learn new categories using previous knowledge, we introduce class-incremental semantic segmentation of 3D point clouds. Unlike 2D images, 3D point clouds are disordered and unstructured, making it difficult to store and transfer knowledge, especially when the previous data is not available. We further face the challenge of semantic shift, where previous/future classes are indiscriminately collapsed and treated as the background in the current step, causing a dramatic performance drop on past classes. We exploit the structure of point clouds and propose two strategies to address these challenges. First, we design a geometry-aware distillation module that transfers point-wise feature associations in terms of their geometric characteristics. To counter forgetting caused by the semantic shift, we further develop an uncertainty-aware pseudo-labelling scheme that eliminates noise in uncertain pseudo-labels by label propagation within a local neighborhood. Our extensive experiments on S3DIS and ScanNet in a class-incremental setting show impressive results comparable to the joint training strategy (upper bound). Code is available at: https://github.com/leolyj/3DPC-CISS
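A minimal sketch of the confidence-filtering step behind such pseudo-labelling, assuming a hypothetical `old_model` that maps point features to class probabilities. The paper's scheme goes further and refines uncertain labels by propagation within a local neighborhood; here uncertain points are simply ignored.

```python
import torch

IGNORE_INDEX = 255  # points excluded from the segmentation loss


@torch.no_grad()
def confidence_pseudo_labels(old_model, points, threshold=0.9):
    """Pseudo-label current-step points with the previous model,
    keeping only confident predictions.

    points: (N, D) point features; returns (N,) labels with uncertain
    predictions marked IGNORE_INDEX.
    """
    probs = old_model(points).softmax(dim=-1)  # (N, C_old)
    conf, labels = probs.max(dim=-1)
    labels[conf < threshold] = IGNORE_INDEX  # drop uncertain points
    return labels
```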
This paper investigates efficient multi-grained knowledge reuse for CISS and proposes a novel method, Evolving kNowleDge minING (ENDING), which employs a frozen backbone and demonstrates new state-of-the-art performance.
This work proposes to address the background shift with a novel classifier initialization method that employs gradient-based attribution to identify, among the classifier's weights for the previous background, those most relevant to the new classes, and transfers these weights to the new classifier.
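A hypothetical sketch of the gradient-based attribution idea, not the paper's exact procedure: each background-classifier weight is scored by a gradient-times-weight relevance on features of the new class, and only the top-scoring weights are transferred. All names and the top-k selection rule are assumptions.

```python
import torch


def attribution_init(bg_weight, new_class_feats, topk_ratio=0.5):
    """Initialize a new classifier from the most relevant background weights.

    bg_weight:       (D,) weights of the previous background classifier
    new_class_feats: (N, D) features of pixels annotated with the new class
    """
    w = bg_weight.clone().requires_grad_(True)
    # Mean background logit over new-class pixels.
    score = (new_class_feats @ w).mean()
    score.backward()
    relevance = (w.grad * w.detach()).abs()  # gradient-x-weight attribution
    k = max(1, int(topk_ratio * w.numel()))
    new_weight = torch.zeros_like(bg_weight)
    idx = relevance.topk(k).indices
    new_weight[idx] = bg_weight[idx]  # transfer only the most relevant weights
    return new_weight
```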
The proposed MicroSeg is based on the assumption that background regions with strong objectness likely belong to concepts from historical or future stages, and first splits the given image into hundreds of segment proposals with a proposal generator to avoid forgetting old knowledge at the current training stage.
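A minimal sketch of how such proposals might be used to protect the background, assuming a hypothetical proposal generator that outputs binary masks with objectness scores; MicroSeg's actual proposal handling differs. Confident object proposals inside the background are relabeled as "unknown" so they are not trained as background negatives.

```python
import torch

UNKNOWN = 254  # placeholder label for likely-object background regions


def relabel_background(gt_mask, proposal_masks, objectness, thresh=0.5):
    """Mark confident object proposals inside the background as unknown.

    gt_mask:        (H, W) current-step labels, 0 = background
    proposal_masks: (P, H, W) binary masks from a proposal generator
    objectness:     (P,) objectness scores
    """
    out = gt_mask.clone()
    for mask, score in zip(proposal_masks, objectness):
        if score > thresh:
            region = mask.bool() & (out == 0)  # only untouched background
            out[region] = UNKNOWN
    return out
```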
This work proposes a pseudo-labeling strategy to augment the few-shot training annotations in order to learn novel classes more effectively and uses knowledge distillation on both labeled and unlabeled data to retain knowledge on existing classes.
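A minimal sketch of augmenting few-shot annotations with pseudo labels, assuming a hypothetical `old_model` trained on the base classes; the function name, threshold, and merging rule are illustrative assumptions.

```python
import torch


@torch.no_grad()
def merge_pseudo_labels(old_model, image, novel_mask, novel_ids, threshold=0.8):
    """Augment sparse novel-class annotations with base-class pseudo labels.

    image:      (3, H, W) input image
    novel_mask: (H, W) few-shot labels for novel classes, 255 elsewhere
    novel_ids:  list of novel class indices
    """
    probs = old_model(image.unsqueeze(0)).softmax(dim=1)[0]  # (C_base, H, W)
    conf, pseudo = probs.max(dim=0)
    merged = pseudo.clone()
    merged[conf < threshold] = 255  # ignore low-confidence pixels
    keep = torch.isin(novel_mask, torch.as_tensor(novel_ids))
    merged[keep] = novel_mask[keep]  # few-shot annotations take priority
    return merged
```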
Inspired by the Gaussian mixture model, which samples from a mixture of Gaussian distributions, CoinSeg emphasizes intra-class diversity with multiple contrastive representation centroids, and ensures the model's stability and alleviates forgetting through a flexible tuning strategy.
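A minimal sketch of a multi-centroid contrastive objective in the spirit of this idea, not CoinSeg's exact loss: each class keeps K centroids, and each embedding is pulled toward its class's nearest centroid, which preserves intra-class diversity. Shapes and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F


def multi_centroid_loss(features, labels, centroids, temperature=0.1):
    """Pull each pixel embedding toward its class's nearest centroid.

    features:  (N, D) L2-normalized pixel embeddings
    labels:    (N,) class indices
    centroids: (C, K, D) K L2-normalized centroids per class
    """
    c, k, d = centroids.shape
    sims = features @ centroids.reshape(c * k, d).t()  # (N, C*K)
    sims = sims.reshape(-1, c, k).max(dim=-1).values   # best centroid per class
    return F.cross_entropy(sims / temperature, labels)
```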