3260 papers • 126 benchmarks • 313 datasets
The goal of Novel Class Discovery (NCD) is to identify new classes in unlabeled data by exploiting prior knowledge from known classes. In this setup, the data is split into two sets: the first is a labeled set containing the known classes, and the second is an unlabeled set containing the unknown classes that must be discovered.
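The split described above can be sketched in a few lines. This is a toy illustration with synthetic features; the number of classes, the choice of which classes are "known", and the dataset size are all arbitrary assumptions:

```python
import numpy as np

# Toy NCD data split: classes 0-4 are "known" (labeled),
# classes 5-9 are "novel" (unlabeled, to be discovered).
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 32))    # synthetic feature vectors
labels = rng.integers(0, 10, size=1000)   # synthetic class assignments

known_classes = list(range(5))
known_mask = np.isin(labels, known_classes)

# Labeled set: samples from known classes, with their labels visible.
x_labeled, y_labeled = features[known_mask], labels[known_mask]
# Unlabeled set: samples from novel classes; labels are hidden at training time.
x_unlabeled = features[~known_mask]

print(x_labeled.shape, x_unlabeled.shape)
```

The discovery method then sees `(x_labeled, y_labeled)` and `x_unlabeled`, and must cluster the unlabeled samples into the novel classes.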
(Image credit: Papersgraph)
These leaderboards are used to track progress in Novel Class Discovery.
Use these libraries to find Novel Class Discovery models and implementations.
No subtasks available.
This study investigates the failure of parametric classifiers, verifies the effectiveness of previous design choices when high-quality supervision is available, and identifies unreliable pseudo-labels as a key problem.
A novel knowledge distillation framework is proposed that uses the authors' class-relation representation to regularize the learning of novel classes, together with a learnable weighting function for the regularization that adaptively promotes knowledge transfer based on the semantic similarity between novel and known classes.
This work first extracts the structure of the retinal images, then combines the structure features with the last-layer features extracted from the original healthy image to reconstruct that image, and finally measures the difference between the structure extracted from the original image and that of the reconstructed image.
This paper demystifies the assumptions behind NCD and finds that high-level semantic features should be shared between the seen and unseen classes, proving that NCD is theoretically solvable under certain assumptions and can be naturally linked to meta-learning, which rests on exactly the same assumption as NCD.
This paper addresses Novel Class Discovery (NCD), the task of unveiling new classes in a set of unlabeled samples given a labeled dataset with known classes, and builds a new framework, named Neighborhood Contrastive Learning (NCL), to learn discriminative representations that are important to clustering performance.
This work suggests that the common approach of bootstrapping an image representation using the labelled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data.
This work proposes a UNified Objective function (UNO) for discovering novel classes, with the explicit purpose of favoring synergy between supervised and unsupervised learning, and outperforms the state of the art on several benchmarks.
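The core idea of a unified objective, a single classification head over known plus novel classes trained with one loss, can be sketched as follows. This is a hedged toy version: UNO itself obtains pseudo-labels via swapped prediction with an optimal-transport step, whereas here a plain argmax over the novel-class logits stands in for that step, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
num_known, num_novel = 5, 5
W = rng.normal(size=(32, num_known + num_novel))  # one linear head over all classes

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x_lab = rng.normal(size=(8, 32))
y_lab = rng.integers(0, num_known, size=8)   # ground-truth known-class labels
x_unlab = rng.normal(size=(8, 32))

p_lab = softmax(x_lab @ W)
p_unlab = softmax(x_unlab @ W)

# Pseudo-labels for unlabeled samples, restricted to the novel-class columns
# (a simplification of UNO's swapped-prediction pseudo-labeling).
pseudo = p_unlab[:, num_known:].argmax(axis=1) + num_known

# One cross-entropy covers both sets, so supervised and unsupervised
# learning share the same head and the same loss form.
loss = -(np.log(p_lab[np.arange(8), y_lab]).mean()
         + np.log(p_unlab[np.arange(8), pseudo]).mean())
print(loss)
```

Because labeled and pseudo-labeled samples flow through the identical loss, gradients from both sets shape the shared representation, which is the "synergy" the summary refers to.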
This work introduces a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS), which aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes, and proposes the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
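The entropy-based uncertainty modeling above can be illustrated with a short sketch: rank unlabeled predictions by entropy and keep only the low-entropy (confident) ones for self-training. The Dirichlet-sampled probabilities and the 50% cutoff are arbitrary assumptions for illustration, not EUMS's actual settings:

```python
import numpy as np

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(5), size=200)   # toy per-pixel softmax outputs

# Shannon entropy of each prediction; low entropy = confident prediction.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Keep the most confident half as "clean" pseudo-labels for self-training;
# the noisy high-entropy half would be handled separately.
keep = entropy < np.quantile(entropy, 0.5)
pseudo_labels = probs[keep].argmax(axis=1)

print(keep.sum(), pseudo_labels.shape)
```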
This work characterizes existing NCD approaches as single-stage or two-stage methods, based on whether they require access to labeled and unlabeled data together while discovering new classes, and devises a simple yet powerful loss function, referred to as Spacing Loss, that enforces separability in the latent space using cues from multi-dimensional scaling.
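The separability idea behind such a loss can be sketched as a penalty that pushes class centroids apart in latent space. This toy version only illustrates separability with a hinge on pairwise centroid distances; the actual Spacing Loss derives its cues from multi-dimensional scaling, and the margin value here is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=(100, 16))          # latent embeddings
assign = rng.integers(0, 4, size=100)   # (pseudo-)class assignments

# Per-class centroids in the latent space.
centroids = np.stack([z[assign == k].mean(axis=0) for k in range(4)])

# Pairwise Euclidean distances between centroids.
diff = centroids[:, None, :] - centroids[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Hinge penalty: distinct centroids closer than margin m incur a cost,
# so minimizing this loss spreads the classes apart.
m = 5.0
iu = np.triu_indices(4, k=1)            # each unordered pair once
spacing_loss = np.maximum(0.0, m - dist[iu]).mean()
print(spacing_loss)
```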