Learning fine-grained representations from a coarsely-labelled dataset, which can significantly reduce labelling cost. As a simple example, for the task of differentiating between pet breeds, we need a knowledgeable cat lover to distinguish between 'British Shorthair' and 'Siamese', but even a child annotator can help discriminate between 'cat' and 'non-cat'.
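The cost saving above comes from annotators only ever supplying the coarse label, while the fine-grained distinctions remain latent in the data. A minimal sketch of that setup, with a hypothetical fine-to-coarse mapping (the breed and species names are illustrative, not from any specific dataset):

```python
# Hypothetical fine-to-coarse mapping: annotators only ever supply the
# coarse label on the right; the fine label on the left is never observed
# at training time and must be recovered by the learned representation.
fine_to_coarse = {
    "british_shorthair": "cat",
    "siamese": "cat",
    "beagle": "dog",
    "poodle": "dog",
}

def coarsen(fine_labels):
    """Collapse fine-grained labels to the coarse labels actually annotated."""
    return [fine_to_coarse[f] for f in fine_labels]

fine = ["siamese", "beagle", "british_shorthair"]
print(coarsen(fine))  # ['cat', 'dog', 'cat']
```

Methods in this area then try to learn an embedding whose structure separates the fine classes even though the supervision signal only distinguishes the coarse ones.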
These leaderboards are used to track progress in Learning with Coarse Labels.
Use these libraries to find Learning with Coarse Labels models and implementations.
No subtasks available.
This work proposes an algorithm to learn the fine-grained patterns for the target task, when only its coarse-class labels are available, and provides a theoretical guarantee for this.
A novel 'Angular normalization' module is introduced that makes it possible to effectively combine supervised and self-supervised contrastive pre-training for the proposed C2FS task, demonstrating significant gains in a broad study over multiple baselines and datasets.
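Both supervised and self-supervised contrastive losses compare embeddings by cosine similarity, so a common ingredient of combining them is projecting features onto the unit hypersphere. A minimal sketch of that normalization step (this is the standard L2 normalization; the paper's actual 'Angular normalization' module may differ in detail):

```python
import numpy as np

def angular_normalize(features, eps=1e-8):
    """Project each embedding onto the unit hypersphere, so that the
    similarity a contrastive loss sees depends only on the angle between
    embeddings, not on their magnitudes. `eps` guards against division
    by zero for (near-)zero vectors."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.maximum(norms, eps)

z = np.random.randn(4, 8)       # a batch of 4 raw 8-d embeddings
z_hat = angular_normalize(z)    # every row of z_hat now has unit length
```

After this step, the dot product of any two rows of `z_hat` is their cosine similarity, which is the quantity both loss families operate on.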
This work proposes a contrastive learning method, called Masked Contrastive learning (MaskCon), to address an under-explored problem setting: learning from a coarse-labelled dataset in order to solve a finer labelling problem.
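The core idea of masking in a coarse-label contrastive setting can be sketched as follows: samples from other coarse classes are masked out of the positive set, while samples sharing the anchor's coarse class receive soft weights from their similarity to the anchor rather than being treated as uniform positives. This is a hypothetical simplification, not the exact MaskCon loss:

```python
import numpy as np

def soft_contrastive_targets(sim, coarse_labels, i, temperature=0.1):
    """Soft targets for anchor i over a batch.

    sim           : (N, N) pairwise similarity matrix
    coarse_labels : (N,) coarse class per sample
    i             : anchor index

    Samples outside the anchor's coarse class are masked (target 0);
    same-coarse samples are weighted by a softmax over their similarities.
    Illustrative only; the published MaskCon objective differs in detail.
    """
    same_coarse = coarse_labels == coarse_labels[i]
    same_coarse[i] = False                 # the anchor is not its own target
    logits = sim[i] / temperature
    logits[~same_coarse] = -np.inf         # mask other coarse classes
    exp = np.exp(logits - logits[same_coarse].max())  # stable softmax
    return exp / exp[same_coarse].sum()

rng = np.random.default_rng(0)
sim = rng.standard_normal((4, 4))
sim = (sim + sim.T) / 2                    # symmetric toy similarities
coarse = np.array([0, 0, 1, 1])
targets = soft_contrastive_targets(sim, coarse, i=0)
```

Here `targets` is a probability distribution supported only on samples that share anchor 0's coarse label, with mass concentrated on the most similar ones.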
FocalSegNet is proposed, a novel 3D focal modulation UNet to detect an aneurysm and offer an initial, coarse segmentation of it from time-of-flight MRA image patches, which is further refined with a dense conditional random field (CRF) post-processing layer to produce a final segmentation map.