In contrast to supervised few-shot learning, unsupervised few-shot learning has access only to an unlabeled dataset during the pre-training or meta-training stage.
This work proposes an effective unsupervised FSL method that learns representations with self-supervision following the InfoMax principle, achieving comparable performance on widely used FSL benchmarks without any labels for the base classes.
It is demonstrated that the self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset and performs comparably to supervised methods while requiring orders of magnitude fewer labels.
This work rethinks the relations between class concepts and proposes a novel Absolute-relative Learning paradigm to take full advantage of label information to refine the image and relation representations in both supervised and unsupervised scenarios.
This paper develops a novel framework called Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation (ULDA), which accounts for the distribution diversity within each constructed pretext few-shot task when applying data augmentation.
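The recipe these augmentation-based methods build on can be sketched roughly as follows. This is a minimal illustration of the general idea, not ULDA's exact pipeline: the function names and the specific augmentations are our own, and here each sampled unlabeled image is treated as its own pseudo-class, with the support and query sets drawn from different augmentation families so that the two sets differ in distribution.

```python
import numpy as np

def make_pretext_episode(unlabeled, n_way, support_aug, query_aug, rng):
    """Build one pseudo few-shot episode from unlabeled data.

    Each sampled image becomes its own pseudo-class; the support and
    query sets use *different* augmentation functions, inducing a
    distribution shift between the two sets.
    """
    idx = rng.choice(len(unlabeled), size=n_way, replace=False)
    support = np.stack([support_aug(unlabeled[i], rng) for i in idx])
    query = np.stack([query_aug(unlabeled[i], rng) for i in idx])
    labels = np.arange(n_way)  # pseudo-labels: one class per sampled image
    return support, query, labels
```

With light additive noise for the support set and multiplicative noise for the query set (two hypothetical augmentation families), a 5-way episode yields five support/query pairs with matching pseudo-labels 0–4.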
This work removes the requirement of base-class labels and learns generalizable embeddings via Unsupervised Meta-Learning (UML), applying embedding-based classifiers to novel tasks with labeled few-shot examples at meta-test time.
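The embedding-based classification step shared by methods such as ProtoTransfer and UML can be sketched as a nearest-prototype classifier: average each class's support embeddings into a prototype, then assign each query to its closest prototype. A minimal sketch under that reading (function names are ours; real methods typically use a learned encoder and may use cosine rather than Euclidean distance):

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Label each query embedding with its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

Because the classifier has no trainable parameters of its own, it can be dropped onto any novel few-shot task once an embedding function is available.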
It is found that the program synthesis classifier could not solve problems involving shape distances, because it relied on symbolic computation, which scales poorly with input dimension; adding distances to such a computation would increase the dimension combinatorially with the number of shapes in an image.
It is shown that a simple triplet-based loss (Trip) can achieve surprisingly good performance without requiring large batches or asymmetric designs. A simple plug-and-play RandOm MApping (ROMA) strategy is also proposed, which randomly maps samples into other spaces and requires these randomly projected samples to satisfy the same relationships indicated by the triplets.
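The two ingredients named above can be sketched in a few lines. This is our reading of the abstract, not the paper's implementation: a standard hinge-style triplet loss, plus a ROMA-style term that applies one shared random linear map to all three samples and asks the triplet relation to hold in the projected space as well.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge triplet loss: the anchor should be closer to the positive
    than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def roma_triplet_loss(anchor, positive, negative, out_dim, rng, margin=0.5):
    """ROMA-style term (our sketch): project all three samples with the
    same random linear map, then enforce the triplet relation there too."""
    proj = rng.standard_normal((anchor.shape[-1], out_dim)) / np.sqrt(out_dim)
    return triplet_loss(anchor @ proj, positive @ proj, negative @ proj, margin)
```

A satisfied triplet (positive near, negative far) contributes zero loss; swapping positive and negative makes the loss strictly positive, which is what drives the embedding apart.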
This work proposes a novel self-attention-based message-passing contrastive learning approach (coined SAMP-CLR) for U-FSL pre-training, together with an optimal-transport-based fine-tuning strategy (OT) that efficiently induces task awareness into the authors' end-to-end unsupervised few-shot classification framework (SAMPTransfer).
UVStyle-Net is proposed, a style similarity measure for B-Reps that leverages the style signals in the second-order statistics of the activations in a pre-trained (unsupervised) 3D encoder, and learns their relative importance to a subjective end-user through few-shot learning.