In contrast to (supervised) few-shot image classification, unsupervised few-shot image classification has access only to unlabeled data during the pre-training or meta-training stage.
This work proposes an effective unsupervised FSL method that learns representations with self-supervision following the InfoMax principle, and achieves comparable performance on widely used FSL benchmarks without any base-class labels.
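As an illustrative sketch only (not this paper's code), an InfoNCE-style contrastive objective is one common way to realize the InfoMax principle, maximizing a lower bound on the mutual information between two augmented views of the same image; the embedding shapes and temperature below are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss between two batches of augmented-view embeddings.

    z1, z2: (batch, dim) embeddings of two views of the same images.
    Maximizing agreement between matching rows lower-bounds their
    mutual information (the InfoMax idea behind contrastive SSL).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)     # positives lie on the diagonal
```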
This paper proposes to train a more generalizable embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.
It is demonstrated that the self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset and has comparable performance to supervised methods, while requiring orders of magnitude fewer labels.
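For intuition, here is a minimal sketch of the prototypical-classification step such transfer approaches rely on: support embeddings from a (here assumed) self-supervised encoder are averaged into class prototypes, and each query is assigned to the nearest prototype. This is not the ProtoTransfer implementation; the encoder and function names are placeholders.

```python
import torch

def prototypical_predict(encoder, support_x, support_y, query_x, n_way):
    """Classify queries by nearest class prototype in embedding space.

    support_x: (n_support, C, H, W), support_y: (n_support,) with labels in [0, n_way)
    query_x:   (n_query, C, H, W)
    """
    with torch.no_grad():
        s_emb = encoder(support_x)                      # (n_support, dim)
        q_emb = encoder(query_x)                        # (n_query, dim)
    prototypes = torch.stack(
        [s_emb[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                                   # (n_way, dim) class means
    dists = torch.cdist(q_emb, prototypes)              # Euclidean distance to each prototype
    return dists.argmin(dim=1)                          # predicted class per query
```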
This paper develops a novel framework called Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation (ULDA), which pays attention to the distribution diversity inside each constructed pretext few-shot task when using data augmentation.
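A hedged sketch of the general idea of a distribution-shift-based pretext task (not ULDA's exact augmentation recipe): the support and query splits of an unsupervised task are built from the same sampled images but pass through deliberately different augmentation distributions, so the two splits are distribution-shifted. All transform choices and names below are assumptions for illustration.

```python
import torch
from torchvision import transforms

# Two deliberately different augmentation distributions (assumed):
# one for the support split, another for the query split of a pretext task.
support_aug = transforms.Compose([
    transforms.RandomResizedCrop(84),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
query_aug = transforms.Compose([
    transforms.RandomResizedCrop(84),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def make_pretext_task(images, n_way, k_query):
    """Build an unsupervised N-way task: each sampled image acts as its own class.

    images: list of PIL images; returns (support, support_labels, query, query_labels).
    """
    idx = torch.randperm(len(images))[:n_way].tolist()
    support = torch.stack([support_aug(images[i]) for i in idx])                  # 1 shot per class
    query = torch.stack([query_aug(images[i]) for i in idx for _ in range(k_query)])
    support_labels = torch.arange(n_way)
    query_labels = torch.arange(n_way).repeat_interleave(k_query)
    return support, support_labels, query, query_labels
```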
This work rethinks the relations between class concepts and proposes a novel Absolute-relative Learning paradigm to fully exploit label information to refine the image and relation representations in both supervised and unsupervised scenarios.
This paper focuses on unsupervised learning from an abundance of unlabeled data followed by few-shot fine-tuning on a downstream classification task. It extends a recent study on adopting contrastive learning for self-supervised pre-training by incorporating class-level cognizance through iterative clustering and re-ranking, and by expanding the contrastive optimization loss to account for it.
This work addresses the core reason for the lack of a clustering-friendly property in the embedding space by minimizing the inter- to intra-class similarity ratio to provide clustering-friendly embedding features, and validates the approach through comprehensive experiments.
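One plausible way to express such an objective (an illustrative formulation, not the paper's exact loss) is to minimize the ratio of mean inter-class to mean intra-class cosine similarity over a batch; in the unsupervised setting the labels would typically be cluster-derived pseudo-labels.

```python
import torch
import torch.nn.functional as F

def similarity_ratio_loss(embeddings, labels, eps=1e-8):
    """Push mean inter-class cosine similarity down relative to mean
    intra-class similarity, encouraging clustering-friendly embeddings.

    embeddings: (N, dim); labels: (N,) pseudo-labels with at least two
    samples per class. Illustrative formulation only.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()                                    # (N, N) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # True where the pair shares a class
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = sim[same & ~eye].mean()                    # mean within-class similarity
    inter = sim[~same].mean()                          # mean between-class similarity
    return inter / intra.clamp_min(eps)                # minimize the inter / intra ratio
```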
A Multi-level Second-order (MlSo) few-shot learning network for supervised or unsupervised few-shot image classification and few-shot action recognition is proposed, leveraging so-called power-normalized second-order base learner streams combined with features that express multiple levels of visual abstraction, along with self-supervised discriminating mechanisms.
This work removes the requirement of base class labels and learns generalizable embeddings via Unsupervised Meta-Learning (UML), and applies embedding-based classifiers to novel tasks with labeled few-shot examples during meta-test.