3260 papers • 126 benchmarks • 313 datasets
One-shot learning is the task of learning information about object categories from a single training example. (Image credit: Siamese Neural Networks for One-shot Image Recognition)
These leaderboards are used to track progress in One-shot Learning.
Use these libraries to find One-shot Learning models and implementations.
No subtasks available.
This work proposes an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
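The core of MAML is a two-level gradient loop: adapt a copy of the parameters on each task's support set, then update the shared initialization so that the adapted parameters perform well on the query set. Below is a minimal PyTorch sketch, not the paper's implementation: `model.functional_forward(x, params)` is a hypothetical helper that runs the forward pass with an explicit parameter dict, and the `tasks` format and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F


def maml_meta_step(model, meta_opt, tasks, inner_lr=0.01):
    """One MAML meta-update over a batch of tasks (illustrative sketch).

    Each task is a tuple (x_support, y_support, x_query, y_query).
    `model.functional_forward(x, params)` is an assumed helper that runs
    the model's forward pass with an explicit parameter dict.
    """
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        params = dict(model.named_parameters())
        # Inner loop: one gradient step on the support set, keeping the
        # graph so the outer update can differentiate through it.
        loss_s = F.cross_entropy(model.functional_forward(x_s, params), y_s)
        grads = torch.autograd.grad(loss_s, list(params.values()),
                                    create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer objective: query-set loss of the adapted parameters.
        meta_loss = meta_loss + F.cross_entropy(
            model.functional_forward(x_q, adapted), y_q)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```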
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
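The simple design at the heart of Prototypical Networks is to represent each class by the mean of its embedded support examples and to classify queries by a softmax over negative squared Euclidean distances to those prototypes. A minimal sketch, assuming an arbitrary embedding module `encoder` (a name introduced here for illustration):

```python
import torch
import torch.nn.functional as F


def prototypical_loss(encoder, x_support, y_support, x_query, y_query, n_classes):
    """Episode loss in the style of Prototypical Networks (sketch)."""
    z_support = encoder(x_support)            # (n_support, d)
    z_query = encoder(x_query)                # (n_query, d)
    # Class prototypes: mean embedding of each class's support points.
    prototypes = torch.stack([
        z_support[y_support == c].mean(dim=0) for c in range(n_classes)
    ])                                        # (n_classes, d)
    # Classify queries by squared Euclidean distance to the prototypes.
    dists = torch.cdist(z_query, prototypes) ** 2
    log_p = F.log_softmax(-dists, dim=1)
    return F.nll_loss(log_p, y_query)
```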
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
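In essence, Matching Networks embed both the support set and the query and predict the query label as an attention-weighted combination of the support labels, which is why no fine-tuning is needed for new classes. The sketch below makes simplifying assumptions: cosine-similarity attention with a single shared `encoder`, omitting the paper's full-context embeddings.

```python
import torch
import torch.nn.functional as F


def matching_net_predict(encoder, x_support, y_support, x_query, n_classes):
    """Attention-based label prediction over a support set (sketch)."""
    z_s = F.normalize(encoder(x_support), dim=1)   # (n_support, d)
    z_q = F.normalize(encoder(x_query), dim=1)     # (n_query, d)
    # Softmax over cosine similarities between query and support embeddings.
    attention = F.softmax(z_q @ z_s.t(), dim=1)    # (n_query, n_support)
    one_hot = F.one_hot(y_support, n_classes).float()
    # Predicted label distribution: attention-weighted sum of support labels.
    return attention @ one_hot                     # (n_query, n_classes)
```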
This work demonstrates the ability of a memory-augmented neural network to rapidly assimilate new data and to leverage this data to make accurate predictions after only a few samples.
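The rapid binding of new data relies on an external memory that the controller accesses with content-based addressing. A minimal sketch of such a read step is shown below; the function name is illustrative and the paper's least-recently-used-access write rule is omitted.

```python
import torch
import torch.nn.functional as F


def content_read(memory, key):
    """Content-based read from an external memory (sketch).

    `memory` is (slots, width), `key` is (width,). The read weighting is a
    softmax over cosine similarities between the key and each memory slot.
    """
    sims = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (slots,)
    weights = F.softmax(sims, dim=0)
    return weights @ memory   # (width,) read vector
```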
This work trains a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN), and uses this FCN to perform dense pixel-level prediction on a test image for the new semantic class.
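The key idea is that a conditioning branch, given the annotated support image, emits the weights of a classifier that the FCN applies densely to the query image. A sketch under assumptions: `weight_generator` is a hypothetical module returning the weights and bias of a 1x1 convolution, and `backbone` stands in for the FCN feature extractor.

```python
import torch
import torch.nn.functional as F


def one_shot_segmentation(backbone, weight_generator, support_img, support_mask, query_img):
    """Few-shot segmentation via generated classifier weights (sketch)."""
    # Generate 1x1-conv classifier parameters from the annotated support image.
    w, b = weight_generator(support_img, support_mask)   # w: (2, C, 1, 1), b: (2,)
    query_feats = backbone(query_img)                    # (N, C, H, W)
    # Dense foreground/background prediction on the query feature map.
    logits = F.conv2d(query_feats, w, bias=b)            # (N, 2, H, W)
    return F.interpolate(logits, size=query_img.shape[-2:],
                         mode='bilinear', align_corners=False)
```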
This work presents a system that performs lengthy meta-learning on a large dataset of videos, and is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators.
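The few-shot problem is cast as conditional adversarial training: an embedder summarizes a handful of reference frames into a person embedding that conditions a generator, trained against a discriminator. The heavily simplified single-step sketch below uses assumed module names, hinge adversarial losses, and a single reconstruction term; the paper's perceptual and embedding-matching losses are omitted.

```python
import torch
import torch.nn.functional as F


def adversarial_step(embedder, generator, discriminator, g_opt, d_opt,
                     ref_frames, landmarks, target_frame):
    """One adversarial update for few-shot talking-head synthesis (sketch)."""
    e = embedder(ref_frames)                 # person embedding from a few frames
    fake = generator(landmarks, e)

    # Discriminator update: real vs. generated frames (hinge loss).
    d_loss = (F.relu(1.0 - discriminator(target_frame, landmarks)).mean()
              + F.relu(1.0 + discriminator(fake.detach(), landmarks)).mean())
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: fool the discriminator plus a reconstruction term.
    g_loss = -discriminator(fake, landmarks).mean() + F.l1_loss(fake, target_frame)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```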
This work proposes to extend an object recognition system with an attention-based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.
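The redesigned classifier computes logits as scaled cosine similarities between L2-normalized features and classification weight vectors, which lets weights generated for novel classes sit alongside the base-class weights. A minimal sketch of that head; the attention-based weight generator itself is not shown, and `scale` stands in for the learnable temperature of the original formulation.

```python
import torch
import torch.nn.functional as F


def cosine_classifier(features, class_weights, scale=10.0):
    """Cosine-similarity classifier head (sketch)."""
    f = F.normalize(features, dim=1)        # (N, d) unit-norm features
    w = F.normalize(class_weights, dim=1)   # (n_classes, d) unit-norm weights
    return scale * f @ w.t()                # (N, n_classes) scaled cosine logits
```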
Mask R-CNN is extended with a Siamese backbone encoding both the reference image and the scene, allowing the resulting Siamese Mask R-CNN to target detection and segmentation towards the reference category.
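The matching step can be sketched as: encode the scene and the reference crop with the same backbone, pool the reference to a single vector, and combine a per-location similarity signal with the scene features before the region-proposal and mask heads. The sketch below simplifies the pooling, the matching operation, and how the heads consume the result relative to the published architecture.

```python
import torch


def siamese_matching_features(backbone, scene_img, reference_img):
    """Siamese matching between a reference crop and a scene (sketch)."""
    scene_feats = backbone(scene_img)                   # (N, C, H, W)
    ref_feats = backbone(reference_img)                 # (N, C, h, w), shared weights
    ref_vec = ref_feats.mean(dim=(2, 3), keepdim=True)  # (N, C, 1, 1) pooled reference
    # Per-location L1 difference to the reference vector, concatenated with
    # the scene features as input for the downstream detection heads.
    match = (scene_feats - ref_vec).abs()
    return torch.cat([scene_feats, match], dim=1)       # (N, 2C, H, W)
```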