3260 papers • 126 benchmarks • 313 datasets
Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart. It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering. (Image credit: Schroff et al. 2015)
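The idea of pulling similar instances together and pushing dissimilar ones apart is typically implemented with a contrastive objective such as the InfoNCE loss. A minimal NumPy sketch of InfoNCE (function name and shapes are illustrative, not from any specific library):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch serves as a negative."""
    # L2-normalise so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct "class" for anchor i is column i (its matched positive)
    return -np.mean(np.diag(log_probs))
```

When each anchor is already close to its positive and far from the negatives, the loss is near zero; mismatched pairs drive it up, which is exactly the pressure that shapes the representation space described above.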
These leaderboards are used to track progress in Contrastive Learning.
Use these libraries to find Contrastive Learning models and implementations.