Cross-Modal Information Retrieval (CMIR) is the task of finding relevant items across different modalities: for example, given an image, finding a relevant text, or vice versa. The main challenge in CMIR is known as the heterogeneity gap: because items from different modalities have different data types, the similarity between them cannot be measured directly. Therefore, the majority of CMIR methods published to date attempt to bridge this gap by learning a latent representation space in which the similarity between items from different modalities can be measured. Source: Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A Reproducibility Study
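A common way to learn such a latent space is a dual-encoder ("two-tower") model: each modality gets its own projection into a shared embedding space, where cosine similarity is well defined. The sketch below is a minimal PyTorch illustration of this idea, not any specific published method; the class name, feature dimensions, and the use of pre-extracted image/text features are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Projects pre-extracted image and text features into a shared latent space.

    Dimensions are hypothetical (e.g. 2048-d CNN image features,
    768-d transformer text features).
    """
    def __init__(self, image_dim=2048, text_dim=768, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, image_feats, text_feats):
        # L2-normalize so the dot product of two embeddings equals
        # their cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt

# Usage: retrieve the most similar caption for each image in a batch.
model = DualEncoder()
image_feats = torch.randn(4, 2048)       # placeholder image features
text_feats = torch.randn(10, 768)        # placeholder caption features
img, txt = model(image_feats, text_feats)
similarity = img @ txt.T                 # (4, 10) cosine-similarity matrix
best_caption = similarity.argmax(dim=1)  # top-ranked caption per image
```

In practice the two projections are trained jointly, typically with a contrastive objective that pulls matching image-text pairs together and pushes non-matching pairs apart, so that ranking by cosine similarity retrieves relevant cross-modal items.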