In the Partially Relevant Video Retrieval (PRVR) task, an untrimmed video is considered partially relevant to a given textual query if it contains a moment relevant to that query. PRVR aims to retrieve such partially relevant videos from a large collection of untrimmed videos.
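As a rough illustration of this definition, a retrieval system can score an untrimmed video by the best match between the query and any clip of that video, rather than by a single whole-video similarity. The sketch below is a minimal, hypothetical example (not any particular published method): embedding dimensions, clip counts, and function names are placeholders, and the random features stand in for trained text and video encoders.

```python
import torch
import torch.nn.functional as F

def score_video(query_emb: torch.Tensor, clip_embs: torch.Tensor) -> torch.Tensor:
    """query_emb: (d,) text embedding; clip_embs: (n_clips, d) clip embeddings."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(clip_embs, dim=-1)
    sims = c @ q        # cosine similarity per clip, shape (n_clips,)
    return sims.max()   # a video is relevant if its best clip matches the query

def rank_videos(query_emb, video_clip_embs):
    """video_clip_embs: list of (n_clips_i, d) tensors, one per untrimmed video."""
    scores = torch.stack([score_video(query_emb, c) for c in video_clip_embs])
    return torch.argsort(scores, descending=True)  # video indices, best first

# Toy usage with random features standing in for real encoders.
d = 512
query = torch.randn(d)
videos = [torch.randn(n, d) for n in (30, 12, 55)]  # untrimmed videos of varying length
print(rank_videos(query, videos))
```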
Almost all previous text-to-video retrieval works assume that videos are pre-trimmed and of short duration. In practice, however, videos are generally untrimmed and contain much background content. In this work, we investigate the more practical but challenging Partially Relevant Video Retrieval (PRVR) task, which aims to retrieve partially relevant untrimmed videos given a textual query. In particular, we propose to address PRVR from a new perspective: distilling generalization knowledge from a large-scale vision-language pre-trained model and transferring it to a task-specific PRVR network. Specifically, we introduce a Dual Learning framework with Dynamic Knowledge Distillation (DL-DKD), which exploits a large vision-language model as a teacher to guide a student model. During knowledge distillation, an inheritance student branch is devised to absorb knowledge from the teacher model. Since the large model may perform only moderately due to domain gaps, we further develop an exploration student branch to exploit task-specific information. In addition, a dynamic knowledge distillation strategy is devised to adjust the contribution of each student branch during training. Experimental results demonstrate that the proposed model achieves state-of-the-art PRVR performance on the ActivityNet and TVR datasets.
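The abstract above can be read as combining two losses whose balance shifts over training: a distillation term for the inheritance branch and a task-specific retrieval term for the exploration branch. The sketch below is only one plausible instantiation, with made-up loss choices and a simple linear schedule; the actual DL-DKD formulation (losses, schedule, architectures) may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_sims, teacher_sims, temperature=2.0):
    """Inheritance branch: match the teacher's query-video similarity distribution."""
    log_p_student = F.log_softmax(student_sims / temperature, dim=-1)
    p_teacher = F.softmax(teacher_sims / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def retrieval_loss(student_sims, positive_idx):
    """Exploration branch: task-specific objective on ground-truth text-video pairs."""
    return F.cross_entropy(student_sims, positive_idx)

def dual_branch_loss(inherit_sims, explore_sims, teacher_sims, positive_idx,
                     epoch, total_epochs):
    # Dynamic weighting: lean on the teacher early, shift toward task-specific
    # supervision later (one possible reading of a "dynamic" schedule).
    alpha = 1.0 - epoch / total_epochs
    return (alpha * distillation_loss(inherit_sims, teacher_sims)
            + (1.0 - alpha) * retrieval_loss(explore_sims, positive_idx))

# Toy usage: batch of 8 queries vs. 8 videos, matching pairs on the diagonal.
B = 8
inherit, explore, teacher = torch.randn(B, B), torch.randn(B, B), torch.randn(B, B)
loss = dual_branch_loss(inherit, explore, teacher, torch.arange(B),
                        epoch=3, total_epochs=30)
print(loss)
```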
This paper proposes GMMFormer, a Gaussian-Mixture-Model based Transformer that models clip representations implicitly and incorporates Gaussian-Mixture-Model constraints to focus each frame on its adjacent frames instead of the whole video.
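One way to realize such a constraint is to add a Gaussian bias, centered on each frame's own position, to the attention logits so that attention mass concentrates on neighboring frames. The sketch below illustrates only this general mechanism with a single fixed variance; GMMFormer's actual blocks (multiple Gaussian variances, their aggregation, and the surrounding architecture) are more involved.

```python
import torch
import torch.nn.functional as F

def gaussian_attention(x: torch.Tensor, sigma: float = 3.0) -> torch.Tensor:
    """x: (n_frames, d) frame features; returns locally aggregated features."""
    n, d = x.shape
    pos = torch.arange(n, dtype=torch.float32)
    # Log of a Gaussian over |i - j|, used as an additive attention bias.
    bias = -(pos[:, None] - pos[None, :]) ** 2 / (2 * sigma ** 2)
    logits = (x @ x.T) / d ** 0.5 + bias
    attn = F.softmax(logits, dim=-1)  # each frame focuses on its adjacent frames
    return attn @ x

frames = torch.randn(20, 64)          # 20 frames, 64-dim placeholder features
print(gaussian_attention(frames).shape)  # torch.Size([20, 64])
```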