3260 papers • 126 benchmarks • 313 datasets
This work conducts the first study of enhancing multiple-representation dense retrieval with pseudo-relevance feedback, extracting representative feedback embeddings that are shown to improve the effectiveness of both a reranking stage and an additional dense retrieval operation.
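The feedback-embedding idea can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function name, the simple top-k centroid, and the mixing weight `beta` are all assumptions.

```python
import numpy as np

def prf_enrich_query(query_emb, doc_embs, top_k=3, beta=0.5):
    """Hypothetical pseudo-relevance feedback for dense retrieval:
    retrieve with the original query embedding, take the centroid of
    the top-k retrieved (pseudo-relevant) document embeddings as a
    representative feedback embedding, and mix it into the query."""
    scores = doc_embs @ query_emb                    # dot-product first-pass retrieval
    top = np.argsort(-scores)[:top_k]                # indices of pseudo-relevant docs
    feedback = doc_embs[top].mean(axis=0)            # representative feedback embedding
    return (1 - beta) * query_emb + beta * feedback  # enriched query for a second pass

# Toy usage with random embeddings
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 8))
q = rng.normal(size=8)
q_prf = prf_enrich_query(q, docs)
```

The enriched query can then drive either a reranking of the first-pass results or a fresh dense retrieval operation, mirroring the two uses studied in the paper.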
This work proposes a different approach, called RepBERT, to represent documents and queries with fixed-length contextualized embeddings, which achieves state-of-the-art results among all initial retrieval techniques.
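Because RepBERT represents both queries and documents as single fixed-length vectors, ranking reduces to an inner product. The sketch below illustrates that pipeline only; the deterministic bag-of-words encoder is a stand-in assumption for the actual BERT encoder.

```python
import numpy as np

def _bucket(tok, dim):
    # Deterministic token hashing (stand-in for learned contextualization)
    return sum(ord(c) for c in tok) % dim

def encode(texts, dim=4):
    """Map each text to one fixed-length vector. RepBERT uses BERT for
    this step; here a hashing-trick bag of words keeps the sketch runnable."""
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            out[i, _bucket(tok, dim)] += 1.0
    return out

def rank(query, passages):
    q = encode([query])[0]
    p = encode(passages)
    scores = p @ q                       # inner product of fixed-length embeddings
    return [passages[i] for i in np.argsort(-scores)]
```

The key property is that document vectors can be precomputed and indexed, so first-stage retrieval costs one encoding of the query plus similarity search.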
Surprisingly, it is found that the choice of target tokens impacts effectiveness, even for words that are closely related semantically, which sheds some light on why the sequence-to-sequence formulation for document ranking is effective.
The Dense Retriever (DR) and BERT re-ranker can become robust to typos in queries, resulting in significantly improved effectiveness compared to models trained without appropriately accounting for typos.
This work proposes LTRGR, a learning-to-rank framework that enables generative retrieval to learn to rank passages directly, optimizing the autoregressive model toward the final passage-ranking target via a rank loss.
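A rank loss of this kind is typically a pairwise margin objective over model scores. The sketch below shows the standard margin form as an assumption; LTRGR's exact loss may differ in its margin value and score definition.

```python
import numpy as np

def margin_rank_loss(pos_scores, neg_scores, margin=1.0):
    """Pairwise margin rank loss: pushes the model's score for each
    relevant passage above the paired negative's score by at least
    `margin`; pairs already separated by the margin contribute zero."""
    return np.maximum(0.0, margin - (pos_scores - neg_scores)).mean()
```

In a generative-retrieval setting, `pos_scores` and `neg_scores` would be the autoregressive model's (log-)likelihoods of generating the positive and negative passage identifiers, so minimizing this loss trains the generator to rank rather than merely to generate.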
Several small modifications to Duet, a deep neural ranking model, are proposed and evaluated on the MS MARCO passage ranking task; an ablation study shows that the proposed changes yield significant improvements.
This paper proposes a simple unsupervised method for conversational passage ranking that formulates the passage score for a query as a combination of similarity and coherence, building a word-proximity network from a large corpus.
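The score combination can be sketched as below. The convex mixing weight `alpha`, the function names, and the edge-averaging coherence estimate are illustrative assumptions; the paper's word-proximity network is only mimicked here by a dictionary of pairwise edge weights.

```python
import itertools

def coherence(tokens, proximity):
    """Estimate passage coherence as the average word-proximity-network
    edge weight over all token pairs in the passage (0 if no pairs)."""
    pairs = list(itertools.combinations(sorted(set(tokens)), 2))
    if not pairs:
        return 0.0
    return sum(proximity.get(p, 0.0) for p in pairs) / len(pairs)

def passage_score(similarity, tokens, proximity, alpha=0.7):
    """Unsupervised passage score: a convex combination of query-passage
    similarity and internal coherence (alpha is a hypothetical weight)."""
    return alpha * similarity + (1 - alpha) * coherence(tokens, proximity)
```

Because both components are computed from a static corpus-derived network and a similarity function, no relevance labels are needed, which is what makes the method unsupervised.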
This work shows that the generalized model significantly outperforms several state-of-the-art baselines for healthcare passage ranking and is able to adapt to heterogeneous domains without additional fine-tuning.
This work demonstrates CROWN (Conversational passage ranking by Reasoning Over Word Networks): an unsupervised yet effective system for conversational QA with passage responses, that supports several modes of context propagation over multiple turns.