3260 papers • 126 benchmarks • 313 datasets
Passage re-ranking is the task of scoring and re-ranking a collection of retrieved documents based on an input query.
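The task above can be sketched in a few lines: score each retrieved passage against the query, then sort by score. The term-overlap scorer here is only an illustrative stand-in for a learned model such as a BERT cross-encoder.

```python
# Minimal passage re-ranking sketch: a scoring function plus a sort.
# The scorer is a toy lexical-overlap model, not a trained ranker.

def score(query: str, passage: str) -> float:
    """Fraction of query terms that also appear in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms) if q_terms else 0.0

def rerank(query: str, passages: list[str]) -> list[str]:
    """Return the retrieved passages sorted by descending score."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)
```

In a real system the first-stage retriever (e.g., BM25) supplies the candidate passages, and only this smaller candidate set is re-scored by the more expensive model.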
These leaderboards are used to track progress in Passage Re-Ranking.
Use these libraries to find Passage Re-Ranking models and implementations.
No subtasks available.
A simple re-implementation of BERT for query-based passage re-ranking, evaluated on the TREC-CAR dataset, which became the top entry on the MS MARCO passage retrieval leaderboard, outperforming the previous state of the art by 27% in MRR@10.
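MRR@10, the metric cited above, is the mean over queries of the reciprocal rank of the first relevant passage, counting only the top 10 results (a query contributes 0 if no relevant passage appears there). A small sketch:

```python
# MRR@10: mean reciprocal rank of the first relevant result,
# truncated at rank 10 (as used for MS MARCO passage ranking).

def mrr_at_10(rankings: list[list[int]]) -> float:
    """rankings: per query, binary relevance labels in ranked order."""
    total = 0.0
    for labels in rankings:
        for rank, rel in enumerate(labels[:10], start=1):
            if rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(rankings)
```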
Dense retrievers (DR) and BERT re-rankers can be made robust to typos in queries, yielding significantly better effectiveness than models trained without accounting for typos.
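A common way to train for typo robustness (illustrative here, not necessarily the paper's exact procedure) is to augment training queries with synthetic typos, e.g., random character deletions or adjacent swaps:

```python
import random

# Toy typo augmentation for typo-robust ranker training: perturb a
# query with one random character deletion or adjacent-character swap.
# This is an illustrative sketch, not the paper's exact setup.

def add_typo(query: str, rng: random.Random) -> str:
    if len(query) < 2:
        return query
    i = rng.randrange(len(query) - 1)
    if rng.choice(["delete", "swap"]) == "delete":
        return query[:i] + query[i + 1:]
    # swap the characters at positions i and i+1
    return query[:i] + query[i + 1] + query[i] + query[i + 2:]
```

During training, each clean query can be paired with one or more perturbed variants so the model learns to score them similarly.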
A simple method is proposed that predicts which queries will be issued for a given document and then expands the document with those predictions, using a vanilla sequence-to-sequence model trained on pairs of queries and their relevant documents.
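The expansion step itself is simple: append the predicted queries to the document before indexing, so their terms become matchable by a standard term-based retriever. In this sketch, `predicted_queries` stands in for the seq2seq model's output:

```python
# doc2query-style expansion sketch: append model-predicted queries to
# the document text prior to indexing. `predicted_queries` is assumed
# to come from a trained sequence-to-sequence model.

def expand_document(doc: str, predicted_queries: list[str]) -> str:
    """Return the document text with predicted queries appended."""
    return doc + " " + " ".join(predicted_queries)
```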
We study few-shot reranking for multi-hop QA (MQA) with open-domain questions. To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on language model prompting for multi-hop path reranking. PromptRank first constructs an instruction-based prompt that includes a candidate document path and then computes the relevance score between a given question and the path based on the conditional likelihood of the question given the path prompt according to a language model. PromptRank yields strong retrieval performance on HotpotQA with only 128 training examples compared to state-of-the-art methods trained on thousands of examples — 73.6 recall@10 by PromptRank vs. 77.8 by PathRetriever and 77.5 by multi-hop dense retrieval.
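The PromptRank scoring idea can be sketched as: wrap a candidate document path in an instruction-style prompt, then rank paths by the conditional log-likelihood of the question given that prompt. The prompt wording and the smoothed unigram "language model" below are toy assumptions standing in for the paper's actual prompt and LM:

```python
import math

# PromptRank-style sketch: build a prompt around a document path and
# score the path by log P(question | prompt). The unigram model with
# add-one smoothing is a toy stand-in for a real language model.

def build_prompt(path: list[str]) -> str:
    docs = " ".join(path)  # hypothetical prompt template
    return f"Read the following documents and ask a question. {docs} Question:"

def question_loglik(question: str, prompt: str) -> float:
    words = prompt.lower().split()
    counts = {w: words.count(w) for w in set(words)}
    n = len(words)
    return sum(
        math.log((counts.get(w, 0) + 1) / (n + 1))  # add-one smoothing
        for w in question.lower().split()
    )

def rank_paths(question: str, paths: list[list[str]]) -> list[list[str]]:
    """Order candidate paths by descending question likelihood."""
    return sorted(paths, key=lambda p: question_loglik(question, build_prompt(p)), reverse=True)
```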
It is demonstrated that by mitigating the position bias, Transformer-based re-ranking models are equally effective on a biased and debiased dataset, as well as more effective in a transfer-learning setting between two differently biased datasets.
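One simple mitigation for position bias (illustrative, not necessarily the paper's exact debiasing procedure) is to shuffle the candidate order during training, so the re-ranker cannot exploit the position the first-stage retriever assigned:

```python
import random

# Illustrative position-bias mitigation: present candidates to the
# re-ranker in a random order during training (a sketch, not the
# paper's exact method).

def debias_order(passages: list[str], seed: int) -> list[str]:
    """Return a shuffled copy of the candidate list; input unchanged."""
    shuffled = passages[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled
```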
Several small modifications to Duet, a deep neural ranking model, are proposed and evaluated on the MS MARCO passage ranking task; an ablation study shows significant improvements from the proposed changes.
Compared to pointwise models, the Set-Encoder is particularly effective when inter-passage information, such as novelty, matters, and it retains its advantageous properties compared to other listwise models.
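To see why inter-passage information like novelty matters, consider Maximal Marginal Relevance, a classic (non-neural) stand-in for the idea: greedily pick the passage that balances relevance against similarity to passages already selected. This is not the Set-Encoder itself, only an illustration of novelty-aware ranking:

```python
# MMR sketch: select passages one at a time, trading off relevance
# against redundancy with already-selected passages. Illustrates the
# inter-passage (novelty) signal that pointwise models cannot use.

def mmr(relevance: dict[str, float], sim, candidates: list[str], lam: float = 0.7) -> list[str]:
    """sim(a, b) -> similarity in [0, 1]; lam trades relevance vs. novelty."""
    selected: list[str] = []
    remaining = candidates[:]
    while remaining:
        best = max(
            remaining,
            key=lambda p: lam * relevance[p]
            - (1 - lam) * max((sim(p, s) for s in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```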
This work introduces and formalizes the paradigm of deep generative retrieval models defined via the cumulative probabilities of generating query terms, and introduces a novel generative ranker (T-PGN), which combines the encoding capacity of Transformers with the Pointer Generator Network model.
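The core scoring idea, ranking a document by the probability of generating the query's terms from it, can be sketched with a smoothed unigram document model. This toy model stands in for the Transformer/Pointer Generator Network that T-PGN actually uses:

```python
import math

# Generative ranking sketch: score a document by the cumulative log
# probability of generating each query term from it. The add-one
# smoothed unigram model below is a toy stand-in for T-PGN.

def generation_score(query: str, doc: str) -> float:
    terms = doc.lower().split()
    counts = {t: terms.count(t) for t in set(terms)}
    n = len(terms)
    return sum(
        math.log((counts.get(q, 0) + 1) / (n + 1))  # add-one smoothing
        for q in query.lower().split()
    )
```

Documents whose model assigns the query terms higher probability rank higher, which is the query-likelihood view this paradigm generalizes.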