3260 papers • 126 benchmarks • 313 datasets
Sort documents according to some criterion so that the "best" results appear early in the result list displayed to the user (Source: Wikipedia).
(Image credit: Papersgraph)
These leaderboards are used to track progress in Document Ranking
Use these libraries to find Document Ranking models and implementations
XLNet is proposed: a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order, and that overcomes the limitations of BERT thanks to its autoregressive formulation.
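For reference, the permutation objective described above can be written compactly; this is a sketch following the notation of the XLNet paper, where $\mathcal{Z}_T$ denotes the set of all permutations of a length-$T$ index sequence:

$$\max_{\theta} \;\; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}\left[\, \sum_{t=1}^{T} \log p_{\theta}\!\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]$$

Because the expectation ranges over factorization orders, each token learns to condition on every other token, yielding bidirectional context without BERT's independence assumption on masked tokens.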
ColBERT is presented, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval; it is competitive with existing BERT-based models (and outperforms every non-BERT baseline) while enabling vector-similarity indexes to be leveraged for end-to-end retrieval directly from millions of documents.
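The scoring side of ColBERT's late interaction is simple to sketch: each query token embedding is matched to its most similar document token embedding, and these per-token maxima are summed. A minimal NumPy sketch, assuming the token embeddings have already been produced by the encoder and L2-normalized:

```python
import numpy as np

def maxsim_score(q_emb: np.ndarray, d_emb: np.ndarray) -> float:
    """ColBERT-style MaxSim: sum over query tokens of the best
    cosine match among the document's token embeddings.

    q_emb: (num_query_tokens, dim), rows L2-normalized
    d_emb: (num_doc_tokens, dim), rows L2-normalized
    """
    sim = q_emb @ d_emb.T                 # all pairwise cosine similarities
    return float(sim.max(axis=1).sum())   # best match per query token, summed
```

Because documents are encoded independently of the query, their token embeddings can be pre-indexed in a vector-similarity index, which is what enables end-to-end retrieval from millions of documents.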
This work investigates how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking, proposes a joint approach that incorporates BERT's classification vector into existing neural models, and shows that it outperforms state-of-the-art ad-hoc ranking baselines.
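One way to picture "incorporating BERT's classification vector into an existing neural model" is a learned combination of the two signals. A minimal PyTorch sketch with hypothetical dimensions; the paper's exact wiring may differ:

```python
import torch
import torch.nn as nn

class JointRanker(nn.Module):
    """Scores a query-document pair from BERT's [CLS] vector plus the
    feature vector of an existing neural ranking model (hypothetical dims)."""

    def __init__(self, cls_dim: int = 768, ranker_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(cls_dim + ranker_dim, 1)

    def forward(self, cls_vec: torch.Tensor, ranker_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate the contextualized [CLS] representation with the
        # existing model's features, then map to a scalar relevance score.
        return self.score(torch.cat([cls_vec, ranker_feats], dim=-1))
```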
A series of new latent semantic models with a deep structure is developed; these project queries and documents into a common low-dimensional space, where the relevance of a document given a query is readily computed as the distance between them.
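Once queries and documents live in the same low-dimensional space, relevance reduces to a vector similarity; DSSM-style models typically use cosine similarity. A minimal sketch, assuming the deep projections have already been computed:

```python
import numpy as np

def relevance(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    """Cosine similarity between projected query and document vectors
    (higher means more relevant)."""
    denom = np.linalg.norm(query_vec) * np.linalg.norm(doc_vec) + 1e-12
    return float(query_vec @ doc_vec / denom)

# Ranking is then a sort by similarity:
# ranked = sorted(doc_vecs, key=lambda d: relevance(q_vec, d), reverse=True)
```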
A two-level hierarchical recurrent neural network is introduced to learn search-context representations of individual queries, search tasks, and the corresponding dependency structure, by jointly optimizing two companion retrieval tasks: document ranking and query suggestion.
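A common way to jointly optimize two companion tasks is a weighted sum of their losses over a shared encoder; as a sketch (the weighting scheme is an assumption, not necessarily the paper's):

$$\mathcal{L} = \mathcal{L}_{\text{rank}} + \lambda\, \mathcal{L}_{\text{suggest}}$$

where $\mathcal{L}_{\text{rank}}$ scores document ranking, $\mathcal{L}_{\text{suggest}}$ scores query suggestion, and both losses backpropagate into the same hierarchical session encoder.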
This dissertation aims to provide a history of web exceptionalism from 1989 to 2002, a period chosen in order to explore its roots as well as specific cases up to and including the year in which descriptions of “Web 2.0” began to circulate.
Evaluation on the MS MARCO document re-ranking task confirms the effectiveness of the proposed simplifications of TinyBERT; the work also investigates applications of knowledge distillation models to the document ranking task.
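Knowledge distillation for reranking usually trains the small model to match the large model's score distribution over a candidate list. A minimal sketch (the temperature and KL form are common choices, not necessarily the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_scores: torch.Tensor,
                 teacher_scores: torch.Tensor,
                 temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student score
    distributions over the same candidate documents."""
    t_probs = F.softmax(teacher_scores / temperature, dim=-1)
    s_logp = F.log_softmax(student_scores / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(s_logp, t_probs, reduction="batchmean") * temperature ** 2
```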
A design pattern for tackling text ranking problems, dubbed "Expando-Mono-Duo", is presented; it has been empirically validated for a number of ad hoc retrieval tasks in different domains, and implementations of the design are open-sourced in the Pyserini IR toolkit and the PyGaggle neural reranking library (a sketch of the Mono/Duo cascade follows the next entry).
This work proposes two variants of BERT, called monoBERT and duoBERT, that formulate the ranking problem as pointwise and pairwise classification, respectively, arranged in a multi-stage ranking architecture to form an end-to-end search system.
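The Mono and Duo stages compose naturally into a cascade: a pointwise scorer prunes the candidate list, and a pairwise scorer refines the order of the survivors. A minimal sketch; mono_score and duo_prefers are hypothetical stand-ins for monoBERT and duoBERT forward passes, and summing pairwise preferences is one of several aggregation options:

```python
from typing import Callable, List

def rerank(query: str,
           candidates: List[str],
           mono_score: Callable[[str, str], float],
           duo_prefers: Callable[[str, str, str], float],
           k: int = 10) -> List[str]:
    """Multi-stage reranking: pointwise pruning, then pairwise reordering."""
    # Stage 1 (pointwise): keep the top-k candidates by per-document score.
    top_k = sorted(candidates, key=lambda d: mono_score(query, d),
                   reverse=True)[:k]
    # Stage 2 (pairwise): score each survivor by how strongly it is
    # preferred over the others, then sort by that aggregate.
    agg = [sum(duo_prefers(query, di, dj)
               for j, dj in enumerate(top_k) if j != i)
           for i, di in enumerate(top_k)]
    order = sorted(range(len(top_k)), key=agg.__getitem__, reverse=True)
    return [top_k[i] for i in order]
```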
A unified framework is proposed that takes advantage of both schools of thinking in information retrieval modelling; it shows that the generative model learns to fit the relevance distribution over documents via signals from the discriminative model, achieving a better estimate for document ranking.
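Schematically, the interplay described above is the familiar GAN minimax game adapted to retrieval; as a sketch in IRGAN-style notation, with a generative retrieval model $p_{\theta}(d \mid q)$ and a discriminator $D_{\phi}$:

$$\min_{\theta} \max_{\phi} \; \sum_{n=1}^{N} \left( \mathbb{E}_{d \sim p_{\text{true}}(d \mid q_n)}\big[\log D_{\phi}(d \mid q_n)\big] + \mathbb{E}_{d \sim p_{\theta}(d \mid q_n)}\big[\log\big(1 - D_{\phi}(d \mid q_n)\big)\big] \right)$$

The discriminator's feedback on generated documents is exactly the signal that lets the generator fit the underlying relevance distribution.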