3260 papers • 126 benchmarks • 313 datasets
Learning to rank is the application of machine learning to build ranking models. Common use cases for ranking models include information retrieval (e.g., web search) and news feed applications (e.g., Twitter, Facebook, Instagram).
(Image credit: Papersgraph)
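For readers new to the task, a minimal pairwise sketch can make the idea concrete: ranking is reduced to classifying which of two documents should be ordered first, and the learned weights then score individual documents. The toy data, feature dimensions, and the use of scikit-learn's LogisticRegression below are illustrative assumptions, not any particular paper's method.

```python
# Minimal pairwise learning-to-rank sketch (illustrative only).
# Assumes per-query feature vectors X and graded relevance labels y; ranking is
# turned into binary classification on feature differences of document pairs.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def make_pairs(X, y):
    """Build (x_i - x_j, label) training pairs from one query's documents."""
    diffs, signs = [], []
    for i, j in combinations(range(len(y)), 2):
        if y[i] == y[j]:
            continue  # skip ties; only ordered pairs carry ranking signal
        diffs.append(X[i] - X[j])
        signs.append(1 if y[i] > y[j] else 0)
    return np.array(diffs), np.array(signs)

# Toy data: 6 documents for one query, 4 features, graded relevance in {0, 1, 2}.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = np.array([2, 1, 0, 1, 0, 2])

X_pairs, s_pairs = make_pairs(X, y)
model = LogisticRegression().fit(X_pairs, s_pairs)

# The learned weight vector scores individual documents; sorting by score ranks them.
scores = X @ model.coef_.ravel()
print(np.argsort(-scores))  # document indices from most to least relevant
```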
This work proposes an improved framework, DCN-V2, which is simple, can be easily adopted as a building block, and has delivered significant gains in offline accuracy and online business metrics across many web-scale learning-to-rank systems at Google.
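A minimal NumPy sketch of the cross-layer update used in DCN-style models may help illustrate the "building block" nature of the approach; the layer sizes, initialization, and stacking depth below are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch of a DCN-V2-style cross layer: x_{l+1} = x0 * (W @ xl + b) + xl,
# where * is an elementwise product with the original input and + xl is a residual.
import numpy as np

def cross_layer(x0, xl, W, b):
    return x0 * (W @ xl + b) + xl  # explicit feature interactions + residual

d = 8
rng = np.random.default_rng(1)
x0 = rng.normal(size=d)   # embedded input of the ranking model (assumed)
x = x0
for _ in range(3):        # stack a few cross layers
    W = rng.normal(scale=0.1, size=(d, d))
    b = np.zeros(d)
    x = cross_layer(x0, x, W, b)
print(x.shape)  # (8,) -- in the full model this feeds a deep network / ranking head
```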
A novel loss function for pairwise ranking is proposed, which is smooth everywhere, and a label decision module is incorporated into the model, estimating the optimal confidence thresholds for each visual concept.
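As an illustration of a pairwise surrogate that is smooth everywhere, the sketch below uses the common logistic (softplus) form; it is not the paper's exact loss, and the label-decision thresholding module is not reproduced here.

```python
# A smooth-everywhere pairwise ranking surrogate: log(1 + exp(-(s_pos - s_neg))),
# averaged over all positive/negative pairs of per-concept scores.
import numpy as np

def smooth_pairwise_loss(pos_scores, neg_scores):
    margins = pos_scores[:, None] - neg_scores[None, :]
    return np.mean(np.logaddexp(0.0, -margins))  # numerically stable softplus

pos = np.array([2.0, 1.2])   # scores of concepts present in the image (assumed)
neg = np.array([0.3, -0.5])  # scores of absent concepts (assumed)
print(smooth_pairwise_loss(pos, neg))
```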
This paper uses simulated nonlinear patterns, a real-world sushi learning-to-rank data set, and a chess data set to show that the proposed SVMcompare algorithm outperforms SVMrank when there are equality pairs.
A novel end-to-end neural architecture for ranking candidate answers is proposed, adapting a hierarchical recurrent neural network and a latent topic clustering module; it shows state-of-the-art results for ranking question-answer pairs.
A novel latent vector space model is proposed that jointly learns latent representations of words and e-commerce products, and a mapping between the two, without the need for explicit annotations; it achieves its enhanced performance by learning better product representations.
This work analyzes how information propagates among different information sources in a gradient-descent learning paradigm, and proposes an extendable version of the JRL framework (eJRL) that can be rigorously extended to new information sources, avoiding model re-training in practice.
This work proposes a new framework for multivariate scoring functions, in which the relevance score of a document is determined jointly by multiple documents in the list, and refers to this framework as groupwise scoring functions (GSFs).
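A rough sketch of the groupwise idea follows, assuming a tiny shared network and exhaustive groups of size two; the actual framework samples groups and uses deeper networks, so the shapes and averaging scheme here are illustrative assumptions.

```python
# Groupwise scoring sketch: each document's score is produced jointly from a
# small group of documents rather than in isolation, then averaged across groups.
import numpy as np
from itertools import permutations

def gsf_scores(X, W1, b1, W2, b2, group_size=2):
    """Score each document by averaging its scores over all groups it appears in."""
    n, _ = X.shape
    totals, counts = np.zeros(n), np.zeros(n)
    for group in permutations(range(n), group_size):
        g = np.concatenate([X[i] for i in group])  # joint representation of the group
        h = np.tanh(W1 @ g + b1)                   # shared hidden layer
        s = W2 @ h + b2                            # one score per position in the group
        for pos, doc in enumerate(group):
            totals[doc] += s[pos]
            counts[doc] += 1
    return totals / counts

rng = np.random.default_rng(2)
n_docs, n_feat, group_size, hidden = 4, 5, 2, 8
X = rng.normal(size=(n_docs, n_feat))
W1, b1 = rng.normal(size=(hidden, group_size * n_feat)), np.zeros(hidden)
W2, b2 = rng.normal(size=(group_size, hidden)), np.zeros(group_size)
print(np.argsort(-gsf_scores(X, W1, b1, W2, b2, group_size)))  # ranked document indices
```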
The results show that networks trained to regress to the ground-truth targets for labeled data while simultaneously learning to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA (image quality assessment) and crowd counting.
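The kind of combined objective this implies can be sketched as a squared regression loss on labeled samples plus a margin ranking loss on unlabeled pairs whose relative order is known (e.g., an image versus a more distorted or cropped version of itself); the weighting and margin below are illustrative assumptions.

```python
# Combined semi-supervised objective sketch: supervised regression + pairwise ranking.
import numpy as np

def combined_loss(pred_labeled, targets, pred_hi, pred_lo, lam=1.0, margin=0.0):
    reg = np.mean((pred_labeled - targets) ** 2)                    # labeled data
    rank = np.mean(np.maximum(0.0, margin - (pred_hi - pred_lo)))   # ordered unlabeled pairs
    return reg + lam * rank

# pred_hi should score higher than pred_lo for each unlabeled pair.
print(combined_loss(np.array([3.1, 4.8]), np.array([3.0, 5.0]),
                    np.array([2.0, 1.5]), np.array([1.0, 1.8])))
```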
The results show that the choice between the methodologies is consequential, depending on the presence of selection bias and the degree of position bias and interaction noise; in some circumstances counterfactual methods obtain the highest ranking performance, while in others their optimization can be detrimental to the user experience.