3260 papers • 126 benchmarks • 313 datasets
Conversational response selection refers to the task of identifying the most relevant response to a given input utterance from a pool of candidate responses.
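At its core, the task is a ranking problem: score each candidate against the context and pick the highest-scoring one. A minimal sketch, using a toy bag-of-words "encoder" and cosine similarity purely for illustration (real systems use neural encoders):

```python
import math

def encode(text, vocab):
    # Map a sentence to a count vector over a fixed toy vocabulary.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_response(context, candidates, vocab):
    # Score every candidate against the context; return the best-scoring one.
    ctx = encode(context, vocab)
    return max(candidates, key=lambda c: cosine(ctx, encode(c, vocab)))

vocab = ["how", "install", "ubuntu", "weather", "sunny", "package", "apt"]
context = "how do I install a package on ubuntu"
candidates = ["try apt install for the package", "the weather is sunny"]
print(select_response(context, candidates, vocab))
# → try apt install for the package
```

Swapping the toy encoder for a learned sentence encoder, and the cosine score for a trained matching function, recovers the setup the papers below study.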
(Image credit: Papersgraph)
These leaderboards are used to track progress in Conversational Response Selection.
Use these libraries to find Conversational Response Selection models and implementations.
No subtasks available.
A new language representation model, BERT, is introduced; it pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
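The "one additional output layer" idea can be sketched as a single linear layer over the encoder's pooled [CLS] vector, turning the pretrained model into a (context, response) match classifier. The random vector below is a placeholder for a real BERT pooled output (an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
hidden = 768                       # BERT-base hidden size

pooled = rng.normal(size=hidden)   # stand-in for BERT's pooled [CLS] output
W = rng.normal(size=(2, hidden)) * 0.02  # the new output layer: 2 classes
b = np.zeros(2)                          # (no-match / match)

logits = W @ pooled + b
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the 2 classes
print(probs)
```

During fine-tuning, `W` and `b` are trained jointly with the encoder on labeled (context, response) pairs.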
A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.
The Ubuntu Dialogue Corpus is introduced, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words, that provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data.
It is found that transfer learning using sentence embeddings tends to outperform word level transfer with surprisingly good performance with minimal amounts of supervised training data for a transfer task.
This work develops a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features and achieves state-of-the-art results on three existing tasks.
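The Poly-encoder's global features can be sketched as follows: m learned "codes" attend over the context token vectors to produce m global context features, and the candidate embedding then attends over those features to form the final context vector. Shapes follow the paper; the random vectors here are placeholders for real encoder outputs (an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_tokens, codes, cand_emb):
    # ctx_tokens: (T, d) token embeddings; codes: (m, d); cand_emb: (d,)
    attn = softmax(codes @ ctx_tokens.T)   # (m, T): each code attends to tokens
    global_feats = attn @ ctx_tokens       # (m, d): m global context features
    w = softmax(global_feats @ cand_emb)   # (m,): candidate attends to features
    ctx_emb = w @ global_feats             # (d,): final context vector
    return float(ctx_emb @ cand_emb)       # dot-product matching score

d, T, m = 8, 5, 3
ctx = rng.normal(size=(T, d))
codes = rng.normal(size=(m, d))
cand = rng.normal(size=d)
print(poly_encoder_score(ctx, codes, cand))
```

Because the candidate only interacts with m precomputable context features, scoring large candidate pools stays cheap relative to full cross-attention.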
ConveRT (Conversational Representations from Transformers) is proposed: a pretraining framework for conversational tasks that is effective, affordable, and quick to train, and that promises wider portability and scalability for Conversational AI applications.
This work collects data and trains models to condition on their given profile information and on information about the person they are talking to, resulting in improved dialogues as measured by next-utterance prediction.
This paper investigates a sequential matching model based only on the chain sequence for multi-turn response selection; it outperforms all previous models, including state-of-the-art hierarchy-based models, and achieves new state-of-the-art performance on two large-scale public multi-turn response selection benchmark datasets.
A repository of conversational datasets comprising hundreds of millions of examples, together with a standardised evaluation procedure for conversational response selection models using '1-of-100 accuracy', is presented.
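The 1-of-100 accuracy metric is simple to compute: for each context, the model scores the true response against 99 distractors, and the metric is the fraction of contexts where the true response ranks first. A minimal sketch, assuming by convention that index 0 holds the true response:

```python
def one_of_100_accuracy(score_batches):
    # score_batches: list of 100-element score lists per context;
    # index 0 is assumed to be the true response (a convention of this sketch).
    hits = sum(1 for scores in score_batches
               if max(range(len(scores)), key=scores.__getitem__) == 0)
    return hits / len(score_batches)

batches = [
    [0.9] + [0.1] * 99,           # true response scored highest -> hit
    [0.2] + [0.1] * 98 + [0.8],   # a distractor wins -> miss
]
print(one_of_100_accuracy(batches))  # → 0.5
```

This is equivalent to Recall@1 with 100 candidates, which makes results comparable across models that use very different scoring functions.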