3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in End-to-End Dialogue Modelling.
Use these libraries to find End-to-End Dialogue Modelling models and implementations.
A Multi-Action Data Augmentation (MADA) framework that exploits the one-to-many property of dialogue to generate diverse yet appropriate responses; it consistently improves dialog policy diversity, resulting in more diverse and appropriate responses.
This study presents PPTOD, a unified plug-and-play model for task-oriented dialogue, and introduces a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora.
SimpleTOD is a simple approach to task-oriented dialogue that uses a single causal language model trained on all sub-tasks recast as a single sequence prediction problem, which allows it to fully leverage transfer learning from pre-trained, open domain, causal language models such as GPT-2.
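The key idea in the SimpleTOD-style approach above is flattening every sub-task (belief tracking, action prediction, response generation) into one left-to-right token sequence that a causal language model like GPT-2 can learn. A minimal sketch of that recasting follows; the delimiter tokens and field layout here are illustrative assumptions, not the paper's exact vocabulary.

```python
# Sketch of recasting task-oriented dialogue sub-tasks as a single
# sequence, in the spirit of SimpleTOD. The <|...|> delimiter tokens
# and slot format are illustrative assumptions.

def to_training_sequence(context, belief, actions, response):
    """Flatten one dialogue turn into a single string that a causal LM
    can model left-to-right: context -> belief -> actions -> response."""
    parts = [
        "<|context|> " + " ".join(context),
        "<|belief|> " + ", ".join(f"{d} {s} {v}" for d, s, v in belief),
        "<|action|> " + ", ".join(actions),
        "<|response|> " + response,
    ]
    return " ".join(parts)

turn = to_training_sequence(
    context=["user: i need a cheap hotel in the north"],
    belief=[("hotel", "pricerange", "cheap"), ("hotel", "area", "north")],
    actions=["hotel-inform-choice", "hotel-request-stars"],
    response="there are 5 options . how many stars would you like ?",
)
print(turn)
```

At inference time, the model is prompted with the context segment and generates the remaining segments autoregressively, so a single model covers the whole pipeline.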
Soloist, a new method that uses transfer learning and machine teaching to build task bots at scale, is presented; it parameterizes classical modular task-oriented dialog systems with a Transformer-based auto-regressive language model that subsumes the different dialog modules into a single neural model.
This work proposes a novel task setting to study the ability of both creating and maintaining common ground in dynamic environments, and collects a large-scale dataset of 5,617 dialogues to enable fine-grained evaluation and analysis of various dialogue systems.
This paper proposes a probabilistic dialog model, called the LAtent BElief State (LABES) model, in which belief states are represented as discrete latent variables and jointly modeled with system responses given user inputs, enabling semi-supervised learning under a principled variational framework.
This work introduces modified training objectives for language-model finetuning, employs massive data augmentation via back-translation to increase the diversity of the training data, and examines combining data from multiple sources to improve performance on the target dataset.
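The back-translation augmentation mentioned above round-trips each utterance through a pivot language to obtain paraphrases. The sketch below shows the pipeline shape only; the `translate` function is a hypothetical stand-in (a real system would call an MT model), and the toy phrase table merely simulates the lexical drift a round-trip introduces.

```python
# Sketch of back-translation data augmentation. `translate` is a
# hypothetical stand-in for a machine-translation model; the toy
# phrase table below simulates round-trip paraphrasing.

def translate(text, src, tgt):
    # Stand-in for an MT model: simulate the lexical drift that a
    # pivot-language round trip would introduce.
    toy_paraphrases = {
        "i need a cheap hotel": "i am looking for an inexpensive hotel",
    }
    return toy_paraphrases.get(text, text)

def back_translate(utterances, pivot="de"):
    """Round-trip each utterance through a pivot language and keep the
    paraphrases that differ from the original, increasing data diversity."""
    augmented = []
    for utt in utterances:
        pivoted = translate(utt, "en", pivot)
        restored = translate(pivoted, pivot, "en")
        if restored != utt:
            augmented.append(restored)
    return augmented

print(back_translate(["i need a cheap hotel", "thank you , goodbye"]))
```

The kept paraphrases are added alongside the originals as extra training examples; utterances the round trip leaves unchanged contribute nothing and are dropped.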
GALAXY is a novel pre-trained dialog model that explicitly learns dialog policy from limited labeled dialogs and large-scale unlabeled dialog corpora via semi-supervised learning and has a stronger few-shot ability than existing models under various low-resource settings.