3260 papers • 126 benchmarks • 313 datasets
Achieving a pre-defined goal through dialogue.
A novel latent action framework is proposed that treats the action space of an end-to-end dialog agent as latent variables and develops unsupervised methods to induce the agent's own action space from the data.
This paper explores and quantifies the role of context in different aspects of a dialogue, namely emotion, intent, and dialogue act identification, using state-of-the-art dialogue understanding methods as baselines, and employs various perturbations to distort the context of a given utterance and study their impact on the different tasks and baselines.
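Such context perturbations can be illustrated with a small sketch (not taken from the paper; the function name, perturbation modes, and example dialogue are illustrative assumptions):

```python
import random

def perturb_context(context, mode, seed=0):
    """Distort the dialogue context preceding a target utterance.

    context: list of prior utterance strings in chronological order.
    mode: 'shuffle' reorders the context, 'drop' removes it entirely,
          'truncate' keeps only the most recent utterance.
    """
    rng = random.Random(seed)
    if mode == "shuffle":
        perturbed = context[:]
        rng.shuffle(perturbed)
        return perturbed
    if mode == "drop":
        return []
    if mode == "truncate":
        return context[-1:]
    raise ValueError(f"unknown mode: {mode}")

context = ["Hi, I need a taxi.", "Where to?", "The airport, please."]
print(perturb_context(context, "truncate"))  # ['The airport, please.']
```

Running the same classifier on the original and perturbed contexts and comparing accuracies is the kind of ablation such a study performs.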
A novel application of large language models to user simulation for task-oriented dialog systems, focusing on an in-context learning approach that eliminates the need for labor-intensive rule definition or extensive annotated data, making user simulation more efficient and accessible.
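An in-context-learning user simulator of this kind amounts to assembling a prompt from a goal, a few demonstration dialogues, and the current dialogue, then letting an LLM complete the next user turn. A minimal sketch of the prompt assembly (the function name and prompt layout are illustrative assumptions, not the paper's format):

```python
def build_user_sim_prompt(goal, examples, dialogue_so_far):
    """Assemble an in-context-learning prompt for simulating a user turn.

    goal: the user's task goal, e.g. booking constraints.
    examples: list of (goal, dialogue) demonstration pairs, where each
              dialogue is a list of (speaker, utterance) tuples.
    dialogue_so_far: (speaker, utterance) turns of the current dialogue.
    The completion an LLM produces for this prompt would be taken as the
    simulated user's next utterance.
    """
    parts = ["You are a user talking to a task-oriented dialogue system."]
    for ex_goal, ex_dialogue in examples:
        parts.append(f"Goal: {ex_goal}")
        parts.extend(f"{spk}: {utt}" for spk, utt in ex_dialogue)
        parts.append("")  # blank line between demonstrations
    parts.append(f"Goal: {goal}")
    parts.extend(f"{spk}: {utt}" for spk, utt in dialogue_so_far)
    parts.append("User:")  # the model completes the user's turn
    return "\n".join(parts)

demo = [("find a restaurant", [("System", "Hello, how can I help?"),
                               ("User", "I want cheap Italian food.")])]
prompt = build_user_sim_prompt("book a cheap hotel", demo,
                               [("System", "How can I help you today?")])
```

No rules or annotated training data are needed beyond the handful of demonstrations, which is the efficiency argument the summary makes.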
A novel approach for modeling dialogue context in a recurrent neural network (RNN) based language understanding system that allows encoding context from the dialogue history in chronological order and results in reduced semantic frame error rates.
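The core idea of encoding dialogue history chronologically can be sketched with a plain recurrent cell: the final hidden state, having seen the turns oldest-first, serves as the context vector for understanding the next utterance. This is an illustrative NumPy sketch, not the paper's architecture; the weight shapes and tanh cell are assumptions:

```python
import numpy as np

def encode_history(utterance_vecs, W_h, W_x, b):
    """Fold utterance embeddings into one context vector with a simple
    tanh RNN, processing the history in chronological order so the final
    hidden state summarizes the whole dialogue so far."""
    h = np.zeros(W_h.shape[0])
    for x in utterance_vecs:  # oldest utterance first
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h

rng = np.random.default_rng(0)
d_hid, d_emb = 8, 4
W_h = rng.normal(size=(d_hid, d_hid)) * 0.1
W_x = rng.normal(size=(d_hid, d_emb)) * 0.1
b = np.zeros(d_hid)
history = [rng.normal(size=d_emb) for _ in range(3)]  # 3 prior utterances
context = encode_history(history, W_h, W_x, b)       # shape (8,)
```

Feeding `context` alongside the current utterance into the language-understanding model is what lets the tagger exploit dialogue history.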
This work presents bot#1337, a dialog system developed for the 1st NIPS Conversational Intelligence Challenge 2017 (ConvAI), which won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators.
An RNN-based end-to-end encoder-decoder architecture that is trained with joint embeddings of the knowledge graph and the corpus as input and incorporates a knowledge-graph entity lookup during inference to guarantee that the generated output is grounded in the provided local knowledge graph.
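An entity lookup at inference time can be pictured as filling decoded placeholders from the local knowledge graph, so surface entities always match the graph the model was conditioned on. A toy sketch (the placeholder convention, function name, and example facts are illustrative assumptions, not the paper's mechanism):

```python
def ground_response(template, local_kg):
    """Replace slot placeholders in a decoded response template with
    entities looked up in the local knowledge graph, keeping the surface
    form consistent with the graph's facts."""
    out = template
    for slot, entity in local_kg.items():
        out = out.replace(f"[{slot}]", entity)
    return out

kg = {"movie": "Inception", "director": "Christopher Nolan"}
print(ground_response("[movie] was directed by [director].", kg))
# Inception was directed by Christopher Nolan.
```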
A novel architecture is proposed for integrating KGs into the response generation process by training a BERT model that learns to answer using the elements of the KG (entities and relations) in a multi-task, end-to-end setting.
This work investigates how robust intent classification (IC) and slot labeling (SL) models are to noisy data and designs aggregate data-augmentation approaches that increase model performance across all seven noise types by +10.8% IC accuracy and +15 points SL F1 on average.
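Noise-based data augmentation of this flavor means training on clean utterances plus copies corrupted by each noise type. A small sketch with two assumed noise types, random casing and adjacent-character swaps (the function names, noise types, and rates are illustrative, not the paper's seven):

```python
import random

def casing_noise(text, rng):
    """Randomly upper- or lower-case each word."""
    return " ".join(w.upper() if rng.random() < 0.5 else w.lower()
                    for w in text.split())

def typo_noise(text, rng, rate=0.1):
    """Swap adjacent characters with probability `rate` per position."""
    chars = list(text)
    i = 0
    while i < len(chars) - 1:
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        else:
            i += 1
    return "".join(chars)

def augment(utterances, noise_fns, seed=0):
    """Return the original utterances plus one noisy copy per noise
    type, so IC/SL models see clean and corrupted inputs in training."""
    rng = random.Random(seed)
    out = list(utterances)
    for fn in noise_fns:
        out.extend(fn(u, rng) for u in utterances)
    return out

data = ["book a table for two", "play some jazz music"]
augmented = augment(data, [casing_noise, typo_noise])
```

Aggregating several such noise types into one augmented training set is what the summary's "+10.8% IC / +15 SL F1" averages refer to.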
This work proposes a novel task setting to study the ability of both creating and maintaining common ground in dynamic environments, and collects a large-scale dataset of 5,617 dialogues to enable fine-grained evaluation and analysis of various dialogue systems.