Dialogue act classification is the task of classifying an utterance with respect to the function it serves in a dialogue, i.e. the act the speaker is performing. Dialogue acts are a type of speech acts (for Speech Act Theory, see Austin (1975) and Searle (1969)).
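As a minimal illustration of the task (a toy sketch, not a model from any of the papers below), the following assigns each utterance one of a few hypothetical SwDA-style labels using simple surface heuristics; real systems learn these decisions from annotated dialogue corpora:

```python
# Toy rule-based dialogue act classifier over a small, hypothetical label set.
# The labels and rules are illustrative only.
DIALOGUE_ACTS = ["Statement", "Yes-No-Question", "Backchannel", "Agreement"]

def classify_utterance(utterance: str) -> str:
    """Assign a dialogue act to a single utterance with surface heuristics."""
    text = utterance.strip().lower()
    if text.rstrip(".!") in {"uh-huh", "mm-hmm", "right", "okay"}:
        return "Backchannel"          # short acknowledgement tokens
    if text.endswith("?"):
        return "Yes-No-Question"      # crude: any question mark
    if text.startswith(("yes", "yeah", "i agree")):
        return "Agreement"
    return "Statement"                # default act

print(classify_utterance("Do you like jazz?"))  # Yes-No-Question
print(classify_utterance("uh-huh"))             # Backchannel
```

In practice the surrounding context matters: the same surface form (e.g. "okay") can be a backchannel, an agreement, or a topic shift, which is why the models below condition on neighboring utterances.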
These leaderboards are used to track progress in dialogue act classification.
Use these libraries to find dialogue act classification models and implementations.
Experiments on the SwDA corpus show that the modified CRF layer outperforms the original one, with very wide margins for some DA labels, and visualizations demonstrate that the layer can learn meaningful, sophisticated transition patterns between DA label pairs conditioned on speaker-change in an end-to-end way.
A hierarchical recurrent neural network is built using a bidirectional LSTM as the base unit and a conditional random field as the top layer to classify each utterance into its corresponding dialogue act, thus modeling the dependencies among both labels and utterances, an important consideration in natural dialogue.
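A CRF layer on top of such an encoder picks the jointly best label sequence for the whole dialogue rather than labeling each utterance independently. A minimal Viterbi-decoding sketch (pure Python, with hypothetical emission and transition scores standing in for learned ones) looks like:

```python
def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence.

    emissions: per-utterance list of per-label scores (T x L).
    transitions[i][j]: score of moving from label i to label j
    between consecutive utterances (a CRF would learn these).
    """
    n_labels = len(emissions[0])
    score = list(emissions[0])          # best score ending in each label
    backptr = []                        # best predecessor label per step
    for emit in emissions[1:]:
        new_score, ptrs = [], []
        for j in range(n_labels):
            cands = [score[i] + transitions[i][j] + emit[j]
                     for i in range(n_labels)]
            best_i = max(range(n_labels), key=cands.__getitem__)
            ptrs.append(best_i)
            new_score.append(cands[best_i])
        backptr.append(ptrs)
        score = new_score
    # Backtrack from the best final label.
    best = [max(range(n_labels), key=score.__getitem__)]
    for ptrs in reversed(backptr):
        best.append(ptrs[best[-1]])
    return best[::-1]

# Illustrative scores: transitions reward staying on the same label.
emissions = [[2, 0], [0, 0], [0, 2]]
transitions = [[1, 0], [0, 1]]
print(viterbi_decode(emissions, transitions))  # [0, 0, 1]
```

The speaker-change-conditioned variant described above would, under this sketch, select between two such transition matrices depending on whether the speaker changed between adjacent utterances.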
A pool of various recurrent neural models trained on a dialogue act corpus is proposed to annotate emotion corpora with dialogue act labels, with an ensemble annotator extracting the final dialogue act label.
This paper presents a transfer learning approach for performing dialogue act classification on issue comments, and compares the performance of several word and sentence level encoding models including Global Vectors for Word Representations, Universal Sentence Encoder, and Bidirectional Encoder Representations from Transformers.
A novel probabilistic method of utterance representation, generated from keywords selected for their frequency of association with certain DAs, is presented, and an RNN sentence model for out-of-context DA classification is described.
This work introduces NatCS, a multi-domain collection of spoken customer service conversations based on natural language phenomena observed in real conversations, and demonstrates that dialogue act annotations in NatCS provide more effective training data for modeling real conversations compared to existing synthetic written datasets.
This work proposes a novel context-based learning method to classify dialogue acts using a character-level language model utterance representation, and shows significant improvement in dialogue act detection.
An utterance-level attention-based bidirectional recurrent neural network (Utt-Att-BiRNN) model is proposed to analyze the importance of preceding utterances in classifying the current one, showing that context-based learning not only improves performance but also achieves higher confidence in the classification.
The findings show that SGNNs are effective at capturing low-dimensional semantic text representations, while maintaining high accuracy, and extensive evaluation on dialog act classification shows significant improvement over state-of-the-art results.
This work exploits the effectiveness of a context-aware self-attention mechanism coupled with a hierarchical recurrent neural network to solve the Dialogue Act classification problem as a sequence labeling problem using hierarchical deep neural networks.