3260 papers • 126 benchmarks • 313 datasets
Emotion classification, or emotion categorization, is the task of recognising emotions and assigning them to the corresponding category. Given an input, the goal is to classify it as 'neutral or no emotion' or as one or more of several given emotions that best represent the subject's mental state, as expressed through facial expressions, words, and so on. Example benchmarks include ROCStories, Many Faces of Anger (MFA), and GoEmotions. Models can be evaluated with metrics such as the Concordance Correlation Coefficient (CCC) and the Mean Squared Error (MSE).
(Image credit: Papersgraph)
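Both metrics mentioned above have simple closed forms. A minimal pure-Python sketch (the function names `ccc` and `mse` are my own, not from any of the listed benchmarks):

```python
def ccc(x, y):
    """Concordance Correlation Coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def mse(x, y):
    """Mean Squared Error between predictions and targets."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
```

CCC equals 1 for perfect agreement and −1 for perfect reversal, penalising both scale and location shifts, which is why it is preferred over plain correlation for continuous emotion ratings.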
These leaderboards are used to track progress in emotion classification.
Use these libraries to find emotion classification models and implementations.
No subtasks available.
GoEmotions, the largest manually annotated dataset of 58k English Reddit comments labeled for 27 emotion categories or Neutral, is introduced, and the high quality of the annotations is demonstrated via Principal Preserved Component Analysis.
It is argued that careful implementation of modern CNN architectures, use of current regularization methods, and visualization of previously hidden features are necessary to reduce the performance gap between slow models and real-time architectures.
The proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories on the IEMOCAP dataset, with accuracies ranging from 68.8% to 71.8%.
This paper proposes a Bi-LSTM architecture equipped with a multi-layer self-attention mechanism that improves model performance and identifies salient words in tweets, offering insight into the models and making them more interpretable.
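The attention idea in this snippet can be illustrated with a minimal generic sketch: score each token's hidden state against a query vector, softmax the scores into weights, and pool the states. The weights double as per-word saliency scores. This is a single-head, pure-Python illustration with hypothetical names, not the paper's implementation:

```python
import math

def attention_pool(hidden_states, query):
    """Weight each token's hidden state by its softmaxed dot-product
    score against `query`, and return the pooled context vector plus
    the weights (interpretable as word saliency)."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights
```

In the full model, `hidden_states` would come from the Bi-LSTM and `query` would be a learned parameter; inspecting `weights` is what makes the salient words visible.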
A fundamentally different approach to the emotion recognition task that incorporates facial landmarks into the classification loss function and outperforms state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%.
EmoTxt, the first open-source toolkit supporting both emotion recognition from text and training of custom emotion classification models, is presented, and empirical evidence of the performance of EmoTxt is provided.
A new method based on recurrent neural networks that keeps track of the individual party states throughout the conversation, uses this information for emotion classification, and outperforms the state of the art by a significant margin on two different datasets.
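The party-state idea can be sketched as a loop that carries one running state per speaker: each utterance is classified from its encoding plus its speaker's current state, and the state is then updated. Here `encode`, `update`, and `classify` are hypothetical stand-ins for the learned modules, not the paper's code:

```python
def classify_conversation(utterances, encode, update, classify):
    """Classify each (speaker, text) utterance using a per-speaker
    state that accumulates that party's conversational history."""
    states = {}   # speaker -> state vector
    labels = []
    for speaker, text in utterances:
        state = states.get(speaker, [0.0])       # fresh state for a new speaker
        feat = encode(text)
        labels.append(classify(feat, state))     # prediction sees speaker history
        states[speaker] = update(state, feat)    # fold this turn into the state
    return labels
```

The key point is that each speaker's state evolves independently, so the same words can be classified differently depending on who says them and what that party said earlier.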
A framework that exploits acoustic information in tandem with lexical data, using two bi-directional long short-term memory (BLSTM) networks to obtain hidden representations of the utterance, together with an attention mechanism, referred to as multi-hop, trained to automatically infer the correlation between the modalities.
Through its graph network, DialogueGCN addresses the context propagation issues present in current RNN-based methods; the authors empirically show that the method alleviates these issues while outperforming the current state of the art on a number of benchmark emotion classification datasets.
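The graph construction behind this kind of approach can be sketched as connecting each utterance node to its neighbours within a fixed context window, with each edge typed by the ordered speaker pair so that speaker-dependent relations get distinct edge types. The window size and relation scheme below are illustrative assumptions, not the paper's exact settings:

```python
def build_context_edges(speakers, window=2):
    """Given the speaker of each utterance in order, return directed
    edges (i, j, relation) linking every utterance to the utterances
    within `window` positions of it; relation is the ordered speaker
    pair, so A->B and B->A edges are typed differently."""
    edges = []
    n = len(speakers)
    for i in range(n):
        lo, hi = max(0, i - window), min(n - 1, i + window)
        for j in range(lo, hi + 1):
            if i != j:
                edges.append((i, j, (speakers[i], speakers[j])))
    return edges
```

A relational graph convolution over these typed edges is what lets context flow between nearby utterances without the long sequential paths that trouble RNN-based methods.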
Experimental outcomes indicate that XLM-R outperforms all other techniques, achieving the highest weighted F1-score of 69.73% on the test data.