3260 papers • 126 benchmarks • 313 datasets
The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and, frequently, real-world facts. Source: Attentional Multi-Reading Sarcasm Detection
These leaderboards are used to track progress in Sarcasm Detection
Use these libraries to find Sarcasm Detection models and implementations
No subtasks available.
This paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations and obtain state-of-the-art performance on 8 benchmark datasets within emotion, sentiment and sarcasm detection using a single pretrained model.
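As a toy illustration of that distant-supervision idea, the Python sketch below assigns noisy emotion/sentiment/sarcasm labels from surface cues; the hashtag and emoji rules here are invented for illustration and are not the paper's actual heuristics.

```python
# Toy distant supervision with a diverse noisy label set: heuristic patterns
# (hypothetical, not the paper's rules) assign labels without human annotation.
import re

NOISY_RULES = {
    "sarcasm": [r"#sarcasm\b", r"#irony\b"],
    "joy": [r"#happy\b", r"😂"],
    "anger": [r"#angry\b", r"😡"],
}

def distant_labels(tweet: str) -> list[str]:
    """Return every noisy label whose pattern fires on the tweet."""
    return [label for label, patterns in NOISY_RULES.items()
            if any(re.search(p, tweet, re.IGNORECASE) for p in patterns)]

print(distant_labels("Great, another Monday 😡 #sarcasm"))  # ['sarcasm', 'anger']
```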
The Self-Annotated Reddit Corpus (SARC) is introduced, a large corpus for sarcasm research and for training and evaluating sarcasm detection systems, and baseline methods are evaluated.
This work proposes to automatically learn and then exploit user embeddings, to be used in concert with lexical signals to recognize sarcasm, and shows that the model outperforms a state-of-the-art approach leveraging an extensive set of carefully crafted features.
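A minimal PyTorch sketch of that idea, assuming mean-pooled word embeddings as the lexical signal; module names and dimensions are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: a learned per-author embedding is used in concert with
# a lexical representation of the utterance to classify sarcasm.
import torch
import torch.nn as nn

class UserAwareSarcasmClassifier(nn.Module):
    def __init__(self, vocab_size, n_users, embed_dim=100, user_dim=50):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.user_embed = nn.Embedding(n_users, user_dim)  # one vector per author
        self.classifier = nn.Linear(embed_dim + user_dim, 2)

    def forward(self, token_ids, user_id):
        # Lexical signal: mean-pooled word embeddings of the utterance.
        text_vec = self.word_embed(token_ids).mean(dim=1)
        # Concatenate the author's embedding with the lexical representation.
        combined = torch.cat([text_vec, self.user_embed(user_id)], dim=-1)
        return self.classifier(combined)

model = UserAwareSarcasmClassifier(vocab_size=10_000, n_users=5_000)
logits = model(torch.randint(0, 10_000, (1, 12)), torch.tensor([42]))
```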
This work develops models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection, and addresses the often ignored generalizability issue of classifying data that the models have not seen during training.
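A rough sketch under stated assumptions: frozen pre-trained CNNs serve as feature extractors whose sentiment, emotion, and personality vectors are concatenated for a sarcasm classifier. All names and dimensions below are illustrative, not the paper's architecture.

```python
# Sketch: one CNN feature extractor per auxiliary task, concatenated features
# fed to a linear sarcasm classifier. Each extractor is assumed pre-trained.
import torch
import torch.nn as nn

class CNNFeatureExtractor(nn.Module):
    def __init__(self, embed_dim=100, n_filters=64, out_dim=100):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        self.proj = nn.Linear(n_filters, out_dim)

    def forward(self, x):                             # x: (batch, seq, embed_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, filters, seq)
        return self.proj(h.max(dim=2).values)         # max-over-time pooling

# Three extractors, each assumed pre-trained on its own auxiliary task.
sentiment, emotion, personality = (CNNFeatureExtractor() for _ in range(3))
classifier = nn.Linear(3 * 100, 2)

x = torch.randn(1, 20, 100)                    # one 20-token utterance
features = torch.cat([sentiment(x), emotion(x), personality(x)], dim=-1)
logits = classifier(features)
```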
A hybrid neural network architecture with an attention mechanism is proposed that provides insights into what actually makes sentences sarcastic and improves upon the baseline by ~5% in classification accuracy.
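One way such an attentive model can be sketched in PyTorch (word-level attention over a BiLSTM; names and sizes are assumptions, not the paper's code): the returned attention weights can be inspected to see which words the model treats as sarcasm cues.

```python
# Minimal attentive classifier: a BiLSTM encodes the sentence, a learned
# scoring layer attends over time steps, and the weights expose salient words.
import torch
import torch.nn as nn

class AttentiveSarcasmNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each time step
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))     # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)
        return self.out(context), weights           # weights highlight cue words
```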
An LSTM-based model is proposed that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process and showing a 5-10% performance improvement over the state of the art along with strong generalizability.
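A rough sketch of the contextual idea, not the paper's code: assuming each utterance has already been encoded to a vector, a bidirectional LSTM runs over the sequence of utterance vectors so every utterance sees its surroundings in the same video.

```python
# Sketch: a context LSTM over pre-computed utterance vectors yields a
# per-utterance sarcasm prediction informed by neighboring utterances.
import torch
import torch.nn as nn

class ContextualUtteranceLSTM(nn.Module):
    def __init__(self, utt_dim=300, hidden=128):
        super().__init__()
        # Bidirectional, so an utterance gets context from before and after.
        self.context_lstm = nn.LSTM(utt_dim, hidden, bidirectional=True,
                                    batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, utterance_vecs):              # (videos, utterances, utt_dim)
        contextual, _ = self.context_lstm(utterance_vecs)
        return self.out(contextual)                 # per-utterance sarcasm logits
```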
This work investigates several types of Long Short-Term Memory networks that can model both the conversation context and the sarcastic response, and shows that the conditional LSTM network (Rocktäschel et al. 2015) and LSTM networks with sentence-level attention on context and response outperform the LSTM model that reads only the response.
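An illustrative conditional-encoding sketch in the spirit of Rocktäschel et al. (2015): one LSTM reads the conversation context, and its final state initializes a second LSTM that reads the response. Everything below is an assumption-laden sketch, not the authors' implementation.

```python
# Conditional LSTM sketch: the context encoder's final (h, c) state conditions
# the response encoder, whose last hidden state feeds the classifier.
import torch
import torch.nn as nn

class ConditionalLSTM(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.context_lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.response_lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, context_ids, response_ids):
        # Encode the context; its final state initializes the response reader.
        _, state = self.context_lstm(self.embed(context_ids))
        _, (h_n, _) = self.response_lstm(self.embed(response_ids), state)
        return self.out(h_n[-1])                    # sarcastic vs. not
```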
This work presents the first English-Hindi code-mixed dataset of tweets marked for the presence of sarcasm and irony, where each token is also annotated with a language tag, and presents a baseline supervised classification system developed using the same dataset.
This work trains a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data, and reaches a state-of-the-art average accuracy, greater than a 7% improvement over Gopher.
The methodology and results of the UTNLP team in the SemEval-2022 shared task 6 on sarcasm detection are presented, and the best approach achieved an F1-score of 0.38 in the competition's evaluation phase.