3260 papers • 126 benchmarks • 313 datasets
Rumor detection is the task of identifying rumors, i.e. statements whose veracity is not quickly, if ever, confirmed, in posts on social media platforms.
(Image credit: Papersgraph)
These leaderboards are used to track progress in rumour detection.
Use these libraries to find rumour detection models and implementations.
No subtasks available.
A novel approach to rumour detection that learns from the sequential dynamics of reporting during breaking news on social media to detect rumours in new stories. It achieves competitive performance, beating a state-of-the-art classifier that relies on querying tweets, with improved precision and recall, and outperforms the best baseline.
An LSTM-based sequential model is proposed that, by modelling the conversational structure of tweets, achieves an accuracy of 0.784 on the RumourEval test set, outperforming all other systems in Subtask A.
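The sequential idea can be sketched in miniature: an LSTM cell consumes the tweets of a conversation branch in order, and the final hidden state feeds a softmax head over the four SDQC stance labels (support, deny, query, comment). Everything below is a pure-Python illustration with randomly initialised, untrained weights; the names `TinyLSTM` and `classify_branch` and all dimensions are invented for the sketch, not taken from the RumourEval system.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTM:
    """Minimal pure-Python LSTM cell, for illustration only."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]
        # one weight matrix and bias per gate: input, forget, output, candidate
        self.W = {g: mat(n_hidden, n_in + n_hidden) for g in "ifoc"}
        self.b = {g: [0.0] * n_hidden for g in "ifoc"}
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = x + h  # list concatenation: [input features, previous hidden]
        def gate(g, act):
            return [act(sum(w * v for w, v in zip(row, z)) + b)
                    for row, b in zip(self.W[g], self.b[g])]
        i, f, o = gate("i", sigmoid), gate("f", sigmoid), gate("o", sigmoid)
        g = gate("c", math.tanh)
        c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
        h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
        return h_new, c_new

    def run(self, sequence):
        h = [0.0] * self.n_hidden
        c = [0.0] * self.n_hidden
        for x in sequence:  # one step per tweet in the branch, in reply order
            h, c = self.step(x, h, c)
        return h

def classify_branch(branch_features, n_classes=4, seed=0):
    """Return softmax scores over the four SDQC stance labels."""
    lstm = TinyLSTM(n_in=len(branch_features[0]), n_hidden=8, seed=seed)
    h = lstm.run(branch_features)
    rng = random.Random(seed + 1)
    logits = [sum(rng.uniform(-0.1, 0.1) * v for v in h) for _ in range(n_classes)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

A real system would learn the weights by backpropagation and feed per-tweet text features (e.g. word embeddings) rather than toy vectors.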
This paper examines the performance of a broad set of modern transformer-based language models and shows that with basic fine-tuning, these models are competitive with and can even significantly outperform recently proposed state-of-the-art methods.
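At its simplest, the "basic fine-tuning" recipe amounts to attaching a linear classification head to a pretrained encoder and minimising cross-entropy. The sketch below trains only such a head with plain SGD over fixed feature vectors standing in for transformer embeddings; `train_head` and `predict` are hypothetical names, and real fine-tuning would also update the encoder's weights.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_head(features, labels, n_classes, lr=0.5, epochs=200, seed=0):
    """Train a linear classification head with cross-entropy loss via SGD."""
    rng = random.Random(seed)
    n_in = len(features[0])
    W = [[rng.uniform(-0.01, 0.01) for _ in range(n_in)] for _ in range(n_classes)]
    b = [0.0] * n_classes
    for _ in range(epochs):
        for x, y in zip(features, labels):
            logits = [sum(w * v for w, v in zip(row, x)) + bb
                      for row, bb in zip(W, b)]
            p = softmax(logits)
            # gradient of cross-entropy w.r.t. logits is p - one_hot(y)
            for k in range(n_classes):
                g = p[k] - (1.0 if k == y else 0.0)
                b[k] -= lr * g
                for j in range(n_in):
                    W[k][j] -= lr * g * x[j]
    return W, b

def predict(W, b, x):
    logits = [sum(w * v for w, v in zip(row, x)) + bb for row, bb in zip(W, b)]
    return max(range(len(logits)), key=lambda k: logits[k])
```

On a toy separable problem (two clusters of "rumour" vs "non-rumour" embeddings), a few hundred SGD passes suffice for the head to classify the training points correctly.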
This work proposes two recursive neural models for rumor representation learning and classification, based on bottom-up and top-down tree-structured neural networks that naturally conform to the propagation layout of tweets.
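The bottom-up variant can be sketched as a recursion over the propagation tree: each tweet's hidden state mixes its own feature vector with the pooled hidden states of its replies, so information flows from the leaves to the source tweet. The version below is deliberately simplified (a single tanh layer with one scalar child weight, and all names invented for the sketch); the paper's models use gated recursive units.

```python
import math
import random

class Node:
    """One tweet in the propagation tree, with its replies as children."""
    def __init__(self, features, children=None):
        self.features = features
        self.children = children or []

def bottom_up_encode(node, W_self, w_child):
    """Encode a propagation tree bottom-up: a node's state combines its own
    features with the summed states of its (recursively encoded) children."""
    n_hidden = len(W_self)
    pooled = [0.0] * n_hidden
    for child in node.children:
        h = bottom_up_encode(child, W_self, w_child)
        for k in range(n_hidden):
            pooled[k] += h[k]
    return [math.tanh(sum(w * v for w, v in zip(W_self[k], node.features))
                      + w_child * pooled[k])
            for k in range(n_hidden)]

def random_matrix(rows, cols, seed=0):
    rng = random.Random(seed)
    return [[rng.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]
```

The root's encoding summarises the whole conversation and would feed a rumour classifier; a top-down counterpart would instead pass state from the source tweet outward along each reply chain.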
A new approach to this task explores conversation-based and affective features covering different facets of affect, demonstrating the effectiveness of the proposed feature set.
This paper describes the system submitted to SemEval 2019 for classifying whether posts from Twitter and Reddit support, deny, query, or comment on a hidden rumour, the truthfulness of which is the topic of an underlying discussion thread.
This paper describes the submission to SemEval-2019 Task 7 (RumourEval: Determining Rumor Veracity and Support for Rumors), provides results and analysis of the system's performance, and presents ablation experiments.
This thesis creates a stance-annotated Reddit dataset for Danish and implements various models for stance classification, showing that stance labels can be transferred across languages and platforms and used with an HMM to predict the veracity of rumours.
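The idea of predicting veracity from stance labels with an HMM can be illustrated with the forward algorithm: score a branch's stance sequence under a "true rumour" HMM and a "false rumour" HMM, and pick whichever assigns higher likelihood. All parameters below are toy values invented for the sketch (true rumours assumed support-heavy, false rumours deny-heavy), not the thesis's learned models.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Forward algorithm: log-likelihood of an observation sequence under an HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# stance symbols: 0 = support, 1 = deny, 2 = query, 3 = comment
# two hidden states per model; emission rows favour support (true model)
# or denial (false model) — all numbers are made up for illustration
TRUE_HMM = ([0.8, 0.2],
            [[0.9, 0.1], [0.2, 0.8]],
            [[0.6, 0.1, 0.1, 0.2], [0.3, 0.2, 0.2, 0.3]])
FALSE_HMM = ([0.8, 0.2],
             [[0.9, 0.1], [0.2, 0.8]],
             [[0.1, 0.6, 0.1, 0.2], [0.2, 0.3, 0.2, 0.3]])

def predict_veracity(stance_seq):
    """Classify a rumour by comparing likelihoods under the two HMMs."""
    lt = forward_loglik(stance_seq, *TRUE_HMM)
    lf = forward_loglik(stance_seq, *FALSE_HMM)
    return "true" if lt > lf else "false"
```

Because only the stance symbols are observed, the same two models apply unchanged to Danish Reddit threads or English tweets, which is the cross-language, cross-platform point the thesis makes.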
The best-performing method is a unified approach that automatically corrects for label noise using a variant of positive-unlabelled learning, finding instances that were incorrectly labelled as not check-worthy.
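One minimal way to "find instances incorrectly labelled as not check-worthy" in the positive-unlabelled spirit: score every unlabelled item against the known positives and flag those scoring at least as high as most positives. The sketch below uses similarity to the positive centroid as a stand-in scorer with a hypothetical `find_mislabelled` helper; actual PU learning methods train a probabilistic classifier and calibrate it, so this is a drastic simplification.

```python
def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points[0])
    return [sum(p[j] for p in points) / len(points) for j in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def find_mislabelled(positives, unlabelled, quantile=0.1):
    """Return indices of unlabelled items whose similarity to the positive
    centroid is at least the lower `quantile` of the positives' own scores,
    i.e. items that look like positives despite their 'negative' label."""
    c = centroid(positives)
    pos_scores = sorted(dot(p, c) for p in positives)
    threshold = pos_scores[int(quantile * len(pos_scores))]
    return [i for i, u in enumerate(unlabelled) if dot(u, c) >= threshold]
```

Here unlabelled items that cluster with the known check-worthy examples are surfaced as candidates for relabelling, which is the intuition behind treating "not check-worthy" as unlabelled rather than truly negative.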
Adding a benchmark result helps the community track progress.